CN110692061A - Apparatus and method for providing summary information using artificial intelligence model - Google Patents

Info

Publication number
CN110692061A
CN110692061A (application CN201880035705.3A)
Authority
CN
China
Prior art keywords: document, information, summary information, documents, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880035705.3A
Other languages
Chinese (zh)
Inventor
黄陈煐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Priority claimed from PCT/KR2018/008759 (WO2019027259A1)
Publication of CN110692061A
Legal status: Pending

Classifications

    • G06F 16/338: Presentation of query results (information retrieval; querying of unstructured textual data)
    • G06F 16/345: Summarisation for human users (browsing/visualisation of unstructured textual data)
    • G06F 40/216: Parsing using statistical methods (natural language analysis)
    • G06N 20/00: Machine learning
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks (neural network architectures)
    • G06N 3/045: Combinations of networks (neural network architectures)
    • G06N 3/084: Backpropagation, e.g. using gradient descent (neural network learning methods)
    • G06N 5/046: Forward inferencing; production systems (inference or reasoning models)

Abstract

An artificial intelligence system uses a machine learning algorithm to provide summary information of a document by providing the document as input to an artificial intelligence learning model trained to obtain summary information.

Description

Apparatus and method for providing summary information using artificial intelligence model
Technical Field
Embodiments of the present disclosure relate to an electronic device providing summary information and a control method thereof, and more particularly, to an electronic device providing summary information related to at least one of a plurality of documents searched based on keywords and a control method thereof.
Embodiments of the present disclosure also relate to an Artificial Intelligence (AI) system that uses machine learning algorithms to replicate functions of the human brain, such as recognition and judgment, and to applications thereof.
Background
In recent years, Artificial Intelligence (AI) systems imitating human-level intelligence have been widely used in various fields. Unlike conventional rule-based intelligent systems, an AI system learns, judges, and improves on its own. As an AI system is used more, its recognition accuracy increases, so user preferences can be understood more accurately. Therefore, rule-based intelligent systems have gradually been replaced by deep-learning-based AI systems.
AI techniques include machine learning (e.g., deep learning) and underlying techniques that utilize machine learning.
Machine learning can be described as an algorithmic technique that classifies input data and learns its characteristics. The underlying technologies mimic cognitive functions of the human brain (such as recognition and judgment) using machine learning algorithms such as deep learning, and span technical fields including language understanding, visual understanding, inference/prediction, knowledge representation, and motion control.
The functions of artificial intelligence technology are applied in various fields. Language understanding is a technique for recognizing and applying/processing human language and characters, and includes natural language processing, machine translation, dialog systems, question answering, speech recognition/synthesis, and the like. Visual understanding is a technique for recognizing and processing objects as human vision does, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image enhancement, and the like. Inference/prediction is a technique for judging information and logically inferring and predicting from it, and includes knowledge/probability-based inference, optimized prediction, preference-based planning, and recommendation. Knowledge representation is a technique for automating human experience information into knowledge data, and includes knowledge construction (data generation/classification) and knowledge management (data utilization). Motion control is a technique for controlling the autonomous driving of a vehicle and the motion of a robot, and includes movement control (navigation, collision avoidance, driving), operation control (behavior control), and the like.
In recent years, techniques for summarizing documents and providing summary information (e.g., summary text) have been developed. In particular, recent electronic devices or programs can provide summary information by summarizing a document using a summary model obtained by artificial intelligence learning.
Accordingly, there is a need to provide a user with various user experiences through a summarization function to summarize documents using a summarization model.
Disclosure of Invention
Technical problem
Embodiments of the present disclosure provide an electronic device for selecting at least one of a plurality of documents returned as a result of a keyword-based search and providing summary information of the documents, and a control method thereof.
Solution to the problem
According to one embodiment, there is provided a method by which a server provides summary information using an artificial intelligence learning model, the method comprising: in response to receiving a search request including a keyword, searching a plurality of documents based on the keyword; in response to receiving a request for summary information of a document of the plurality of documents, using the document as input to obtain the summary information of the document from an artificial intelligence learning model trained to obtain the summary information of the document; and providing the summary information of the document to the electronic device.
According to an embodiment, there is provided a computer readable medium having stored thereon a program for executing a method for providing summary information using an artificial intelligence learning model, the method including: in response to receiving a search request including a keyword, searching a plurality of documents based on the keyword; in response to receiving a request for summary information of a document of a plurality of documents, using the document as input to obtain the summary information of the document from an artificial intelligence learning model trained to obtain the summary information of the document; and providing the summary information of the document to the electronic device.
According to one embodiment, there is provided a method of providing summary information using an artificial intelligence learning model, the method comprising: receiving an input of a keyword while displaying the first document; searching a plurality of documents based on the keywords in response to receiving a search request for searching the documents based on the keywords; in response to receiving a user instruction to insert summary information about a second document of the plurality of documents into the first document, obtaining, using the second document as input, summary information for the second document relating to the keyword from an artificial intelligence learning model trained to obtain the summary information for the second document; and inserting the obtained summary information of the second document into the first document.
According to one embodiment, there is provided a computer-readable recording medium having stored thereon a program for executing a method of providing summary information using an artificial intelligence learning model, the method comprising: receiving an input of a keyword while displaying the first document, and displaying the keyword together with the first document; searching a plurality of documents based on the keyword in response to receiving a search request for searching the documents based on the keyword; in response to receiving a user instruction to insert summary information about a second document of the plurality of documents into the first document, obtaining, using the second document as input, summary information of the second document relating to the keyword from an artificial intelligence learning model trained to obtain the summary information of the second document; and inserting the obtained summary information of the second document into the first document.
Advantageous effects of the invention
With the various embodiments described above, an electronic device may obtain summary information related to keywords and use the obtained summary information to provide various user experiences. In addition, the electronic device may provide summary information applicable to the user's tendencies or intellectual abilities.
Drawings
The foregoing and/or other aspects of the present disclosure will become more apparent by describing certain embodiments thereof with reference to the attached drawings, wherein:
FIG. 1 is a usage diagram that provides summary information according to one embodiment;
FIG. 2A is a block diagram illustrating an electronic device according to one embodiment;
fig. 2B is a block diagram illustrating a configuration of an electronic device according to one embodiment;
FIG. 2C is a block diagram of an electronic device according to one embodiment;
fig. 3, 4, 5, and 6 are flow diagrams illustrating methods of providing summary information according to various embodiments;
FIG. 7 is a diagram illustrating insertion of summary text, according to one embodiment;
FIG. 8 is a diagram illustrating setting the length and mood of a summary text, according to one embodiment;
FIG. 9 is a diagram illustrating setting the length and mood of a summary text based on user history according to one embodiment;
FIG. 10 is a diagram illustrating providing summary text for summarizing a received document, according to one embodiment;
FIG. 11 is a diagram illustrating words included in search summary text according to one embodiment;
FIG. 12 is a diagram illustrating providing summary information, according to one embodiment;
fig. 13 is a block diagram showing a configuration of an electronic apparatus according to an embodiment;
fig. 14A and 14B are block diagrams showing the configuration of a learning unit and a summary unit according to an embodiment;
FIG. 15 is a flow diagram of a method of inserting summary information according to one embodiment;
FIG. 16, FIG. 17, FIG. 18, and FIG. 19 are flow diagrams illustrating methods of using a network system that uses a summary model, according to various embodiments; and
Fig. 20 and 21 are flow diagrams illustrating methods of a server and an electronic device providing summary information according to various embodiments.
Detailed Description
The terms and words used in the following specification and claims are not limited to the written meaning, but are used only by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it will be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
In this document, the expressions "having", "may have", "include" or "may include" may be used to indicate the presence of a feature (e.g. a value, a function, an operation), but does not exclude the presence of other features.
In this document, the expressions "a or B", "at least one of a and/or B" and "one or more of a and/or B" and the like comprise all possible combinations of the listed items. For example, "a or B," "at least one of a and B," or "at least one of a or B" includes (1) only a, (2) only B, or (3) a and B.
Terms such as "first," "second," and the like may be used to describe various elements, but the elements should not be limited by these terms. These terms are only used for the purpose of distinguishing between different elements.
A component (e.g., a first component) that is "operably or communicatively coupled with/to" another component (e.g., a second component) may be directly connected to the other component or may be connected via yet another component (e.g., a third component). On the other hand, when a component (e.g., a first component) is "directly connected" or "directly coupled" to another component (e.g., a second component), there is no third component between them.
Herein, the expression "configured to" may be used interchangeably with, for example, "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of". The expression "configured to" does not necessarily mean "specially designed" at the hardware level. Rather, in some cases, "a device configured to" may indicate that the device can perform an operation together with another device or component. For example, the expression "processor configured to perform A, B, and C" may indicate a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a general-purpose processor (e.g., a Central Processing Unit (CPU) or an Application Processor (AP)) that can perform the corresponding operations by executing one or more software programs stored in a memory device.
The electronic apparatus and the external device according to various embodiments of the present disclosure may include, for example, at least one of a smartphone, a tablet Personal Computer (PC), a mobile phone, a video phone, an e-book reader, a desktop computer, a laptop computer, a netbook computer, a workstation, a server, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), an MP3 player, a medical device, a camera, or a wearable device. Wearable devices may include accessory types (e.g., watches, rings, bracelets, anklets, necklaces, glasses, contact lenses, or Head Mounted Devices (HMDs)); fabric- or garment-integrated types (e.g., skin pads or tattoos); or a bio-implantable circuit. In some embodiments, the electronic device can be, for example, a television, a Digital Video Disc (DVD) player, an audio system, a refrigerator, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air purifier, a set-top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™ or PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame.
In other embodiments, the electronic devices and external devices may include various medical devices (e.g., various portable medical measurement devices such as a blood glucose meter, a heart rate meter, a blood pressure meter, or a body temperature measurement device; Magnetic Resonance Angiography (MRA), Magnetic Resonance Imaging (MRI), Computed Tomography (CT), or ultrasound devices, etc.), navigation systems, Global Navigation Satellite Systems (GNSS), Event Data Recorders (EDR), Flight Data Recorders (FDR), vehicle infotainment devices, marine electronics (e.g., marine navigation devices, gyroscopic compasses, etc.), avionics, security devices, car head units, industrial or home robots, drones, Automated Teller Machines (ATMs), point of sale (POS) devices, or Internet of Things (IoT) devices (e.g., light bulbs, sensors, sprinkler devices, fire alarms, thermostats, street lamps, toasters, fitness equipment, hot water tanks, heaters, boilers, etc.).
In the present disclosure, the term "user" may refer to a person using an electronic device or device (e.g., an artificial intelligence electronic device).
As shown in fig. 1, the electronic device 100 may display a first document 10 in a first area. At this time, the first document includes text, and may include images and videos in addition to the text. As shown in (a) of fig. 1, while the first document 10 is displayed in the first area, the electronic device 100 may receive a user instruction to input a keyword to the search window 20. The electronic device 100 may display the keyword on the search window 20 in response to the user instruction.
When a search command (e.g., a command for selecting a search icon) for a keyword input in the search window 20 is received, the electronic apparatus 100 searches a plurality of documents based on the keyword. At this time, the electronic apparatus 100 may search for a plurality of documents stored in the electronic apparatus 100 based on the keyword, but this is merely exemplary and an external server may be used to search for a plurality of documents. The plurality of documents may be stored on an external server, or may be stored on one or more servers and/or devices.
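As a rough illustration of the keyword-based search step described above, the following Python sketch ranks documents by keyword occurrences. The in-memory document store and the occurrence-count scoring are assumptions for illustration only, not the disclosed implementation (which may query local storage or an external server).

```python
def search_documents(documents, keyword):
    """Return documents containing the keyword, ranked by occurrence count."""
    hits = [(doc, doc.lower().count(keyword.lower())) for doc in documents]
    hits = [(doc, n) for doc, n in hits if n > 0]   # keep only matches
    hits.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in hits]

# Hypothetical document store standing in for local or server-side storage.
docs = [
    "Artificial intelligence systems learn from data.",
    "A summary model condenses a document into summary text.",
    "Weather was sunny today.",
]
results = search_documents(docs, "summary")
```

In practice the ranking would come from a full-text index rather than substring counts, but the interface (keyword in, ranked document list out) matches the flow described above.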
As shown in (b) of fig. 1, the electronic apparatus 100 may display, in the second area, a list 30 identifying or indicating the plurality of documents returned as a search result while the first document 10 is displayed in the first area. At this time, for each document, the list 30 may display a partial area of the document's text that includes the keyword, and may display a thumbnail.
The electronic device 100 may receive a user command for selecting at least one document 40 of the plurality of documents included in the list 30 and inserting the selected at least one document 40 into the first document. As an example, as shown in (c) of fig. 1, the electronic apparatus 100 may receive a user command to select at least one document 40 among the documents included in the list 30 and drag the content of the document to a point in the first document.
When a user command for selecting at least one of the plurality of documents and inserting the selected document into the first document 10 is received, the electronic device 100 may acquire summary information summarizing the selected at least one document 40. At this time, the electronic device 100 may input the at least one document 40 to an AI learning model (e.g., a document summary model) trained to obtain summary information about the selected at least one document 40. At this time, the summary information may include various information such as a summary text, a summary image, a summary video, and the like.
The AI learning model can generate summary information related to a keyword based on receiving the keyword as an input. That is, the AI learning model can employ keywords to generate summary information.
When a user command for inserting a plurality of documents of the plurality of documents into the first document 10 is received, the AI model may obtain summary information about the documents centering on a sentence (or text) commonly included in all the documents. That is, the AI learning model may generate summary information from statements (or text or any content) that are common to multiple documents.
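The idea of centering the summary on content common to all selected documents can be sketched as follows. Splitting sentences on periods and comparing them by exact normalized match are simplifying assumptions; the actual model would compare meaning, not surface strings.

```python
def common_sentences(documents):
    """Return sentences (normalized) that appear in every selected document."""
    sentence_sets = [
        {s.strip().lower() for s in doc.split(".") if s.strip()}
        for doc in documents
    ]
    # Sentences present in all documents form the core of the summary.
    shared = set.intersection(*sentence_sets)
    return sorted(shared)

doc_a = "AI summarizes documents. Models are trained on data."
doc_b = "Search finds documents. AI summarizes documents."
shared = common_sentences([doc_a, doc_b])
```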
Further, if the selected document 40 has a table of contents, the AI model may summarize the selected document and obtain summary information based on the table of contents included in the selected document 40. That is, the artificial intelligence learning model may generate summary information from text or sentences included in the summary or conclusion sections indicated by the table of contents.
In one embodiment, the electronic device 100 may display a User Interface (UI) for setting the length and tone of the summary information. At this time, if the length or tone of the summary information is set through the UI, the AI learning model may generate the summary information based on the set tone and length. For example, if the UI is set to generate summary information in a negative tone, the AI learning model may assign a higher weight to negatively toned words when generating the summary information. As another example, if the UI is set to generate long summary information, the AI learning model may extract more than a predetermined number of words or phrases to generate the summary information.
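A minimal sketch of how tone and length settings might bias an extractive summarizer follows. The word lists, weights, and scoring function are illustrative assumptions, not the patent's trained model, which would learn such weightings rather than hard-code them.

```python
# Hypothetical tone lexicons; a real model would learn these associations.
NEGATIVE_WORDS = {"decline", "risk", "failure"}
POSITIVE_WORDS = {"growth", "success", "benefit"}

def score_sentence(sentence, tone="neutral", tone_weight=2.0):
    """Score a sentence, boosting it when it matches the requested tone."""
    words = set(sentence.lower().split())
    score = 1.0
    if tone == "negative":
        score += tone_weight * len(words & NEGATIVE_WORDS)
    elif tone == "positive":
        score += tone_weight * len(words & POSITIVE_WORDS)
    return score

def summarize(sentences, tone="neutral", length=1):
    """Keep the top-scoring sentences; 'length' controls summary size."""
    ranked = sorted(sentences, key=lambda s: score_sentence(s, tone),
                    reverse=True)
    return ranked[:length]

sentences = [
    "Sales show strong growth this quarter",
    "Analysts warn of a decline in demand",
]
negative_summary = summarize(sentences, tone="negative", length=1)
```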
In another embodiment, the electronic device 100 may obtain historical information about the user, and the AI model may generate summary information based on the user historical information or demographic information. Specifically, the AI learning model may set the tone or length of summary information based on user historical information (e.g., the user's political tendencies, knowledge level, etc.). For example, if it is determined from the user history information that the user has a progressive political tendency, the AI learning model may assign higher weights to words with progressive connotations (e.g., progress, welfare, distribution, etc.) to generate summary information. As another example, when it is determined from the user history information that the user's expertise in the document's subject is low, the AI learning model may generate summary information favoring simple words and more detailed explanations.
If the summary information is obtained, the electronic device 100 may insert the obtained summary information into the first document 10. Specifically, as shown in (d) of fig. 1, the electronic apparatus 100 may insert the summary information 50 at the point in the first document 10 indicated by the user's drag input.
At this time, the summary information 50 may be displayed so as to be distinguished from other text in the first document 10 or from the documents included in the list 30. For example, it may be displayed with a different shade, brightness, or complementary color; with a dotted or solid border; or with an indicator marking the summary information. Further, the electronic apparatus 100 may display reference information for the summary information 50; that is, the electronic device 100 may display the source of the summary information 50 with the document 10.
Further, the electronic device 100 may perform an additional search when receiving a user command for requesting an additional search of at least one word in the summary information 50. At this time, the electronic apparatus 100 may perform the additional search based on the document for generating the summary information, but this is merely an example and the additional search for at least one word may be performed through a separate server.
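The additional-search step over the source documents might look like the following sketch. The period-based sentence splitting and the sample data are assumptions for illustration; an external-server search would replace the local scan.

```python
def additional_search(word, source_documents):
    """Return sentences from the source documents that contain the word."""
    matches = []
    for doc in source_documents:
        for sentence in doc.split("."):
            if word.lower() in sentence.lower():
                matches.append(sentence.strip())
    return matches

# Hypothetical source documents that were used to generate the summary.
sources = ["Neural networks learn weights. Training needs data."]
found = additional_search("weights", sources)
```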
According to various embodiments, the electronic device 100 may use at least one selected document as input data for an AI learning model to obtain summary information.
The learned AI learning model in the present disclosure may be constructed in consideration of the application field of the recognition model or the computer performance of the device. For example, the learned AI learning model may be set to use a document containing a plurality of texts as input data to acquire summary information of the document. In order to generate summary information by grasping the relationships among the words included in a document, rather than merely extracting words from it, the learned AI learning model may be, for example, a model based on a neural network. The AI learning model may be designed to mimic human cognitive abilities on a computer, and may include a plurality of network nodes that mimic the neurons of a human neural network and are assigned weights. The network nodes may form connection relationships so as to mimic the synaptic activity of neurons sending and receiving signals through synapses. Further, the document summary model may include, for example, a neural network model or a deep learning model developed from a neural network model. In a deep learning model, a plurality of network nodes are located at different depths (or layers), and data can be exchanged according to convolutional connection relationships. Examples of document summary models include, but are not limited to, Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), and Bidirectional Recurrent Deep Neural Networks (BRDNNs).
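For intuition about the recurrent networks mentioned above, the following toy sketch folds a sequence of word vectors into a single fixed-size hidden state with one tanh recurrence per word, which is the basic mechanism an RNN encoder uses to capture relationships across a document. The dimensions and random weights are arbitrary assumptions, not the disclosed architecture.

```python
import numpy as np

def rnn_encode(word_vectors, hidden_size=4, seed=0):
    """Fold a sequence of word vectors into one fixed-size hidden state."""
    rng = np.random.default_rng(seed)
    input_size = word_vectors.shape[1]
    # Untrained, randomly initialized weights, for illustration only.
    W_xh = rng.normal(size=(input_size, hidden_size)) * 0.1
    W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1
    h = np.zeros(hidden_size)
    for x in word_vectors:            # one recurrence step per word
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h

words = np.ones((5, 3))               # a 5-word "document", 3-dim embeddings
state = rnn_encode(words)             # document representation for summarization
```

A trained summary model would decode such a state back into summary text (and a bidirectional variant would also run the recurrence in reverse), but the encoding step above shows why word order and word relationships, not just word presence, influence the result.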
Further, as described above, the electronic device 100 may generate summary information about the document selected by the user using the artificial intelligence agent. At this time, the artificial intelligence agent is a dedicated program for providing Artificial Intelligence (AI) -based services (e.g., a voice recognition service, a secretary service, a translation service, a search service, etc.) that can be executed by an existing general-purpose processor (e.g., CPU) or a separate AI-dedicated processor (e.g., a graphic processing unit, other dedicated processor, etc.). In particular, the artificial intelligence agent may control the various modules.
Specifically, the AI agent may operate if user input is received for a document summary. The AI agent may obtain text included in the document based on user input and may obtain summary information through an AI learning model.
The AI agent may operate if user input is received for a document summary (e.g., a command to select a document and drag it to a certain point). Alternatively, the artificial intelligence agent may execute prior to receiving user input for document summarization. In this case, after receiving a user input for a document summary, the AI agent of the electronic device 100 may obtain summary information of the selected document. Further, the artificial intelligence agent may be in a standby state prior to receiving user input to the document summary. Here, the standby state is a state in which: predefined user inputs are received to control operation of the AI agent. If a user input for a document summary is received while the artificial intelligence agent is in a standby state, the electronic device 100 may activate the artificial intelligence agent and obtain summary information about the selected document.
Specific examples of obtaining summary information related to a selected document will be described by various embodiments.
Fig. 2A is a block diagram illustrating an electronic device according to one embodiment.
As shown in fig. 2A, the electronic device 100 includes a display 110, a memory 120, a user interface 130, and a processor 140. Other components may additionally be included in the electronic device 100, as will be appreciated by one of ordinary skill in the art.
The display 110 may visually provide various screens. Specifically, the display 110 may display a search screen including search results or documents containing a plurality of texts. In addition, the display 110 may also display summary information that summarizes the document while displaying the document. In addition, the display 110 may display summary information that summarizes the second document in the first document.
Memory 120 may store computer readable instructions or data related to at least one other component of electronic device 100. In particular, the memory 120 may be implemented as a non-volatile memory, a flash memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD). The memory 120 is accessed by the processor 140, and read/write/modify/delete/update of data can be performed by the processor 140. In the present disclosure, the memory may include a memory 120 mounted to the electronic device 100, a ROM in the processor 140, a RAM (not shown), or a memory card (e.g., a micro SD card and a memory stick). In addition, the memory 120 may store computer readable programs and data for configuring various screens displayed in the display area of the display 110.
Further, according to one embodiment, the memory 120 may store an artificial intelligence agent for generating summary information and store an AI learning model (i.e., a document summary model). According to another embodiment, the AI learning model can be stored in another electronic device.
Memory 120 may store at least a portion of the various modules described in fig. 2C.
The user interface 130 may receive various user inputs and send signals corresponding thereto to the processor 140. In particular, the user interface 130 may comprise a touch sensor, a (digital) pen sensor, a pressure sensor, a mouse, a keyboard or keys. The touch sensor may be, for example, at least one of an electrostatic type, a pressure-sensitive type, an infrared type, and an ultrasonic type. The (digital) pen sensor may be part of the touch panel, for example, or may comprise a separate identification patch. The keys may include, for example, physical buttons, optical keys, or a keypad.
In particular, the user interface 130 may obtain the input signal according to a user input selecting the document to generate summary information or a user input selecting the document after pressing a specific button (e.g., a button for performing an artificial intelligence service). The user interface 130 may send signals corresponding to user inputs to the processor 140.
The processor 140 may be electrically connected to the display 110, memory 120, and user interface 130, e.g., via one or more buses, to control the overall operation and functionality of the electronic device 100. In particular, the processor 140 may perform operations to generate summary information of a retrieved document using various modules, data, and the like stored in the memory 120. Specifically, when a user command requesting a search for an input keyword is received via the user interface 130, the processor 140 may perform control to search for a plurality of documents based on the keyword and control the display 110 to provide the plurality of documents as a search result. When a user input for selecting at least one document among the plurality of documents is received, the processor 140 may input the selected document to the AI learning model to acquire summary information of the document and control the display 110 to provide the summary information.
Fig. 2B is a block diagram illustrating a configuration of an electronic device according to one embodiment.
As shown in fig. 2B, the electronic device 100 may include a display 110, a memory 120, a user interface 130, a processor 140, a camera 150, a communicator 160, and an audio outputter 170. The display 110, the memory 120, and the user interface 130 have already been described with reference to fig. 2A, and a repetitive description is omitted.
The camera 150 may capture an image containing the user. At this time, the camera 150 may be disposed on at least one of the front and rear surfaces of the electronic device 100. Meanwhile, the camera 150 may be disposed inside the electronic device 100, or may instead be provided outside the electronic device 100 and connected to the electronic device 100 in a wired or wireless manner. In particular, the camera 150 may capture an image containing the user to obtain user history information.
The communicator 160 may communicate with various types of external devices according to various types of communication methods. Communicator 160 may include at least one of a Wi-Fi chip 161, a bluetooth chip 162, a wireless communication chip 163, and a Near Field Communication (NFC) chip 164. The processor 140 may communicate with an external server or various external devices using the communicator 160.
Specifically, the communicator 160 may communicate with an external search server, an external document summarization device, or an external cloud server.
The audio outputter 170 is a component for outputting various audio data subjected to processing such as decoding, amplification, noise filtering, and the like, as well as various notification sounds or voice messages. In particular, the audio outputter 170 may be configured as a speaker, but this is merely an example, and it may be implemented as any output terminal capable of outputting audio data.
The processor 140 (or controller) may control the overall operation of the electronic device 100 using various programs stored in the memory 120.
The processor 140 may include a RAM 141, a ROM 142, a graphic processing unit 143, a main CPU 144, first to nth interfaces 145-1 to 145-n, and a bus 146. At this time, the RAM 141, the ROM 142, the graphic processing unit 143, the main CPU 144, the first to nth interfaces 145-1 to 145-n, and the like may be connected to each other via the bus 146.
Fig. 2C is a block diagram of an electronic device according to one embodiment.
The electronic apparatus 100 may include a search module 121, a UI generation module 123, a summary request detection module 125, a user history collection module 127, and a summary information providing module 129, and the document summary apparatus 200 may include a summary model setting module 210, a document summary module 220, and a document summary model 230.
The search module 121 may obtain an input signal according to a user input of a keyword through the user interface 130. At this time, the search module 121 may obtain a keyword input into the search window based on the input signal. The search module 121 may obtain an input signal according to a user input requesting a search for an input keyword.
The search module 121 may perform a search operation based on the input keyword. In one embodiment, the search module 121 may search for documents stored in the electronic device 100 based on the keyword. Specifically, the search module 121 may preferentially search for documents whose titles include the input keyword, and then search for documents whose text includes the input keyword.
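For illustration, the title-first search behavior described above may be sketched as follows; the document representation and function name are hypothetical and not part of the disclosure:

```python
def search_documents(documents, keyword):
    """Search stored documents for a keyword, title matches first.

    Hypothetical sketch: each document is a dict with 'title' and
    'text' fields; actual storage in the electronic device 100 may differ.
    """
    # Documents whose title contains the keyword are found preferentially.
    title_hits = [d for d in documents if keyword in d["title"]]
    # Then documents whose body text contains the keyword.
    text_hits = [d for d in documents
                 if keyword not in d["title"] and keyword in d["text"]]
    return title_hits + text_hits
```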
In another embodiment, the search module 121 may generate a query or request including keywords and send the query or request to an external search server. When the external search server performs a search operation based on a query or request, the search module 121 may receive search results from the external search server and provide the search results to the user.
The UI generation module 123 may control the display 110 to provide the search result retrieved by the search module 121. Specifically, the UI generation module 123 may provide a search result indicating a plurality of retrieved documents to one area of the display screen. At this time, the UI generation module 123 may preferentially display documents whose titles include the keyword or documents containing a large number of keywords in the search result.
The UI generation module 123 may display a summary setting UI for setting the summary information. The summary setting UI may be a UI for setting the tone or length of the summary information.
The UI generation module 123 may also control the display 110 to provide the summary information obtained by the document summary device 200 to the user. At this time, the UI generation module 123 may control the display 110 to insert the summary information into another document according to a user command, and may control the display 110 to display the summary information using a pop-up screen or the like.
Summary request detection module 125 may obtain an input signal for summarizing at least one document of the plurality of retrieved documents based on a user input. At this time, the user input for summarizing the at least one document may be a user input for selecting the at least one document and dragging the at least one document to another document. Alternatively, the user input may be a user input for selecting a summary icon of the at least one document, or a user input for pressing a specific button (e.g., a button for executing an artificial intelligence agent) included in the electronic device 100 after selecting the at least one document and then selecting a summary icon, but is not limited thereto.
The summary request detection module 125 may transmit information about the selected document to the document summary device 200 according to the input signal. At this time, the summary request detection module 125 may transmit data about the document to the document summary apparatus 200, and may transmit additional information (e.g., address information, etc.) about the document to the document summary apparatus 200.
Further, the summary request detection module 125 may transmit information on a document selected by the user, summary setting information through the summary setting UI, information on keywords, and usage history information obtained through the user history collection module 127 together.
The user history collection module 127 may collect user history information from the electronic device 100. At this time, the user history collection module 127 may collect user profile information registered by the user and use history information collected when the user uses the electronic apparatus 100.
At this time, the user profile information is information registered in advance in the electronic apparatus 100, and includes at least one of a name, a gender, an ID, a preference category, and biometric information (e.g., height, weight, and medical history) of the user. The usage history information is information collected while the user uses the electronic device 100, and may include a preferred field of the user, a political tendency of the user, a knowledge level of the user, and the like. Specifically, the user history collection module 127 may determine the user's preferred fields, political tendencies, knowledge level, and the like, based on frequently visited websites and keywords searched by the user.
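As an illustrative sketch of how the user history collection module 127 might infer preferred fields from visited websites and searched keywords (the category mapping, function name, and tallying scheme are assumptions, not part of the disclosure):

```python
from collections import Counter

def infer_preferred_fields(visited_sites, searched_keywords, category_map):
    """Infer the user's preferred fields from browsing and search history.

    Hypothetical sketch of the user history collection module 127:
    `category_map` (an assumed mapping from sites/keywords to fields)
    is used to tally categories; the most frequent ones are returned.
    """
    counts = Counter()
    for item in list(visited_sites) + list(searched_keywords):
        if item in category_map:
            counts[category_map[item]] += 1
    # The top categories are treated as the user's preferred fields.
    return [field for field, _ in counts.most_common(3)]
```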
The summary information providing module 129 may provide the summary information obtained from the document summary device 200 to the user through the display 110. Specifically, the summary information providing module 129 may insert the summary information acquired according to the user input into another document. At this time, the summary information inserted in another document may be displayed to be distinguished from text contained in another document. Further, the summary information providing module 129 may display the summary information obtained from the document summary apparatus 200 on a separate pop-up screen or a separate full screen.
The summary information providing module 129 may display key, important, or highly relevant information in the summary information so as to be distinguished from other information. In addition, when an additional search request for a specific word is received, the summary information providing module 129 may display information about the specific word near where the specific word appears.
The summary model setting module 210 may set parameters of the document summary model 230 based on the summary setting information, the information on the keywords, and the user history information received from the summary request detection module 125. In particular, the summary model setting module 210 may set the document summary model 230 to highly weight keywords or words associated with the keywords. Further, the summary model setting module 210 may set the document summary model 230 to set the tone or length of the summary information based on the summary setting information or the user history information.
In addition, when a plurality of documents are input, the summary model setting module 210 may set the document summary model 230 to highly weight words common to the plurality of documents. In addition, the summary model setting module 210 may set the document summary model 230 to highly weight words or sentences included in a summary or conclusion section of the document.
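The weighting behavior attributed to the summary model setting module 210 may be illustrated with a simple per-word weight table; the boost factors, tokenization, and representation below are hypothetical:

```python
def build_word_weights(documents, keywords, keyword_boost=2.0, common_boost=1.5):
    """Build a per-word weight table for the summary model.

    Hypothetical sketch of the summary model setting module 210:
    words common to every input document and words matching the search
    keywords receive higher weights; the boost factors are illustrative.
    """
    doc_word_sets = [set(doc.lower().split()) for doc in documents]
    weights = {}
    for words in doc_word_sets:
        for w in words:
            weights[w] = 1.0
    if doc_word_sets:
        # Boost words that appear in all of the input documents.
        for w in set.intersection(*doc_word_sets):
            weights[w] *= common_boost
    # Boost the search keywords themselves.
    for kw in keywords:
        if kw in weights:
            weights[kw] *= keyword_boost
    return weights
```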
The document summary module 220 may perform a summary operation on a document selected by a user using the document summary model 230 to generate summary information. The document summary module 220 may use the frequency of words, the title, the length of sentences, and the position of sentences in the document to be summarized to identify document components and extract key elements. The document summary module 220 may then calculate a weight for each sentence or word and determine a priority for the sentence or word. The document summary module 220 may extract keywords based on the priorities of words included in the document and the relationships among the words. The document summary module 220 may then generate summary information through Natural Language Processing (NLP) based on the keywords. However, the document summary method described above is only one embodiment, and various document summary methods may be used to generate summary information.
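A minimal extractive-summarization sketch of the sentence-scoring procedure described above, using word frequency only (the title, sentence-length, and position signals mentioned in the disclosure are omitted for brevity, and all names are illustrative):

```python
import re
from collections import Counter

def summarize(document, num_sentences=2):
    """Frequency-based extractive summary sketch.

    A minimal stand-in for the document summary module 220: each
    sentence is scored by the average document-wide frequency of its
    words, and the highest-scoring sentences are returned in their
    original order.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    freq = Counter(re.findall(r"\w+", document.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    chosen = sorted(ranked[:num_sentences])  # restore original order
    return " ".join(sentences[i] for i in chosen)
```

An abstractive summarizer, by contrast, would generate new sentences rather than select existing ones, typically via a trained sequence-to-sequence model.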
The document summarization module 220 may summarize a document by using the documents collected by the document collection device 300. Specifically, the document summary module 220 may receive, from the document collection apparatus 300, a document related to the document to be summarized, and obtain summary information using the received related document and the document selected by the user. Subsequently, the document summary module 220 may extract words common to the document selected by the user and the document received from the document collection device 300, and obtain summary information using the extracted common words. For example, if the document selected by the user is an article, the document summary module 220 may obtain a newspaper article associated with the selected article via the document collection device 300 and obtain summary information about the article selected by the user.
Meanwhile, the document summarization module 220 may summarize a document in various ways in consideration of the performance of the document summarization apparatus 200 and the like. In particular, the document summary module 220 may obtain summary information using extractive summarization, which directly extracts words, phrases, and sentences present in the document, or abstractive summarization, which creates new sentences by compressing the content of the document. The document summary module 220 may also use one of generic summarization, which is unrelated to user information and summarizes the document from the perspective of the document author, and query-based summarization, which summarizes the document based on user history information.
Document summary model 230 may be an artificial intelligence learning model for obtaining summary information about a document. At this time, the document summary model 230 may be trained to identify words representing the constituent elements of a document, to extract keywords based on the relationships among words, and to generate summary information reflecting the characteristics of summaries acquired from training data.
As described above, the document summarizing apparatus 200 and the document collecting apparatus 300 may be implemented as separate servers outside the electronic device 100, or may be embedded in the electronic device 100.
Fig. 3, 4, 5, and 6 are flow diagrams illustrating methods of providing summary information according to various embodiments.
FIG. 3 is a flow diagram that describes a method for inserting summary information into a document, according to one embodiment.
The electronic device 100 may receive the keyword in step S305. At this time, the electronic device 100 may receive the keyword through the search window while displaying the first document.
The electronic device 100 may receive a user command requesting a search for the input keyword in step S310. For example, the electronic device 100 may receive a user command to select an icon for searching after inputting the keyword.
In step S315, the electronic device 100 may transmit a request for searching for a document based on the keyword to the document searching device 400. At this time, the search request may include information on the keyword.
In step S320, the document searching apparatus 400 may search for a document based on the keyword. At this time, the document searching apparatus 400 may be implemented as an external server separate from the electronic apparatus 100, or may be provided in the electronic apparatus 100. Specifically, the document searching apparatus 400 may search for a plurality of documents including the keyword or retrieved based on the keyword.
In step S325, the document searching apparatus 400 may transmit the keyword-based search result to the electronic apparatus 100.
In step S330, the electronic device 100 may provide the search result. At this time, when the first document is provided to the first region of the display screen, the electronic apparatus 100 may provide the search result to the second region of the display screen.
In step S335, the electronic device 100 may receive a user command for inserting the summary information. At this time, the electronic apparatus 100 may receive a user command to select at least one document among the plurality of documents included in the search result and then drag the document to a point in the first document.
In step S340, the electronic device 100 may transmit information about a document for generating the summary information to the document summary device 200. At this time, the electronic device 100 may transmit information on the document, and may also transmit the summary setting information, information on the keyword, and user history information together.
In step S345, the document summarizing apparatus 200 may generate the summary information by summarizing the document. At this time, the document summarization apparatus 200 may obtain summary information by inputting information about the document as input data into the document summarization model. Specifically, the document summarizing apparatus 200 may generate the summary information based on the information on keywords, the user history information, and the information on the document. Further, when the tone or length of the summary information is set by the user, the document summary apparatus 200 may generate the summary information based on the length or tone of the summary information set by the user.
In step S350, the document summary apparatus 200 may transmit the obtained summary information to the electronic apparatus 100.
In step S355, the electronic device 100 may insert the summary information into another document. Specifically, the electronic device 100 may insert the summary information at the point where the user command was input, and the summary information may be displayed so as to be distinguished from other text. Further, the electronic device 100 may insert reference information for the summary information together.
Fig. 4 is a flowchart describing a method of generating summary information based on the summary setting information generated through the UI according to one embodiment.
First, in step S410, the electronic device 100 may receive a summary command for a document. Specifically, the electronic device 100 may receive a user command for selecting a summary icon displayed in one region of the document. Alternatively, the electronic device 100 may receive a predetermined pattern of user touch commands in the document. Alternatively, the electronic device 100 may receive a user command for selecting a summary icon created after a predetermined button (e.g., a button for executing an AI agent) included in the electronic device 100 is selected.
In step S420, the electronic apparatus 100 may display a UI for summary setting. In this case, the UI for the summary setting may be a UI for setting the mood or length of the summary information, but is not limited thereto. Further, the UI for summarizing the setting may be in the form of a scroll bar, but is not limited thereto, and may be in the form of a menu including a plurality of icons.
In step S430, the electronic device 100 may acquire the summary setting information according to a user command input through the UI. At this time, the summary setting information may be information on the tone or length of the summary information. For example, the summary setting information may include information about whether the tone of the summary information is negative or positive and whether the summary information is long or short.
In step S440, the electronic device 100 may transmit information about the obtained document and summary setting information to the document summary device 200.
In step S450, the document summarizing apparatus 200 may summarize a document based on the summary setting information. Specifically, the document summary apparatus 200 may generate the summary information based on the mood or the length set by the summary setting information.
For example, if the document is an article and the mood of the summary information is set to negative, the document summarizing apparatus 200 may generate the summary information based on a negative word among words included in the article. In addition, if the document is an article and the mood of the summary information is set to be positive, the document summary apparatus 200 may generate the summary information based on a positive word among words contained in the article.
As another example, if the document is an article and the length of the summary information is set to be short, the document summarizing apparatus 200 may acquire a conclusion part of the article based on the positions of sentences and their relationship to the title, and generate the summary information based on basic background information such as a date, a place, an event, and the like. If the document is an article for conveying an event and the length of the summary information is set to medium, the document summary apparatus 200 may generate summary information focusing on who, what, when, where, why, and how (the five Ws and one H: 5W1H). When the document is an article dealing with conflicting opinions and the length of the summary information is set to medium, the document summarizing apparatus 200 may generate the summary information based on the viewpoint of the key subject of the article. If the document is an article providing a sports result and the length of the summary information is set to medium, the document summarizing apparatus 200 may generate the summary information based on the content of the sports result. Further, when the document is an article for conveying an event and the length of the summary information is set to be long, the document summary apparatus 200 may generate summary information including 5W1H and additional content. If the document is an article about conflicting opinions and the length of the summary information is set to be long, the document summary apparatus 200 may generate the summary information based on the viewpoints of all subjects at issue. If the document is an article providing a sports result and the length of the summary information is set to be long, the document summarizing apparatus 200 may generate the summary information based on the content of highlight scenes and the sports result.
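The mapping from article type and requested length to summarization focus described above may be illustrated as a lookup table; the type labels, length labels, and default value are hypothetical, not defined in the disclosure:

```python
# Hypothetical lookup from (article type, requested length) to the
# summarization focus described above.
SUMMARY_STRATEGY = {
    ("event", "short"): "basic background (date, place, event)",
    ("event", "medium"): "5W1H (who, what, when, where, why, how)",
    ("event", "long"): "5W1H plus additional content",
    ("opinion", "medium"): "viewpoint of the key subject",
    ("opinion", "long"): "viewpoints of all subjects at issue",
    ("sports", "medium"): "the sports result",
    ("sports", "long"): "highlight scenes and the sports result",
}

def summary_focus(article_type, length):
    """Return the summarization focus for a given article type and length."""
    return SUMMARY_STRATEGY.get((article_type, length),
                                "key sentences of the document")
```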
That is, the document summary apparatus 200 may generate the summary information in different manners based on the type of content of the document.
In step S460, the document summary apparatus 200 may transmit the summary information to the electronic apparatus 100.
In step S470, the electronic device 100 may provide the summary information. As an example, the electronic device 100 may match the obtained summary information to an existing document and provide it. Specifically, the electronic apparatus 100 may generate a layer capable of displaying a mark at the position of a sentence or word corresponding to the summary information on the document, or may modify a script of the document (e.g., a web document). Further, a separate document image may be generated that displays a mark in the summarized section. As another example, the electronic device 100 may display the summary information on a screen separate from the document (e.g., a full screen or a pop-up screen). As another example, the summary information may be generated as a separate file and stored in the electronic device 100 or in an external cloud server.
Alternatively, the electronic apparatus 100 may provide, along with the summary information, a link to the document corresponding to the summary information, and may provide, within the summary information, a link for confirming the details of a briefly summarized section.
As described above, by summarizing a document based on the summary setting information set through the UI, it is possible to provide a user with summary information suitable for the user's needs.
FIG. 5 is a flow diagram depicting a method for summarizing a document based on user history information according to one embodiment.
First, in step S510, the electronic device 100 may receive a command for a summary of a document. Specifically, the electronic device 100 may receive a user command to select a summary icon displayed on an area of the document.
In step S520, the electronic device 100 may obtain user history information related to the document. At this time, the user history information may include user profile information registered by the user, user usage history information, document access path information, and the like.
In step S530, the electronic device 100 may transmit information about the document and the user history to the document summarizing device 200.
In step S540, the document summarizing apparatus 200 may summarize the document based on the user history information. As an example, the document summarization apparatus 200 may generate summary information by summarizing the document based on the knowledge level of the user. Specifically, when the usage history information of the user indicates many search histories or document viewing histories related to the document to be summarized, the document summarizing apparatus 200 may briefly summarize the basic content of the document when generating the summary information. As another example, the document summary apparatus 200 may generate the summary information by determining a degree of interest in the document based on user profile information (e.g., age, gender, etc.). Specifically, if the interest level in the document based on the user profile information is high, the document summarizing apparatus 200 may generate the summary information by shortening the basic content and summarizing the detailed content at greater length; otherwise, the document summarizing apparatus 200 may generate the summary information by summarizing the basic content at greater length. As another example, the document summary apparatus 200 may generate the summary information by determining the current interest level of the user based on the access path of the document. Specifically, when the document is accidentally accessed during web browsing, the document summarizing apparatus 200 may determine that interest in the document is low and generate the summary information by summarizing the basic content at greater length. If the document is accessed while checking related documents, the document summarization apparatus 200 may determine that interest in the document is high, so that the basic content may be summarized briefly and new content summarized at greater length.
When accessing a document through a keyword search, the document summarizing apparatus 200 may generate the summary information by summarizing the document based on the keyword.
Further, the document summary apparatus 200 may generate the summary information in different ways according to the type of article. Specifically, when the document to be summarized is an event-based article, the document summarizing apparatus 200 may generate the summary information about the event (5W1H). When the document to be summarized is an article including a conflict of viewpoints, the document summarizing apparatus 200 may generate the summary information based on the conflicting viewpoints. If the document to be summarized is an article for conveying (sports) results, the document summarization apparatus 200 may generate summary information that focuses on the results section. Further, if there is information on a sentence style (e.g., phrasing) preferred by the user, the document summarizing apparatus 200 may generate the summary information based on the sentence style preferred by the user, and if there is no information on the sentence style preferred by the user, the document summarizing apparatus 200 may generate the summary information based on the sentence style of the document. Further, the document summarization apparatus 200 may generate summary information based on the tone (positive/negative or progressive/conservative) of the document.
In step S550, the document summary apparatus 200 may transmit the summary information to the electronic apparatus 100.
In step S560, the electronic device 100 may provide the received summary information.
FIG. 6 is a flow diagram of a method of generating summary information using related documents collected by a document collection device.
In step S610, the electronic device 100 may receive a command for summarizing a document. Specifically, the electronic device 100 may receive a user command to select a summary icon displayed on a portion of a document.
In step S620, the electronic apparatus 100 may display a UI for summarizing the settings. At this time, the UI for the summary setting may be a UI for setting a mood or a length of the summary information, but is not limited thereto.
In step S630, the electronic apparatus 100 may obtain the summary setting information and the user history information. Specifically, the electronic apparatus 100 may acquire the summary setting information via the UI and obtain user history information including user profile information and usage history information of the user.
In step S640, the electronic apparatus 100 may transmit information about the document, the summary setting information, and the user history information to the document summary apparatus 200.
The document summarizing apparatus 200 may request, from the document collecting apparatus 300, another document related to the document based on the information about the document (S650). Here, the other document related to the document may be a document having the same keywords as the document, a document having the same subject as the document, a document generated by the creator of the document, or a document created within a predetermined period of time from the creation of the document, but is not limited thereto.
In step S660, the document collection apparatus 300 may search for another document related to the document, and in step S670, may transmit the retrieved document to the document summarizing apparatus 200.
In step S680, the document summarizing apparatus 200 may summarize the document using the other related documents based on the summary setting information and the user history information. In particular, the document summary apparatus 200 may generate summary information based on words appearing in both the document and the other related documents. Further, the document summary apparatus 200 may generate summary information including content that is not present in the document itself but is included in the other related documents. For example, if only the viewpoint of A is listed in the document and the viewpoint of B is not listed, the document summarizing apparatus 200 may generate summary information including the viewpoint of B obtained from the other related documents.
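The extraction of words common to the selected document and the related documents may be sketched as a set intersection; the tokenization is simplified to lowercase word matching and all names are illustrative:

```python
import re

def common_words(selected_document, related_documents):
    """Words shared by the selected document and every related document.

    Illustrative sketch of the common-word extraction performed in
    step S680; a real summarizer would use these words to weight
    candidate sentences rather than return them directly.
    """
    shared = set(re.findall(r"\w+", selected_document.lower()))
    for doc in related_documents:
        # Keep only words that also appear in this related document.
        shared &= set(re.findall(r"\w+", doc.lower()))
    return shared
```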
In step S690, the document summary apparatus 200 may transmit the summary information to the electronic apparatus 100, and in step S700, the electronic apparatus 100 may provide the obtained summary information.
FIG. 7 is a diagram illustrating insertion of summary text, according to one embodiment.
First, as shown in (a) of fig. 7, the electronic device 100 may display a first document 710 in a first region (left region) of the screen and display a search window 720 at an upper portion of a second region. At this time, the electronic device 100 may display an article as the document displayed in the first area. The electronic device 100 may receive an input signal according to a user command for inputting a keyword in the search window 720, and may display the keyword in the search window 720 in response to the input signal.
When receiving a search request for the keyword, the electronic apparatus 100 may request the document searching apparatus 400 to search for a plurality of documents related to the keyword and receive a search result indicating the plurality of retrieved documents from the document searching apparatus 400. At this time, as shown in (b) of fig. 7, the electronic apparatus 100 may display a list 730 including the plurality of retrieved documents in the second area. At this time, the plurality of documents included in the list may be sorted so that documents having a higher correlation with the keyword are given priority. For example, the documents included in the list may be arranged in the order of documents having a title including the keyword, documents having text containing the keyword, and documents containing words similar to the keyword.
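The three-tier ordering of search results described above (title match, then text match, then similar words) may be sketched as follows; the document fields and the similar-word list are assumptions, not part of the disclosure:

```python
def rank_search_results(documents, keyword, similar_words=()):
    """Order retrieved documents by the three-tier priority described above.

    Hypothetical sketch: each document is a dict with 'title' and
    'text' fields, and `similar_words` is an assumed list of words
    similar to the keyword.
    """
    def priority(doc):
        if keyword in doc["title"]:
            return 0          # title contains the keyword
        if keyword in doc["text"]:
            return 1          # body text contains the keyword
        if any(w in doc["text"] for w in similar_words):
            return 2          # body text contains a similar word
        return 3
    # sorted() is stable, so ties keep their original relative order.
    return sorted(documents, key=priority)
```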
The electronic device 100 may receive an input signal according to a user input for selecting at least one of the retrieved documents 740 to insert into the first document 710 displayed in the first area. Specifically, as shown in (c) of fig. 7, the electronic apparatus 100 may select three documents 740 among the plurality of documents included in the list 730 and receive an input signal according to a user input dragging the selected documents to a point in the first document 710.
The electronic device 100 may transmit information about the selected document 740 to the document summarization device 200 according to a user command to insert summary text into the first document 710. At this time, the electronic apparatus 100 may transmit information about the document 740, and may also transmit information about keywords, summary setting information for summarizing the document, user history information, and the like. Since the method of summarizing a document using information on keywords, summary setting information for summarizing the document, user history information, and the like has been described above, a detailed description will be omitted.
The document summary device 200 may generate summary text based on information about the selected document 740. At this time, the document summarization apparatus 200 may obtain the summary text by inputting the selected document as input data into the learned document summarization model. At this time, the document summarizing apparatus 200 may acquire the summarized text using the selected document and other documents related to the selected document.
As shown in (c) of fig. 7, when a plurality of documents are selected as documents to be summarized, the document summarizing apparatus 200 may generate the summary text based on words or sentences that commonly appear in the plurality of selected documents. That is, the document summary apparatus 200 may assign a higher weight to words or sentences commonly included in the plurality of selected documents when generating the summary text.
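A minimal sketch of this common-word weighting, assuming a simple word-overlap score (the function name, boost factor, and whitespace tokenization are illustrative assumptions, not the disclosed model):

```python
from collections import Counter

def summarize_common(documents, num_sentences=2, common_boost=2.0):
    """Rank sentences higher when their words appear in several of the
    selected documents, then return the top-scoring sentences."""
    # For each word, count how many of the documents contain it.
    doc_frequency = Counter()
    for doc in documents:
        doc_frequency.update(set(doc.lower().split()))

    def sentence_score(sentence):
        words = sentence.lower().split()
        if not words:
            return 0.0
        total = 0.0
        for w in words:
            # Words shared by more than one document receive a higher weight.
            weight = common_boost if doc_frequency[w] > 1 else 1.0
            total += weight * doc_frequency[w]
        return total / len(words)

    sentences = [s.strip() for doc in documents
                 for s in doc.split(".") if s.strip()]
    return sorted(sentences, key=sentence_score, reverse=True)[:num_sentences]
```

A learned document summary model would replace this hand-written score with weights obtained through training, but the effect is the same: sentences built from words common to the selected documents are preferred.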
The document summary device 200 may transmit the generated summary text to the electronic device 100.
As shown in (d) of fig. 7, the electronic device 100 inserts the received summary text 750 into the first document 710 displayed in the first area at the point where the user command was input. At this time, the electronic device 100 may display the summary text 750 so as to be distinguished from other text, and may display reference information (i.e., source information) together with the summary text.
Accordingly, when drafting an academic paper, the user can draft the paper more efficiently by summarizing documents obtained through search results and inserting the summary information into the paper.
FIG. 8 is a diagram illustrating setting the length and mood of a summary text according to one embodiment.
First, as shown in (a) of fig. 8, the electronic device 100 may display an article including a plurality of texts. At this time, a summary icon 810 for receiving a document summary command may be displayed around the title of the article. The summary icon 810 may be displayed while the electronic device 100 displays the article, but this is merely an example; the summary icon 810 may instead be displayed after the article is displayed, in response to a predetermined button being input (e.g., a button for running an AI agent) or a user command of a predetermined pattern being received. In this case, the electronic device 100 may construct a separate layer including the summary icon 810 and display it overlaid on the article.
As shown in (b) of fig. 8, when a user command for selecting the summary icon 810, specifically, a user command for selecting an icon for summary setting within the summary icon 810, is received, the electronic apparatus 100 may display a UI 820 for performing document summary setting. At this time, the electronic apparatus 100 may form a separate layer including the UI 820 for document summary setting and display it overlaid on the article. Meanwhile, although the UI 820 for document summary setting is displayed after the summary icon 810 is selected in the above-described embodiment, the summary icon 810 and the document summary setting UI 820 may be displayed simultaneously.
Specifically, the UI 820 for document summary setting may be used to set at least one of the length and the mood of the summary text. At this time, the UI 820 for document summary setting may take the form of a progress bar as shown in (b) of fig. 8, but this is merely exemplary, and it may instead be a menu including a plurality of items.
When a user command is input to the UI 820, the electronic device 100 may generate summary setting information according to the user command and transmit the summary setting information together with information on a document (article) to the document summary device 200. Here, the information on the document may be text included in the article, but this is merely exemplary and may be information on a website corresponding to the article.
The document summarization apparatus 200 may generate a summary text by summarizing the document based on the received summary setting information. For example, when the length of the summary text is set to be short and the mood of the summary text is set to be neutral, the document summarization apparatus 200 may extract result-oriented keywords from the text included in the article and generate a summary text containing only the extracted keywords. That is, as shown in (c) of fig. 8, the document summary apparatus 200 may generate a summary text containing only result-oriented words.
As another example, when the length of the summary text is set to be long and the mood of the summary text is set to be negative, the document summary apparatus 200 may extract keywords or key sentences by assigning a high weight to negative words in addition to words conveying the result, and generate the summary text using the extracted keywords or key sentences. At this time, the document summarization apparatus 200 may generate the summary text as natural sentences by performing natural language processing on the extracted keywords. That is, as shown in (d) of fig. 8, the document summary apparatus 200 may generate a summary text that includes not only result-oriented sentences but also sentences presenting conflicting viewpoints and negative comments on the disputed issue.
The electronic apparatus 100 may provide the generated summary text in a separate pop-up screen while the article is displayed, but this is merely exemplary, and a mark may instead be displayed on the portion of the text included in the article that corresponds to the summary text.
As described above, the document summary apparatus 200 may provide a summary text desired by the user through the summary setting information obtained through the UI 820.
FIG. 9 is a diagram illustrating setting the length and mood of summary text based on user history according to one embodiment.
First, as shown in (a) of fig. 9, the electronic device 100 may display a web page including a plurality of article links. At this time, as shown in (a) of fig. 9, a summary icon 910 for receiving a document summary command for a representative link (i.e., the link located at the uppermost position) among the plurality of article links may be displayed. The summary icon 910 may be displayed while the electronic device 100 displays the web page, but this is merely an example; the summary icon 910 may instead be displayed after the web page is displayed, in response to a predetermined button being input (e.g., a button for executing an artificial intelligence agent) or a user command of a predetermined pattern being received.
Further, although the summary icon 910 corresponding to the representative link is displayed in (a) of fig. 9, this is merely exemplary; the article link for which the summary text is to be generated may instead be selected after the summary icon 910 is selected. Alternatively, summary icons respectively corresponding to the plurality of links may be displayed around the plurality of links.
Upon receiving a user command to select the summary icon 910, the electronic device 100 may obtain user history information corresponding to the representative link. At this time, the user history information may include the user's interests related to the representative link, the user's expertise, and the like. The user's interests and expertise related to the representative link may be determined based on profile information such as the user's age and gender, the number of searches for other articles related to the article corresponding to the representative link, or the access path to the representative link.
The electronic device 100 may transmit information about the document (the representative link) and the user history information to the document summarizing device 200. Here, the electronic device 100 may transmit the web address of the representative link as the information about the document, but this is merely exemplary, and the text of the article corresponding to the representative link may be transmitted instead.
The document summarizing apparatus 200 may summarize the document based on the information about the document and the user history information. For example, if the user history information indicates that the article corresponding to the representative link is in a field of no interest (or a non-professional field) for the user, the document summarization apparatus 200 may extract keywords conveying the result from the text contained in the article corresponding to the representative link. At this time, as shown in (b) of fig. 9, the document summary apparatus 200 may extract keywords based on easy-to-understand words and generate a summary text by listing the extracted keywords. As another example, if the user history information indicates that the article corresponding to the representative link is in a field of interest (or a professional field) for the user, the document summarization apparatus 200 may extract keywords or key sentences from the text contained in the article. At this time, the document summarizing apparatus 200 may extract keywords or key sentences based on technical terms and provide natural sentences as the summary text by performing natural language processing on the extracted keywords, as shown in (c) of fig. 9.
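A minimal sketch of how user history might select a summary style; the dictionary keys (`interests`, `expertise`) and returned field names are hypothetical assumptions, not the disclosed data format:

```python
def select_summary_style(user_history, topic):
    """Choose summarization parameters from user history.
    Keys and returned fields are illustrative assumptions."""
    interested = topic in user_history.get("interests", [])
    expert = topic in user_history.get("expertise", [])
    if interested or expert:
        # Field of interest/expertise: technical terms, natural sentences.
        return {"vocabulary": "technical", "output": "natural_sentences"}
    # Otherwise: easy-to-understand words, listed keywords.
    return {"vocabulary": "plain", "output": "keyword_list"}
```

The chosen style would then be passed to the document summary model as part of the user history information.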
Meanwhile, the electronic apparatus 100 may provide the generated summary text as a separate pop-up screen while the plurality of links are displayed, but this is merely an example; the article corresponding to the representative link may instead be displayed, with a mark displayed on the portion of the text contained in the article that corresponds to the summary text.
As described above, the document summary apparatus 200 may provide the user with the summary text optimized by the user history information.
In this embodiment, the document summary apparatus 200 is described to generate summary text using the article corresponding to the representative link. However, this is merely an example, and the document summary apparatus 200 may generate the summary text using the article corresponding to the representative link and articles corresponding to other links related to the representative link.
FIG. 10 is a diagram illustrating summary text provided to summarize a received document, according to one embodiment.
The electronic device 100 may display links corresponding to a plurality of recipe documents. Here, the electronic device 100 may display a summary icon 1010 for a document summary in one area of the link located at the top of the plurality of links. As shown in (a) of fig. 10, when a link located at the uppermost position is selected by the user or when a cursor or highlight is displayed on the link located at the uppermost position, a summary icon 1010 may be displayed. As another example, a summary icon corresponding to each of the plurality of links may be displayed on the corresponding link.
As shown in (b) of fig. 10, when a user command for selecting the summary icon 1010 is input, the electronic apparatus 100 may display a UI 1020 for document summary setting. At this time, the electronic apparatus 100 may construct a separate layer including the UI 1020 for document summary setting and display it overlaid on the document. Although the UI 1020 for document summary setting is displayed after the summary icon 1010 is selected in the above-described embodiment, the summary icon 1010 and the document summary setting UI 1020 may be displayed simultaneously.
Specifically, the UI 1020 for document summary setting may be used to set the length of the summary text. Here, the UI 1020 for document summary setting may be in the form of a progress bar as shown in (b) of fig. 10, but this is merely exemplary, and it may instead be in the form of a menu including a plurality of items. Meanwhile, in the above-described embodiment, the UI 1020 for document summary setting sets only the length of the summary text. However, this is only an example, and whether to include an image or a video in the summary text may also be set.
When a user command is input to the UI 1020, the electronic device 100 may generate summary setting information according to the user command and transmit the summary setting information to the document summary device 200 together with information about the recipe document. Here, the information about the document may be the text included in the document, but it may instead be information about a website corresponding to the document.
The document summarization apparatus 200 may generate a summary text by summarizing the document based on the received summary setting information. For example, if the length of the summary text is set to be short, the document summary apparatus 200 may extract words representing ingredients from the text included in the recipe article and generate a summary text including only the extracted words. That is, as shown in (c) of fig. 10, the document summary apparatus 200 may generate a summary text containing only ingredient-oriented words. As another example, if the length of the summary text is set to be long, the document summary apparatus 200 may extract sentences about the cooking steps as well as words representing ingredients from the text included in the recipe article and generate a summary text including both the ingredients and the cooking steps. That is, as shown in (d) of fig. 10, the document summary apparatus 200 may generate a summary text including recipe sentences and ingredient words.
The document summary device 200 may transmit the generated summary text to the electronic device 100, and the electronic device 100 may display the generated summary text on a separate full screen or pop-up screen. Alternatively, the electronic device 100 may display the recipe document and display a mark in a portion corresponding to the generated summary text of the recipe document.
Meanwhile, in the above-described embodiment, the recipe document is summarized using the summary setting information set through the UI 1020. However, the recipe document may instead be summarized based on the user history information. Specifically, when cooking is a field of expertise for the user, the document summary apparatus 200 may extract only keywords for the ingredients and cooking steps to provide a short summary text. If cooking is a non-professional field for the user, the document summary apparatus 200 may extract detailed words for the ingredients and cooking steps and provide a longer summary text through natural language processing of the extracted words.
In the above embodiment, a recipe document is used to provide the summary text. However, other documents may be used to provide summary text. For example, when summary text is provided using a travel-related document and the user sets the summary text to be brief, the document summary apparatus 200 may extract words or phrases related to the travel route in the travel-related document to generate the summary text. Alternatively, when the user selects to generate a long summary text, the document summary device 200 may extract words or sentences containing information related to travel destinations, major sights, and famous restaurants in addition to the travel route.
FIG. 11 is a diagram illustrating searching for a word included in summary text, according to one embodiment.
First, as shown in (a) of fig. 11, the electronic device 100 may display a web page including a plurality of article links. At this time, a summary icon 1110 for receiving a document summary command for the representative link (i.e., the link located at the uppermost position) among the plurality of article links may be displayed near the representative link.
When the icon 1110 is selected, the electronic device 100 may transmit information on the representative link to the document summary device 200, and the document summary device 200 may input the information on the representative link to the document summary model and obtain summary text on an article corresponding to the representative link. At this time, the electronic apparatus 100 may transmit the summary setting information and the user history information together with the representative link information, and the document summary apparatus 200 may generate the summary text based on the summary setting information and the user history information.
As shown in (b) of fig. 11, the document summary apparatus 200 may transmit the generated summary text to the electronic apparatus 100, and the electronic apparatus 100 may provide the summary text in a pop-up type.
If one of the words (or phrases) included in the summary text is selected while the summary text is provided, the electronic device 100 may transmit information about the selected word to an external search server. For example, if the term "Terminal High Altitude Area Defense (THAAD)" is selected from the words included in the summary text shown in (b) of fig. 11, the electronic apparatus 100 may request detailed information on the word selected by the user from the external search server.
As shown in (c) of fig. 11, when detailed information on the selected word is received from the external search server, the electronic apparatus 100 may display the detailed information or additional information on the selected word within the summary text.
In the above embodiment, the external search server provides detailed information or additional information on the word selected by the user, but this is merely exemplary, and the electronic device 100 may instead request the detailed information or additional information from the document summarizing device 200. Here, the document summarizing apparatus 200 may acquire detailed information or additional information about the selected word using the documents used to generate the summary text. That is, the document summarizing apparatus 200 may acquire detailed information or additional information about the selected word from the existing documents and provide it to the electronic apparatus 100.
Fig. 12 is a diagram illustrating providing summary information, according to one embodiment.
When the summary text is received from the document summary apparatus 200, the electronic apparatus 100 may provide the summary text through various methods.
Specifically, as shown in (a) of fig. 12, the electronic apparatus 100 may provide the summary text by displaying a mark 1210 on the words or sentences corresponding to the summary text in an existing document (e.g., an article). Here, the electronic device 100 may generate a separate layer capable of displaying the mark 1210 at the positions of the sentences or words corresponding to the summary text and display the mark 1210 overlaid on the document. Alternatively, the electronic device 100 may modify the script of the document to display the mark 1210 on the document, or may generate and display a separate document image in which the mark 1210 is displayed on the summary text portion. Meanwhile, in the above-described embodiment, a mark is displayed on the portion of the document corresponding to the summary text in order to provide the summary text. However, this is merely exemplary, and the text corresponding to the summary text may instead be displayed so as to be distinguished from other text. For example, the electronic device 100 may display the size, font, thickness, brightness, color, etc. of the text corresponding to the summary text differently from other text in the document.
Further, as shown in (b) of fig. 12, the electronic apparatus 100 may display a pop-up screen 1220 including summary text on a document (e.g., an article). At this time, the pop-up screen 1220 may be displayed at the top of the screen so as not to interfere with the display of the existing article. However, the pop-up screen 1220 may be resized by a user's manipulation, and a display position may be changed.
As yet another example, the electronic device 100 may display a full screen including the summary text, and the summary text may be stored in the memory 120 within the electronic device 100 or in an external cloud server.
Fig. 13 is a block diagram illustrating a configuration of an electronic device according to one embodiment.
Referring to fig. 13, the electronic device may include a processor 1300, which may implement at least one of the learning unit 1310 and the summary unit 1320 when executing the summary information generation program according to computer readable instructions. The processor 1300 of fig. 13 may correspond to the processor 140 of the electronic device 100 shown in fig. 2A-2B.
The learning unit 1310 may generate or train a document summary model with criteria for summarizing a document and generating summary information. The learning unit 1310 may generate a document summary model that can generate summary information using the collected learning data. As an example, the learning unit 1310 may generate, train, or update a document summary model that is used to determine criteria for generating summary information for a document using a document containing text as learning data.
At this time, the learning unit 1310 may train the document summary model to have criteria for generating different summary information according to the type of the document. Specifically, the learning unit 1310 may train the document summary model to generate summary information according to different criteria depending on whether an input document is an article, a paper, a recipe document, or the like. For example, if the document input as learning data is an article, the learning unit 1310 may train the document summary model to extract keywords from the article and generate summary information focused on conveying the results. When the document input as learning data is a paper, the learning unit 1310 may train the document summary model to extract words or sentences included in the abstract or the conclusion to generate summary information.
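The per-document-type criteria can be illustrated with a simple lookup sketch. In the disclosure these criteria are learned by the model rather than hard-coded, so the mapping and field names below are purely illustrative assumptions:

```python
def summary_criteria(doc_type):
    """Return per-type summarization criteria (illustrative mapping only;
    a trained document summary model would learn these distinctions)."""
    criteria = {
        "article": {"focus": "results", "unit": "keyword"},
        "paper": {"focus": "abstract_and_conclusion", "unit": "sentence"},
        "recipe": {"focus": "ingredients_and_steps", "unit": "keyword"},
    }
    # Fall back to generic sentence extraction for unknown document types.
    return criteria.get(doc_type, {"focus": "general", "unit": "sentence"})
```

A summarization pipeline could consult this mapping first and then apply the matching extraction strategy to the input document.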
The summarizing unit 1320 may use predetermined document data as input data for the learned document summary model and generate summary information about a predetermined document. For example, the summary unit 1320 may use document data including text as input data for the trained document summary model and generate summary information related to a document selected by the user.
At least a portion of the learning unit 1310 and at least a portion of the summary unit 1320 may be implemented as software modules or manufactured in the form of at least one hardware chip mounted in the electronic device. For example, at least one of the learning unit 1310 and the summarizing unit 1320 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or manufactured as part of a conventional general-purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU), and mounted on the various electronic devices or the document summarizing device 200 described above. Here, the dedicated hardware chip for artificial intelligence is a dedicated processor specialized for probability calculation, and has higher parallel processing performance than a conventional general-purpose processor, so that it can rapidly process operations in the artificial intelligence and machine learning fields. When the learning unit 1310 and the summary unit 1320 are implemented as software modules (or program modules including instructions), the software modules may be stored in a non-transitory computer-readable medium. In this case, the software modules may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the software modules may be provided by the OS, and the others may be provided by a predetermined application.
In this case, the learning unit 1310 and the summarizing unit 1320 may be mounted on one electronic device, or mounted on separate electronic devices, respectively. For example, one of the learning unit 1310 and the summarizing unit 1320 may be included in the electronic apparatus 100, and the other may be included in an external server. In addition, the learning unit 1310 may provide model information that it has constructed to the summarizing unit 1320 via a wired or wireless network, and data input to the summarizing unit 1320 may be provided to the learning unit 1310 as additional learning data.
Fig. 14A and 14B are block diagrams illustrating a configuration of a learning unit and a summarizing unit according to an embodiment.
Referring to fig. 14A, the learning unit 1310 may include a learning data acquisition unit 1310-1 and a model learning unit 1310-4. The learning unit 1310 may further include at least one of a learning data preprocessing unit 1310-2, a learning data selecting unit 1310-3, and a model evaluating unit 1310-5.
The learning data acquisition unit 1310-1 may acquire learning data necessary for the summary unit in order to generate summary information about a document. In an embodiment of the present disclosure, the learning data acquisition unit 1310-1 may acquire a document including text, such as a paper, an article, e-book content, or the like, as the learning data. The learning data may be data collected or tested by the learning unit 1310 or the manufacturer of the learning unit 1310.
The model learning unit 1310-4 may use the learning data to train the summary unit to learn how to summarize a predetermined document. For example, the model learning unit 1310-4 may train the summary unit to extract keywords based on the frequency of occurrence of words included in the document, the positions of the words, and the relationships among the words, and to generate summary information using the extracted keywords. In addition, the model learning unit 1310-4 may train the summary unit to generate summary information using a plurality of documents. In particular, the model learning unit 1310-4 may train the summary unit to generate summary information based on words commonly included in the plurality of documents.
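The frequency- and position-based keyword extraction can be sketched as follows. This is a minimal illustration; the position bonus for first-sentence words is a simple stand-in for the positional weighting described above, and the function name and parameters are assumptions:

```python
from collections import Counter

def extract_keywords(sentences, top_k=3, position_bonus=1.5):
    """Score words by frequency of occurrence, boosting words that
    appear in the first sentence (a toy form of positional weighting)."""
    counts = Counter()
    first_sentence_words = set()
    for i, sentence in enumerate(sentences):
        words = [w.strip(".,").lower() for w in sentence.split()]
        counts.update(words)
        if i == 0:
            first_sentence_words.update(words)
    scores = {w: c * (position_bonus if w in first_sentence_words else 1.0)
              for w, c in counts.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

A trained model would also exploit the relationships among words (e.g., co-occurrence), which this sketch omits for brevity.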
In particular, the model learning unit 1310-4 may train the summary unit through supervised learning using at least a portion of the learning data as a criterion. Alternatively, the model learning unit 1310-4 may train the summary unit through unsupervised learning, in which the summary unit discovers the criteria for generating summary information by learning on its own using the learning data without supervision. In addition, the model learning unit 1310-4 may train the summary unit through, for example, reinforcement learning using feedback on whether a result obtained through learning is correct. Further, the model learning unit 1310-4 may train the summary unit using a learning algorithm including, for example, error backpropagation or gradient descent.
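The supervised, gradient-descent training mentioned above can be illustrated with a toy sentence-importance scorer. This is a minimal sketch, not the disclosed model: the two features and their meaning are hypothetical, and a real summary model would be far larger:

```python
import math

def train_scorer(examples, labels, epochs=200, lr=0.5):
    """Fit a tiny logistic scorer by gradient descent on log-loss.
    examples: feature vectors per sentence; labels: 1 = keep in summary."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid prediction
            err = p - y                     # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Probability that the sentence with features x belongs in the summary."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The same loop structure scales up to neural summary models, where backpropagation computes the per-weight gradients that this two-weight example writes out by hand.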
Further, the model learning unit 1310-4 may learn selection criteria regarding which learning data should be used to generate summary information from the input data.
When there are a plurality of pre-constructed document summary models, the model learning unit 1310-4 may determine, as the model to be trained, a document summary model having a high correlation between the input learning data and its basic learning data. In this case, the basic learning data may be classified in advance according to data type, and the document summary models may be constructed in advance according to data type. For example, the basic learning data may be classified in advance by various criteria such as the region where the learning data was generated, the time at which the learning data was generated, the size of the learning data, the genre of the learning data, and the creator of the learning data. For example, the model learning unit 1310-4 may generate a first document summary model for generating summary information of an article and a second document summary model for generating summary information of a paper.
Once the model is trained, the model learning unit 1310-4 may store the learned document summary model. In this case, the model learning unit 1310-4 may store the learned document summary model in the memory 130 of the electronic device 100. Alternatively, the model learning unit 1310-4 may store the learned document summary model in the memory of a server connected to the electronic device 100 via a wired or wireless network.
The learning unit 1310 may further include a learning data preprocessing unit 1310-2 and a learning data selection unit 1310-3 to improve the results of the document summary model or to save the resources or time required for generating the document summary model.
The learning data preprocessing unit 1310-2 may preprocess the acquired data so that the acquired data may be used for learning to generate summary information. The learning data preprocessing unit 1310-2 may process the acquired data into a predetermined format, and thus the model learning unit 1310-4 may use the acquired data for learning to generate summary information.
The learning data selection unit 1310-3 may select the data required for learning from the data acquired by the learning data acquisition unit 1310-1 or the data preprocessed by the learning data preprocessing unit 1310-2. The selected learning data may be provided to the model learning unit 1310-4. The learning data selection unit 1310-3 may select the learning data required for learning from the acquired or preprocessed data according to a predetermined selection criterion. For example, the learning data selection unit 1310-3 may select only data related to text from the input document data as the learning data. Further, the learning data selection unit 1310-3 may select learning data according to a selection criterion predetermined through learning by the model learning unit 1310-4.
The learning unit 1310 may further include a model evaluation unit 1310-5 to improve the output results of the document summary model.
The model evaluation unit 1310-5 may input evaluation data to the document summary model, and if the output result for the evaluation data does not satisfy a predetermined criterion, the model evaluation unit 1310-5 may cause the model learning unit 1310-4 to learn again. In this case, the evaluation data may be predetermined data for evaluating the document summary model.
For example, if, among the output results of the learned document summary model for the evaluation data, the number or proportion of incorrect output results exceeds a predetermined threshold, the model evaluation unit 1310-5 may evaluate that the predetermined criterion is not satisfied.
Meanwhile, when there are a plurality of learned document summary models, the model evaluation unit 1310-5 may evaluate whether each learned document summary model satisfies the predetermined criterion, and determine a model satisfying the predetermined criterion as the final document summary model. In this case, when there are a plurality of models satisfying the predetermined criterion, the model evaluation unit 1310-5 may determine, as the final document summary model, any one model or a preset number of models in descending order of evaluation score.
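The threshold-and-rank selection step can be sketched as follows; the function name, default threshold, and caller-supplied scoring function are assumptions for illustration:

```python
def select_final_models(models, evaluate, threshold=0.8, max_models=1):
    """Keep models whose evaluation score meets the threshold, ordered
    by descending score, up to a preset number of final models."""
    scored = [(model, evaluate(model)) for model in models]
    # Discard candidates below the predetermined criterion.
    passing = [(model, s) for model, s in scored if s >= threshold]
    passing.sort(key=lambda pair: pair[1], reverse=True)
    return [model for model, _ in passing[:max_models]]
```

Here `evaluate` stands in for running the candidate model on the evaluation data and measuring the proportion of correct outputs.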
Referring to fig. 14A, the summary unit 1320 may include a summary data acquisition unit 1320-1 and a summary data providing unit 1320-4. The summary unit 1320 may further include at least one of a summary data preprocessing unit 1320-2, a summary data selecting unit 1320-3, and a model updating unit 1320-5 in a selective manner.
The summary data acquisition unit 1320-1 may acquire the document data necessary for generating the summary information. The summary data providing unit 1320-4 may generate the summary information by applying the data acquired by the summary data acquisition unit 1320-1 as an input value to the trained document summary model. The summary data providing unit 1320-4 may provide the summary information according to the type of the input document, and may apply the data preprocessed by the summary data preprocessing unit 1320-2 or selected by the summary data selection unit 1320-3 to the document summary model to obtain the summary information.
As an embodiment, the summary data providing unit 1320-4 may apply the document data including the text acquired by the summary data acquisition unit 1320-1 to the trained document summary model to generate the summary information.
The summary unit 1320 may include a summary data preprocessing unit 1320-2 and a summary data selecting unit 1320-3 to improve an output result of the document summary model or save resources or time for providing the output result.
The summary data preprocessing unit 1320-2 may preprocess the acquired data so that the acquired document data can be used for generating the summary information. That is, the summary data preprocessing unit 1320-2 may process the acquired data into a predetermined format so that the summary data providing unit 1320-4 can generate the summary information using it.
The summary data selection unit 1320-3 may select data necessary for generating the summary information from the data acquired by the summary data acquisition unit 1320-1 or the data preprocessed by the summary data preprocessing unit 1320-2. The selected data may be provided to the summary data providing unit 1320-4. The summary data selection unit 1320-3 may select some or all of the acquired or preprocessed data according to predetermined selection criteria for generating the summary information. Further, the summary data selection unit 1320-3 may select data according to a predetermined selection criterion through learning by the model learning unit 1310-4.
The model updating unit 1320-5 may control the document summary model to be updated based on an evaluation of the output result provided by the summary data providing unit 1320-4. For example, the model updating unit 1320-5 may provide the output result of the summary data providing unit 1320-4 to the model learning unit 1310-4, so that the model learning unit 1310-4 may additionally learn or update the document summary model.
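The pipeline formed by the acquisition, preprocessing, selection, and providing units can be sketched roughly as below. The class, the toy preprocessing and selection rules, and the callable summary model are illustrative assumptions, not the patent's implementation.

```python
class SummaryUnit:
    """Minimal sketch of the summary unit 1320; each method mirrors one
    of the sub-units described above."""

    def __init__(self, summary_model):
        self.model = summary_model          # trained document summary model

    def acquire(self, document):            # 1320-1: raw document data
        return document

    def preprocess(self, data):             # 1320-2: normalize into a fixed format
        return data.strip().lower()

    def select(self, data):                 # 1320-3: keep only the text needed
        return " ".join(w for w in data.split() if w.isalpha())

    def provide(self, document):            # 1320-4: apply the model to selected data
        return self.model(self.select(self.preprocess(self.acquire(document))))
```

A model update (1320-5) would then feed `provide`'s outputs back into the learning unit; that feedback loop is omitted here for brevity.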
Referring to fig. 14B, the external server (S) may learn a document summary model for generating summary information, and the electronic device 100 may generate the summary information based on the learning result of the server (S).
In this case, the model learning unit 1310-4 of the server (S) may perform the function of the learning unit 1310 as shown in fig. 13. The model learning unit 1310-4 of the server (S) may learn criteria on how to generate summary information.
Further, the summary data providing unit 1320-4 of the electronic device 100 may apply the document data selected by the summary data selecting unit 1320-3 to the document summary model generated by the server (S) to obtain the summary information of the document. Alternatively, the summary data providing unit 1320-4 of the electronic device 100 may receive the document summary model generated by the server (S) from the server (S) and generate the summary information using the received document summary model. In this case, the summary data providing unit 1320-4 of the electronic device 100 may apply the document data selected by the summary data selecting unit 1320-3 to the document summary model received from the server (S) to obtain summary information on the document.
FIG. 15 is a flow diagram of a method of inserting summary information according to one embodiment.
First, in step S1510, the electronic apparatus 100 may receive a keyword. At this time, the electronic apparatus 100 may display the first document in the first region, display a search window in the second region, and display the input keyword in the search window of the second region.
In step S1520, the electronic apparatus 100 may determine whether a search request for the keyword has been received. At this time, the search request for the keyword may be a user command for selecting a search icon included in the search window.
When a search request for the keyword is received, the electronic apparatus 100 may search a plurality of documents based on the keyword in step S1530. Here, the electronic apparatus 100 may search a plurality of documents stored in the electronic apparatus 100, but this is merely an example; it may also search a plurality of documents stored in an external search server (or a cloud server).
In step S1540, the electronic apparatus 100 may provide a plurality of search documents. Here, the electronic apparatus 100 may provide a plurality of search documents in the second area as a result of the search.
In step S1550, the electronic apparatus 100 may determine whether a user command requesting summary information of at least one of the plurality of documents has been input. At this time, the user command requesting summary information of at least one of the plurality of documents may be a user command for selecting at least one of the plurality of documents and dragging the selected document to a point in the first region where the first document is displayed.
In step S1560, the electronic apparatus 100 may acquire the summary information of the selected document by inputting the selected document into the artificial intelligence learning model trained to obtain summary information. Here, the artificial intelligence learning model is a model for acquiring summary information, and the summary information may be generated based on information about the document, summary setting information, user history information, and the like.
In step S1570, the electronic apparatus 100 may insert the summary information into another document. Specifically, the electronic device 100 may insert the summary information at a point in another document where a user command is input. At this time, the electronic apparatus 100 may distinguish the summary information from other texts and may provide the reference information together with the summary information.
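The flow of steps S1510 to S1570 can be condensed into a sketch like the following, where the `search` and `summarize` callables and the string-based document are stand-ins invented for illustration.

```python
def insert_summary(first_document, keyword, search, summarize, insert_at):
    """Sketch of FIG. 15 (S1510-S1570): search by keyword, summarize the
    selected result, and insert the summary at the drop point."""
    results = search(keyword)                  # S1530: search a plurality of documents
    selected = results[0]                      # S1550: the document the user drags
    summary = summarize(selected)              # S1560: AI model produces the summary
    # S1570: insert at the requested point, marked off from the surrounding text
    return first_document[:insert_at] + f"[{summary}]" + first_document[insert_at:]
```

The bracket marking stands in for the patent's requirement that the inserted summary be distinguished from the other text of the document.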
Fig. 16-19 are flow diagrams illustrating methods of a network system using an overview model according to various embodiments.
In fig. 16 to 19, a network system using a document summary model includes a first component 1601, 1701, 1801, 1901, a second component 1602, 1702, 1802, 1902, and a third component 1703.
Here, the first component 1601, 1701, 1801, 1901 may be the electronic device 100, and the second component 1602, 1702, 1802, 1902 may be a server storing a document summary model. Alternatively, the first component 1601, 1701, 1801, 1901 may be a general-purpose processor and the second component 1602, 1702, 1802, 1902 may be an artificial-intelligence-dedicated processor. Alternatively, the first component 1601, 1701, 1801, 1901 may be at least one application and the second component 1602, 1702, 1802, 1902 may be an operating system (OS).
That is, the second component 1602, 1702, 1802, 1902, as a component with more resources, may be more integrated, more dedicated, less delayed, or dominant in performance, and may thus process the many operations required to create, update, or apply an AI training model faster and more efficiently than the first component 1601, 1701, 1801, 1901.
In this case, an interface may be defined for transmitting and receiving data between the first component 1601, 1701, 1801, 1901 and the second component 1602, 1702, 1802, 1902.
For example, an Application Program Interface (API) may be defined that has argument values (or intermediate or transmitted values) to be applied to the artificial intelligence learning model. An API may be defined as a set of subroutines or functions that may be called in any one protocol (e.g., a protocol defined in the electronic device 100) for any processing of another protocol (e.g., a protocol defined in a server). That is, an environment may be provided in which operations of another protocol may be performed by either protocol through the API.
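A hypothetical API of this kind might look as follows. The function name, its argument names, and the toy sentence-truncation "model" are invented for illustration and are not defined by the patent; they only show argument values being handed across the interface to the component that applies the AI learning model.

```python
# Hypothetical interface exposed by the second component (OS/server side)
# and called by the first component (application/device side).
def request_summary(document: str, summary_setting: dict, user_history: dict) -> str:
    """Apply the argument values to a stand-in summary model: here, keep
    the first `length` sentences. `user_history` would further tune the
    model's parameters; it is accepted but unused in this toy sketch."""
    length = summary_setting.get("length", 3)   # e.g. desired number of sentences
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return ". ".join(sentences[:length]) + "."
```

Through such an interface, an operation defined in one protocol (the server's summarization) can be invoked from another (the device's application), as the paragraph above describes.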
Meanwhile, the third component 1703 may collect and provide other documents related to the document based on data received from at least one of the first component 1701 and the second component 1702. The third component 1703 may correspond to, for example, the document collection device 300 of FIG. 2C. At this time, the data received by the third component 1703 may be, for example, information on the document selected by the user.
In one embodiment, in fig. 16, in step S1610, the first component 1601 may receive a summary command for a document. Here, the summary command for the document may include, but is not limited to, an instruction to select a summary icon included in the document, an instruction to select at least one of a plurality of retrieved documents, and the like.
In step S1620, the first component 1601 may display a UI. Here, the UI may be used to generate the summary setting information, for example, to set the length or tone of the summary information.
In step S1630, the first component 1601 may obtain summary setting information and user history information. Here, the first component 1601 may obtain the summary setting information according to a user command input through the UI, and may obtain user history information including user profile information and usage history information.
In step S1640, the first component 1601 may transmit information about the document, the summary setting information, and the user history information to the second component 1602.
In step S1650, the second component 1602 may summarize the document based on the received information. Specifically, the second component 1602 may apply the information about the document as input data to the AI model to generate the summary information. Here, the second component 1602 may generate the summary information by setting parameters of the AI model based on the obtained summary setting information and user history information.
In step S1660, the second component 1602 may send the summary information to the first component 1601.
In step S1670, the first component 1601 may provide the generated summary information. Here, the first component 1601 may provide the generated summary information on a separate screen, and may provide a mark on the portion of the currently displayed document corresponding to the summary information.
In another embodiment, in fig. 17, the first component 1701 may receive a summary command for a document in step S1710 and display a UI in step S1720. In step S1730, the first component 1701 may acquire summary setting information through the UI and obtain user history information. Subsequently, in step S1740, the first component 1701 may transmit information on the document, the summary setting information, and the user history information to the second component 1702. Steps S1710 to S1740 of fig. 17 correspond to steps S1610 to S1640 of fig. 16, and duplicate description is omitted.
In step S1750, the second component 1702 may request the third component 1703 to search for another document related to the document. Here, the request may include the information about the document.
In step S1760, the third component 1703 may search for another document based on the information about the document. In this case, the other document found by the search is related to the document that served as the input of the search. For example, if the document is an article, the other document may be a related article or a follow-up article; if the document is a paper, the other document may be a paper in the same field.
In step S1770, the third component 1703 may provide the search result for the other document related to the document to the second component 1702.
In step S1780, the second component 1702 may summarize the document using the information about the document and the other documents related to the document. Specifically, the second component 1702 may obtain the summary information by applying the information about the document and the other related documents as input data to the learned AI learning model. Here, the artificial intelligence learning model may generate the summary information based on words (or sentences or phrases) included in the document and the other related documents. Further, the second component 1702 may generate the summary information by setting parameters of the artificial intelligence learning model based on the summary setting information and the user history information acquired from the first component 1701.
In step S1790, the second component 1702 may send the summary information to the first component 1701. In step S1795, the first component 1701 may provide the generated summary information.
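The interaction of steps S1750 to S1780, where related documents are fetched and summarized together, can be sketched as below. The word-intersection "model" is only a toy stand-in for summarizing based on content commonly included in the documents; all names are illustrative.

```python
def summarize_with_related(document, fetch_related, model):
    """Sketch of S1750-S1780 of FIG. 17: ask the third component for
    documents related to `document`, then summarize the combined input."""
    related = fetch_related(document)          # S1750-S1770, via the third component
    return model([document] + related)         # S1780: summarize over all inputs

def common_word_model(documents):
    """Toy stand-in for a model keying on words common to all documents."""
    word_sets = [set(d.split()) for d in documents]
    return " ".join(sorted(set.intersection(*word_sets)))
```

A real document summary model would of course operate on far richer features than shared words; the sketch only shows the data flow between the components.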
In another embodiment, in fig. 18, in step S1810, the first component 1801 may receive a search request. In particular, first component 1801 may receive a search request to search a plurality of documents based on a keyword entered by a user. Here, the keyword may be a keyword input in a search window of a web browser.
In step S1820, the first component 1801 may send information regarding the keyword to the second component 1802.
Here, in step S1830, the second component 1802 may search for a plurality of documents based on the keyword. In step S1840, the second component 1802 may transmit the plurality of searched documents to the first component 1801.
In step S1850, the first component 1801 may receive a command requesting summary information of at least one of the plurality of documents. In step S1860, the first component 1801 may transmit a summary command to the second component 1802. Here, the summary command may include information on the at least one document and information on the keyword. In addition, if summary setting information is obtained via the UI or user history information is acquired, the first component 1801 may transmit the summary setting information or the user history information to the second component 1802 together with the request command.
In step S1870, the second component 1802 may use an artificial intelligence learning model to obtain summary information about the keyword. Specifically, upon receiving a user command requesting summary information for at least one of the plurality of documents, the second component 1802 may input the at least one of the plurality of documents into an artificial intelligence learning model trained to obtain the summary information about the at least one document related to the keyword.
In step S1880, the second component 1802 may send the summary information to the first component 1801.
In step S1890, the first component 1801 may provide the received summary information.
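The server-side roles of the second component in fig. 18 (searching in S1830, summarizing in S1870) can be sketched as follows; the in-memory document list and the callable model are illustrative assumptions rather than the patent's architecture.

```python
class SecondComponent:
    """Sketch of the second component 1802: holds the document store and
    the trained summary model, and serves search and summary requests."""

    def __init__(self, documents, model):
        self.documents = documents
        self.model = model

    def search(self, keyword):                 # S1830: keyword search
        return [d for d in self.documents if keyword in d]

    def summarize(self, document, keyword):    # S1870: keyword-aware summary
        return self.model(document, keyword)
```

The first component would call `search` first (S1820/S1840) and then `summarize` for the document the user selects (S1860/S1880).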
As another embodiment, in fig. 19, the first component 1901 may display the first document in step S1910.
In step S1920, the first component 1901 may receive a keyword from the user. Then, in step S1930, the first component 1901 may transmit the keyword to the second component 1902.
In step S1940, the second component 1902 may search for a plurality of second documents based on the keyword. In step S1950, the second component 1902 may transmit the plurality of searched second documents to the first component 1901.
In step S1960, the first component 1901 may receive an insert command for inserting at least a portion of the plurality of second documents into the first document. Here, the insert command may be a command to select a part of the plurality of retrieved second documents and drag the selection to the position in the first document where the user desires to insert it.
In step S1970, the first component 1901 may send the at least one second document to the second component 1902.
In step S1980, the second component 1902 may use an artificial intelligence learning model to obtain the summary information. In particular, the second component 1902 may input at least one of the plurality of second documents into an artificial intelligence learning model trained to obtain summary information, so as to obtain the summary information about the at least one second document related to the keyword.
In step S1990, the second component 1902 may send the summary information to the first component 1901.
In step S1995, the first component 1901 may insert the summary information into the first document.
FIG. 20 is a flow diagram illustrating a method for a server providing summary information according to one embodiment.
In step S2010, the server may receive a search request for a keyword.
In step S2020, the server may search for a plurality of documents based on the keyword and provide the documents to the electronic apparatus 100.
In step S2030, the server may receive a user command to request summary information on at least one of the plurality of searched documents.
In step S2040, the server may input at least one of the plurality of documents into the learned AI model to obtain summary information, so as to obtain the summary information about the at least one document related to the keyword.
In step S2050, the server may provide the obtained summary information to the electronic apparatus 100.
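The server-side flow of steps S2010 to S2050 can be condensed into one sketch like the one below. The `pick` callable stands in for the user command of step S2030, and every name here is illustrative rather than taken from the patent.

```python
def server_provide_summary(documents, keyword, pick, model):
    """End-to-end sketch of FIG. 20: search (S2020), receive the user's
    selection (S2030), and summarize it with the AI model (S2040)."""
    found = [d for d in documents if keyword in d]   # S2020: keyword search
    selected = pick(found)                           # S2030: user picks a document
    return model(selected, keyword)                  # S2040: keyword-aware summary
```

The returned summary corresponds to what the server then sends to the electronic device 100 in step S2050.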
FIG. 21 is a flow diagram illustrating a method for an electronic device to provide summary information according to one embodiment.
In step S2110, the electronic apparatus 100 may receive a keyword while displaying the first document.
In step S2120, the electronic device 100 may receive a request for searching for a keyword.
In step S2130, the electronic apparatus 100 may search for a plurality of second documents based on the keywords and provide them.
In step S2140, the electronic apparatus 100 may receive a user instruction to insert summary information on at least one of the plurality of second documents into the first document.
In step S2150, the electronic device 100 may input at least one of the plurality of second documents into the AI learning model trained to obtain summary information, so as to obtain the summary information about the at least one second document related to the keyword.
In step S2160, the electronic device 100 may insert the obtained summary information into the first document.
The term "module" as used in this disclosure includes a unit made of hardware, software, or firmware, and may be used interchangeably with terms such as, for example, logic block, component, or circuit. A module may be an integrally constructed component, a minimum unit performing one or more functions, or a part thereof. For example, a module may be configured as an application-specific integrated circuit (ASIC).
Various embodiments of the present disclosure may be implemented as software including instructions stored on a storage medium readable by a machine (e.g., a computer). The machine is a device that calls the stored instructions from the storage medium and can operate according to the called instructions, and may include the electronic device (e.g., the electronic device 100) according to the embodiments. If the instructions are executed by a processor, the processor may perform the functions corresponding to the instructions directly, or by using other components under the control of the processor. The instructions may include code generated or executed by a compiler or an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, "non-transitory" means that the storage medium does not include a signal and is tangible, but does not distinguish whether data is stored permanently or temporarily in the storage medium.
According to one embodiment, a method according to the various embodiments disclosed herein may be provided as a computer program product. The computer program product may be traded between a seller and a buyer as a commodity. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or distributed online through an application store (e.g., PlayStore™). In the case of online distribution, at least a portion of the computer program product may be temporarily stored, or temporarily created, on a storage medium such as the memory of a manufacturer's server, a server of an application store, or a relay server.
Each component (such as a module or program) according to various embodiments may be composed of a single entity or a plurality of entities, and some of the above sub-components may be omitted, or other components may be further included in various embodiments.
Alternatively or additionally, some components (e.g., modules or programs) may be integrated into one entity to perform the same or similar functions performed by each respective component prior to integration. Operations performed by a module, program, or other component in accordance with various embodiments may be performed sequentially, in parallel, repeatedly, or heuristically, or at least some of the operations may be performed in a different order, or omitted, or another operation may be added.
While embodiments have been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims. Therefore, the scope is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present disclosure.

Claims (15)

1. A method of a server providing summary information using an artificial intelligence learning model, the method comprising:
in response to receiving a search request including a keyword, searching a plurality of documents based on the keyword;
in response to receiving a request for summary information of a document of the plurality of documents, using the document as input to obtain the summary information of the document from an artificial intelligence learning model trained to obtain the summary information of the document; and
providing summary information of the document to the electronic device.
2. The method of claim 1, wherein obtaining summary information for the document comprises: obtaining the summary information of the document based on a summary length set for the summary information.
3. The method of claim 2, wherein the summary length is based on user history information.
4. The method of claim 1, wherein the artificial intelligence learning model is trained to summarize a plurality of documents based on content commonly included in the plurality of documents.
5. The method of claim 1, wherein the artificial intelligence learning model is trained to summarize the document based on the entered keywords.
6. The method of claim 1, wherein the artificial intelligence learning model is trained to summarize the documents based on an index included in the documents.
7. The method of claim 1, further comprising:
in response to a user instruction to select at least one word from the summary information, obtaining additional information about the at least one word from at least one document of the plurality of documents, and providing the additional information.
8. A computer-readable medium having stored thereon a program for executing a method for providing summary information using an artificial intelligence learning model, the method comprising:
in response to receiving a search request including a keyword, searching a plurality of documents based on the keyword;
in response to receiving a request for summary information of a document of the plurality of documents, using the document as input to obtain the summary information of the document from an artificial intelligence learning model trained to obtain the summary information of the document; and
providing summary information of the document to the electronic device.
9. The computer-readable medium of claim 8, wherein the method further comprises displaying a UI for setting a summary length of the summary information,
wherein obtaining summary information of the document comprises: obtaining the summary information of the document based on the summary length set through the UI.
10. The computer-readable medium of claim 8,
wherein the summary length is based on user history information.
11. The computer-readable medium of claim 8, wherein the keyword is entered through a search window provided to a second region of a display screen when the document is provided to a first region of the display screen,
wherein the method further comprises providing the plurality of documents to a third region.
12. The computer-readable medium of claim 11, wherein the citation information regarding the document is inserted into the document along with the summary information.
13. The computer-readable medium of claim 8, wherein the artificial intelligence learning model is trained to summarize a plurality of documents based on content commonly included in the plurality of documents.
14. The computer-readable medium of claim 8, wherein the artificial intelligence learning model is trained to summarize the document based on the entered keywords.
15. The computer-readable medium of claim 8, wherein the artificial intelligence learning model is trained to summarize the documents based on an index included in the documents.
CN201880035705.3A 2017-08-01 2018-08-01 Apparatus and method for providing summary information using artificial intelligence model Pending CN110692061A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201762539686P 2017-08-01 2017-08-01
US62/539,686 2017-08-01
KR1020180007169A KR102542049B1 (en) 2017-08-01 2018-01-19 Apparatus and Method for providing a summarized information using a artificial intelligence model
KR10-2018-0007169 2018-01-19
PCT/KR2018/008759 WO2019027259A1 (en) 2017-08-01 2018-08-01 Apparatus and method for providing summarized information using an artificial intelligence model

Publications (1)

Publication Number Publication Date
CN110692061A true CN110692061A (en) 2020-01-14

Family

ID=65370158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880035705.3A Pending CN110692061A (en) 2017-08-01 2018-08-01 Apparatus and method for providing summary information using artificial intelligence model

Country Status (3)

Country Link
EP (1) EP3602334A4 (en)
KR (1) KR102542049B1 (en)
CN (1) CN110692061A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021147590A1 (en) * 2020-01-22 2021-07-29 同济大学 Safety assistance function-oriented vehicle-mounted head-up display system
US11449518B2 (en) 2020-04-08 2022-09-20 Capital One Services, Llc Neural network-based document searching system

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
KR102565149B1 (en) * 2020-05-27 2023-08-09 정치훈 Apparatus and method for providing summary of document
KR102532863B1 (en) * 2020-05-27 2023-05-17 정치훈 Apparatus and method for providing user interface for searching document using topic keyword
KR102519955B1 (en) * 2020-05-27 2023-04-10 정치훈 Apparatus and method for extracting of topic keyword
KR102378936B1 (en) * 2020-09-22 2022-03-28 심재우 System for managing reading and learning of digital documents
KR102424342B1 (en) * 2020-11-24 2022-07-25 울산과학기술원 Method and apparatus for generating thumbnail images
KR102575507B1 (en) * 2021-06-15 2023-09-08 주식회사 웨이커 Article writing soulution using artificial intelligence and device using the same
KR102343059B1 (en) * 2021-08-05 2021-12-27 주식회사 인피닉 Data collecting system for artificial intelligence machine learning, and device therefor
KR102548600B1 (en) * 2021-08-30 2023-06-27 계명대학교 산학협력단 Development of AI Based Surgery Result Report System and Method Using Voice Recognition Platform
KR102400767B1 (en) * 2022-02-08 2022-05-23 (주)에스투더블유 Method for collecting and preprocessing learning data of an artificial intelligence model to perform dark web document classification
KR102559891B1 (en) * 2022-04-06 2023-07-27 주식회사 유니코드 Apparatus, method and program for providing content creation integrated platform service that provides a recommended design determined using user content
KR102596191B1 (en) * 2022-10-17 2023-10-31 주식회사 아티피셜 소사이어티 Method for processing visualization of text based on artificial intelligence
KR102574619B1 (en) * 2023-02-03 2023-09-06 김용로 Advertizing system of journal channel using transforming of reasearch paper to moving picture

Citations (2)

Publication number Priority date Publication date Assignee Title
US20110099134A1 (en) * 2009-10-28 2011-04-28 Sanika Shirwadkar Method and System for Agent Based Summarization
CN106415535A (en) * 2014-04-14 2017-02-15 微软技术许可有限责任公司 Context-sensitive search using a deep learning model

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20060101012A1 (en) * 2004-11-11 2006-05-11 Chad Carson Search system presenting active abstracts including linked terms
US8335754B2 (en) * 2009-03-06 2012-12-18 Tagged, Inc. Representing a document using a semantic structure
KR20160058587A (en) * 2014-11-17 2016-05-25 삼성전자주식회사 Display apparatus and method for summarizing of document
KR101754473B1 (en) * 2015-07-01 2017-07-05 네이버 주식회사 Method and system for automatically summarizing documents to images and providing the image-based contents


Also Published As

Publication number Publication date
KR20190013426A (en) 2019-02-11
EP3602334A4 (en) 2020-03-11
KR102542049B1 (en) 2023-06-12
EP3602334A1 (en) 2020-02-05

Similar Documents

Publication Publication Date Title
KR102644088B1 (en) Apparatus and Method for providing a summarized information using a artificial intelligence model
CN110692061A (en) Apparatus and method for providing summary information using artificial intelligence model
US10956007B2 (en) Electronic device and method for providing search result thereof
CN111247536B (en) Electronic device for searching related image and control method thereof
KR20240006713A (en) Electronic device and Method for changing Chatbot
US11954150B2 (en) Electronic device and method for controlling the electronic device thereof
US11115359B2 (en) Method and apparatus for importance filtering a plurality of messages
CN110998507B (en) Electronic device and method for providing search results
CN111902812A (en) Electronic device and control method thereof
KR102469717B1 (en) Electronic device and method for controlling the electronic device thereof
EP3523932B1 (en) Method and apparatus for filtering a plurality of messages
US11531722B2 (en) Electronic device and control method therefor
CN111226193B (en) Electronic device and method for changing chat robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination