WO2018128214A1 - Machine learning based artificial intelligence emoticon service providing method - Google Patents

Machine learning based artificial intelligence emoticon service providing method

Info

Publication number
WO2018128214A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
emoticon
terminal
text
texts
Prior art date
Application number
PCT/KR2017/001192
Other languages
French (fr)
Inventor
Hyosub LEE
Original Assignee
Platfarm Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Platfarm Inc. filed Critical Platfarm Inc.
Publication of WO2018128214A1 publication Critical patent/WO2018128214A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/268Morphological analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/274Converting codes to words; Guess-ahead of partial word inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information

Definitions

  • the present invention relates to a machine learning based artificial intelligence emoticon service providing method.
  • a terminal such as a personal computer, a notebook computer, or a mobile phone may be configured to perform various functions.
  • the various functions include a data and voice communication function, a function of taking a picture or a moving image by means of a camera, a voice storing function, a function of reproducing a music file by means of a speaker system, and a function of displaying an image or a video.
  • Some terminals include an additional function of playing a game and other terminals may be implemented as a multimedia device.
  • a terminal may receive a broadcast or multicast signal to show a video or a television program.
  • the terminal may be classified into a mobile terminal (or a portable terminal) and a stationary terminal depending on whether it is movable. Further, the mobile terminal may be classified into a handheld terminal and a vehicle mount terminal depending on whether a user directly carries the terminal.
  • the terminal is implemented as a multimedia player having multiple functions such as a function of taking a photograph or a moving image, a function of reproducing music or moving image file, a function of playing a game, or a function of receiving a broadcast.
  • the present invention has been made in an effort to provide a machine learning based artificial intelligence emoticon service providing method to a user.
  • an object of the present invention is to provide a system and an application which analyze an emotion element and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time to a user.
  • an object of the present invention is to provide artificial intelligence technology convergence which recognizes context data (elements such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool to a user.
  • an object of the present invention is to provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text to a user.
  • an object of the present invention is to build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
  • An object of the present invention is to suggest a more accurate and convenient communication experience by combining messaging communication, which accounts for a majority of mobile communications, with artificial intelligence and design technology, and to innovate the usage experience of existing emoticons, which merely play an auxiliary role to text due to inconvenient usage and limited expressions, by analyzing conversation with artificial intelligence and recombining graphic components in real time, thereby providing unlimited expressions.
  • an object of the present invention is to develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
  • a machine learning based artificial intelligent emoticon providing method may include a first step of inputting at least one first content through a first terminal; a second step of transmitting the first content to a server, by means of the first terminal; a third step of classifying a text included in the first content by a predetermined unit to generate a plurality of second texts, by means of the server; a fourth step of filtering at least one third text which satisfies a predetermined condition, among the plurality of second texts, by means of the server; a fifth step of determining at least one first emoticon which matches the third text among a plurality of previously stored emoticons, by means of the server; and a sixth step of transmitting the first emoticon to a second terminal, by means of the server.
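  • For illustration only, the six-step flow described above may be summarized as the minimal Python sketch below; the toy tables, the stop-word rule, and all function names are assumptions introduced for this example and are not part of the disclosed method.

      # Minimal, self-contained sketch of the claimed six-step flow (illustrative assumptions only).
      EMOTICON_DB = {            # keyword -> previously stored emoticon (step 5 matching table)
          "hungry": "emoticon_hungry.png",
          "chicken": "emoticon_chicken.png",
          "late": "emoticon_late.png",
      }
      STOPWORDS = {"a", "the", "is", "am", "i", "on", "to", "let's"}   # texts treated as having no meaning

      def split_into_units(text):
          """Step 3: classify the text of the first content by a predetermined unit (words stand in for morphemes)."""
          return text.lower().replace(",", " ").split()

      def provide_emoticons(first_content, send_to_second_terminal):
          second_texts = split_into_units(first_content)                                   # step 3
          third_texts = [t for t in second_texts if t not in STOPWORDS]                    # step 4: filter
          first_emoticons = [EMOTICON_DB[t] for t in third_texts if t in EMOTICON_DB]      # step 5: match
          send_to_second_terminal(first_emoticons)                                         # step 6: transmit
          return first_emoticons

      if __name__ == "__main__":
          provide_emoticons("I am hungry, let's eat chicken", print)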
  • the first content may include text information, image information, moving image information, and voice information.
  • the server may extract a text included in the image information or the moving image information and classify the extracted text by a predetermined unit to generate the plurality of second texts
  • the server may convert the voice information into text information and classify the converted text information by the predetermined unit to generate the plurality of second texts.
  • the predetermined unit in the third step may be a morpheme unit and in the third step, at least a part of the plurality of second texts may be converted into a basic verb.
  • the predetermined condition in the fourth step may be whether the text is a text having meanings.
  • the third texts may be plural and the first emoticons which match the plurality of third texts may be plural.
  • the fifth step may include a step 5-1 of classifying the plurality of third texts by at least one category among a plurality of predetermined categories, by means of the server; a step 5-2 of counting the number of classified third texts for every category, by means of the server; a step 5-3 of assigning a result value obtained by counting the third texts for every category to the third text which belongs to each category, by means of the server; a step 5-4 of determining a plurality of first emoticons which matches the plurality of third texts among a plurality of previously stored emoticons, by means of the server; and a step 5-5 of determining an arrangement order of the plurality of first emoticons according to the result values of the plurality of third texts, by means of the server.
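  • One way to read steps 5-1 to 5-5 is sketched below: each third text is classified by category, the counts per category are assigned back to the texts as result values, and the matched emoticons are arranged according to those values. The category and emoticon tables are invented for illustration and are not part of the disclosure.

      from collections import Counter

      CATEGORY_OF = {"chicken": "hungry", "eat": "hungry", "late": "apologetic", "go": "excited"}   # hypothetical step 5-1 table
      EMOTICON_OF = {"chicken": "fried_chicken.png", "eat": "open_mouth.png", "late": "running.png", "go": "jumping.png"}

      def rank_matched_emoticons(third_texts):
          texts = [t for t in third_texts if t in CATEGORY_OF]
          counts = Counter(CATEGORY_OF[t] for t in texts)              # steps 5-1 and 5-2: classify and count per category
          scored = [(t, counts[CATEGORY_OF[t]]) for t in texts]        # step 5-3: assign the count as the result value
          scored.sort(key=lambda pair: pair[1], reverse=True)          # step 5-5: arrangement order by result value
          return [EMOTICON_OF[t] for t, _ in scored]                   # step 5-4: matched first emoticons, now ordered

      print(rank_matched_emoticons(["chicken", "eat", "late"]))        # emoticons of the 'hungry' category come first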
  • the method may further include: between the fifth step and the sixth step, a step 5-6 of transmitting the plurality of first emoticons and the arrangement order to the first terminal, by means of the server; a step 5-7 of displaying the plurality of first emoticons according to the arrangement order, by means of the first terminal; a step 5-8 of selecting a second emoticon among the plurality of first emoticons, by means of a user of the first terminal; and a step 5-9 of transmitting information on the second emoticon to the server, by means of the first terminal, and in the sixth step, the server may transmit the second emoticon to the second terminal.
  • the first to fifth steps may be additionally performed on the second content.
  • the first terminals may be plural, data related to the first to sixth steps between the plurality of first terminals and the server may be stored in the server, and the server may accumulate and use the stored data to perform machine learning.
  • the present invention may provide a machine learning based artificial intelligence emoticon service providing method to a user.
  • the present invention may provide, to a user, a system and an application, which analyze an emotion component and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time.
  • the present invention may provide artificial intelligence technology convergence which recognizes context data (components such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool, to a user.
  • the present invention may provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text, to a user.
  • the present invention may build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
  • the present invention may suggest a more accurate and convenient communication experience by combining messaging communication, which accounts for a majority of mobile communications, with artificial intelligence and design technology, and may innovate the usage experience of emoticons, which merely play an auxiliary role to text, by analyzing conversation with artificial intelligence and recombining graphic components in real time, thereby providing unlimited expressions.
  • the present invention may develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
  • a character emoticon expression is changed according to an input word or symbol (number) element in real time so that the user can check and express his/her emotion.
  • the present invention may reduce technical restrictions, such as delay generated during the calculating process, through a user-friendly UX design, and may convey the communication process to the user more intimately.
  • the present invention may consistently increase precision of the artificial intelligence by the machine learning.
  • FIG. 1 illustrates a block diagram of a machine learning based artificial intelligence emoticon service providing system suggested by the present invention.
  • FIG. 2 illustrates a block diagram of a terminal or a server which is applied to the present invention.
  • FIG. 3 illustrates a flowchart for explaining a machine learning based artificial intelligence emoticon service providing method suggested by the present invention.
  • FIG. 4 illustrates a specific example of steps of the machine learning based artificial intelligence emoticon service providing method explained in FIG. 3.
  • FIG. 5 illustrates a specific example in which an emoticon is expressed in accordance with a change in the emotion of a user or a change in the intensity of the emotion, in regard to the present invention.
  • FIG. 6 is a flowchart for explaining a method that arranges emoticons in the order of relevance to be selected by a user in regard to another exemplary embodiment of the present invention.
  • FIG. 7 is a flowchart for explaining a method that, when an additional message is input from a user before transmitting an emoticon, reflects the additional message in real time to allow the user to select a relevant emoticon, in regard to another exemplary embodiment of the present invention.
  • FIG. 8 illustrates a specific example which analyzes a morpheme through contents input by a user in FIG. 6 or FIG. 7.
  • FIG. 9 illustrates a specific example of the present invention which extracts a keyword based on the morpheme analyzed in FIG. 8.
  • FIG. 10 illustrates an example of a specific operation of matching a relevant emoticon in a database based on the keyword extracted in FIG. 9.
  • FIG. 11 illustrates a specific example which arranges emoticons matched in FIG. 10 in the order of relevance and displays the emoticons to the user.
  • FIG. 12 illustrates a specific example which allows a user to select one of a plurality of emoticons displayed in FIG. 11 to transmit the selected emoticon.
  • FIG. 13 illustrates an example in which, when an additional message is input from the user before transmitting an emoticon in FIG. 12, the additional message is reflected in real time.
  • FIGS. 14 and 15 illustrate a specific example which provides a machine learning based artificial intelligence emoticon service in the case of English, in regard to the present invention.
  • An emoticon (Emoji) market has spread around the millennial generation and is now widely used across generations. Six billion or more emoticons are sent per day around the world as of last year (Oxford University, 2015).
  • Emoticons have evolved to be simple to use and to provide various emotional expressions, evolving from an auxiliary means which adds pleasure to text communication into a new communication means which supplements the limitations of text.
  • the domestic emoticon (Emoji) market is estimated at 100 billion won, and when a derivative product market such as character figures is added, the domestic market is estimated at 200 billion won.
  • the domestic market shows a rapid growth of 30 to 40% annually in terms of usage and sales (in 2015).
  • the average number of emoticons which are used in KakaoTalk is 200 million per day and the monthly average of visitors to an emoticon store is 27 million. Seven out of ten users of KakaoTalk use the emoticon store more than once a month (in 2015).
  • Miitomo, launched by Nintendo in 2016, is a game which creates and utilizes avatars resembling the user and gained a good response with one million downloads within three days of its launch.
  • Keyboard applications such as Fleksy or SharingGIF, which do not include a character but classify GIF files that are popular on the Internet by tags so that appropriate results can be searched, are also frequently used by many users.
  • the present invention is to provide a machine learning based artificial intelligence emoticon service providing method to a user.
  • an object of the present invention is to provide a system and an application which analyze an emotion element and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time.
  • an object of the present invention is to provide artificial intelligence technology convergence which recognizes context data (components such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool to a user.
  • an object of the present invention is to provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text to a user.
  • an object of the present invention is to build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
  • An object of the present invention is to suggest a more accurate and convenient communication experience by combining messaging communication, which accounts for a majority of mobile communications, with artificial intelligence and design technology, and to innovate the usage experience of emoticons, which merely play an auxiliary role to text, by analyzing conversation with artificial intelligence and recombining graphic components in real time, thereby providing unlimited expressions.
  • an object of the present invention is to develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
  • FIG. 1 illustrates a block diagram of a machine learning based artificial intelligence emoticon service providing system suggested by the present invention.
  • a machine learning based artificial intelligence emoticon service providing system 1 suggested by the present invention may be divided into a content input unit 2, an emoticon generating unit 3, and an emoticon receiving unit 4.
  • the content input unit 2 provides a function of receiving contents such as a text, a voice, an image, or a moving image, from a user.
  • the emoticon generating unit 3 provides a function of catching a context based on contents input through the content input unit 2 to automatically convert the context into the most appropriate emoticon and transmit the emoticon.
  • the emoticon receiving unit 4 provides a function of receiving emoticon data from the emoticon generating unit 3 to display the emoticon data to the user.
  • the content input unit 2, the emoticon generating unit 3, and the emoticon receiving unit 4 may be terminals or servers.
  • the content input unit 2, the emoticon generating unit 3, and the emoticon receiving unit 4 may exchange data therebetween using short range communication or remote communication.
  • a short range communication technology applied herein may include Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, and wireless fidelity (Wi-Fi).
  • applied remote communication technologies may include code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single carrier frequency division multiple access (SC-FDMA).
  • the server or the terminal which may serve as the content input unit 2, the emoticon generating unit 3, or the emoticon receiving unit 4 will be described in detail.
  • FIG. 2 illustrates a block diagram of a terminal or a server which is applied to the present invention.
  • the terminal or the server 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supplying unit 190.
  • the components illustrated in FIG. 2 are not essential, so that a terminal or server 100 apparatus having more components or fewer components may be implemented.
  • the wireless communication unit 110 may include one or more modules which enable wireless communication between a terminal device and a wireless communication system or between a terminal device and a network where the terminal device is located.
  • the wireless communication unit 110 may include a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short range communication module 114, and a position information module 115.
  • the broadcast receiving module 111 receives a broadcasting signal and/or broadcast related information from an external broadcasting management server through a broadcasting channel.
  • the broadcasting channel may include a satellite channel and a ground wave channel.
  • the broadcasting management server may refer to a server which generates and transmits the broadcasting signal and/or the broadcast related information or a server which receives a previously generated broadcasting signal and/or broadcast related information to transmit the broadcasting signal and/or broadcast related information to the terminal.
  • the broadcasting signal may include not only a TV broadcasting signal, a radio broadcasting signal, and a data broadcasting signal, but also a broadcasting signal in which the data broadcasting signal is combined with the TV broadcasting signal or the radio broadcasting signal.
  • the broadcast related information may refer to information on a broadcasting channel, a broadcasting program, or a broadcasting service provider.
  • the broadcast related information may also be provided through a mobile communication network. In this case, the broadcast related information may be received by the mobile communication module 112.
  • the broadcast related information may be an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H).
  • the broadcast receiving module 111 may receive a digital broadcasting signal using a digital broadcasting system such as a digital multimedia broadcasting-terrestrial (DMB-T) system, a digital multimedia broadcasting-satellite (DMB-S) system, a media forward link only (MediaFLO) system, a digital video broadcast-handheld (DVB-H) system, or an integrated services digital broadcast-terrestrial (ISDB-T) system.
  • the broadcast receiving module 111 may also be configured to be suitable not only for the above-described digital broadcasting systems, but also for other broadcasting systems.
  • the broadcasting signal and/or broadcast related information which are received by the broadcast receiving module 111 may be stored in the memory 160.
  • the mobile communication module 112 transmits/receives a wireless signal to/from at least one of a base station, an external terminal, and a server in a mobile communication network.
  • the wireless signal may include a voice call signal, a video communication call signal, or various types of data in accordance with transmission/reception of a text/multimedia message.
  • the wireless internet module 113 refers to a module for wireless internet connection and may be installed inside or outside the terminal device.
  • as the wireless internet technology, wireless LAN (WLAN) (Wi-Fi), wireless broadband (WiBro), world interoperability for microwave access (WiMAX), or high speed downlink packet access (HSDPA) may be used.
  • the short range communication module 114 refers to a module for short range communication.
  • as a short range communication technology, Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, and the like may be used.
  • the position information module 115 is a module for obtaining a position of the terminal device, and a representative example thereof is a global positioning system (GPS) module.
  • the audio/video (A/V) input unit 120 is a device which inputs an audio signal or a video signal and may include a camera 121 and a microphone 122.
  • the camera 121 processes an image frame such as a still image or a moving image which is obtained by an image sensor in a video communication mode or a photographing mode.
  • the processed image frame may be displayed on the display unit 151.
  • the image frame which is processed in the camera 121 may be stored in the memory 160 or transmitted to the outside through the wireless communication unit 110. Two or more cameras 121 may be provided depending on a usage environment.
  • the microphone 122 receives an external sound signal in a call mode, a recording mode, or a voice recognizing mode and processes the sound signal into electrical voice data.
  • the processed voice data is converted into a form that can be transmitted to a mobile communication base station through the mobile communication module 112 and is then output.
  • various noise removal algorithms which remove noises generated while receiving an external sound signal may be implemented.
  • the user input unit 130 generates input data which allows a user to control an operation of the terminal.
  • the user input unit 130 may be configured by a keypad, a dome switch, a touch pad (static pressure/ static electricity), a jog wheel, a jog switch, and the like.
  • the sensing unit 140 detects a current status of the terminal device, such as an open/closed state of the terminal device, a position of the terminal device, whether a user is in contact with the terminal device, an orientation of the terminal, and acceleration/deceleration of the terminal, to generate a sensing signal for controlling an operation of the terminal device.
  • for example, when the terminal device is a slide phone type, the sensing unit 140 may sense whether the slide phone is open or closed.
  • the sensing unit 140 may also sense whether the power supplying unit 190 supplies power or whether the interface unit 170 is coupled to an external device.
  • the sensing unit 140 may include a proximity sensor 141.
  • the output unit 150 generates outputs related to sight, hearing, and touch, and includes the display unit 151, a sound output module 152, an alarm unit 153, a haptic module 154, and a projector module 155.
  • the display unit 151 displays (outputs) information which is processed in the terminal device. For example, when the terminal device is in a phone call mode, the display unit displays a UI (user interface) or a GUI (graphic user interface) related to the phone call. When the terminal device is in a video call mode or a photographing mode, the display unit displays a photographed and/or received image, a UI, or a GUI.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light emitting diode (OLED), a flexible display, and a 3D display.
  • Some of the above-mentioned displays may be configured as a transparent type or a light transmissive type display so as to see the outside therethrough. This may be called a transparent display and a representative example of the transparent display may be a transparent OLED (TOLED).
  • a rear side structure of the display unit 151 may also be configured as a light transmissive structure. According to this structure, the user may see an object located at a rear side of a terminal body through an area occupied by the display unit 151 of the terminal body.
  • Two or more display units 151 may be provided in accordance with an implementation type of the terminal device.
  • a plurality of display units may be disposed to be spaced apart from each other or to be integrated on one surface or may be disposed on different surfaces, respectively.
  • when the display unit 151 and a sensor which senses a touch operation (hereinafter, referred to as a "touch sensor") form a layered structure (hereinafter, referred to as a "touch screen"), the display unit 151 may be used as an input device in addition to an output device.
  • the touch sensor may be formed by a touch film, a touch sheet, or a touch pad.
  • the touch sensor may be configured to convert a change in a pressure which is applied to a specific part of the display unit 151 or an electrostatic capacity generated in a specific part of the display unit 151 into an electric input signal.
  • the touch sensor may be configured to detect not only a touched position and a touched area but also a pressure at the time of touch.
  • when there is a touch input to the touch sensor, corresponding signal(s) are sent to a touch controller.
  • the touch controller processes the signal(s) and then transmits corresponding data to the controller 180. By doing this, the controller 180 may confirm which area of the display unit 151 is touched.
  • the proximity sensor 141 may be disposed in an internal area of the terminal device which is enclosed by the touch screen or in the vicinity of the touch screen.
  • the proximity sensor refers to a sensor which detects whether there is an object approaching a predetermined detecting surface or an object present in the vicinity thereof using the force of an electromagnetic field or an infrared ray, without mechanical contact.
  • the proximity sensor has a longer lifespan and higher utilization than those of a contact type sensor.
  • examples of the proximity sensor include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror reflection type photoelectric sensor, a high frequency oscillation type proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor.
  • when the touch screen is an electrostatic type, the touch screen is configured to detect the proximity of a pointer by a change in the electric field according to the proximity of the pointer.
  • in this case, the touch screen may be classified as a proximity sensor.
  • a "proximity touch" refers to a behavior in which a pointer approaches the touch screen without being in contact with the touch screen so that the pointer is recognized as being located on the touch screen.
  • a "contact touch" refers to a behavior in which the pointer actually comes into contact with the touch screen.
  • the proximity sensor senses proximity touch and a proximity touch pattern (for example, a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, and a proximity touch movement status).
  • Information corresponding to the sensed proximity touch operation and proximity touch pattern may be output on the touch screen.
  • the color sensing sensor 142 is a sensor which provides a function of identifying color included in an external object.
  • the color sensing sensor 142 is a color identifying sensor configured by combining a photodiode and a color filter.
  • examples of the color sensing sensor include a monochromatic color sensor which measures a quantity of light having a specific color and an integrated type color sensor which identifies a half tone.
  • the monochromatic color sensor is formed by bonding a filter having a specific color transmitting characteristic onto a front surface of an amorphous Si photodiode, and incident light within a certain wavelength range passes through the color filter to reach the photodiode.
  • the integrated type color sensor is formed by bonding red (R), green (G), and blue (B) filters corresponding to three primary colors of light onto front surfaces of three photodiodes integrated on one substrate.
  • the color sensing sensor 142 may provide a function of sensing at least one color included in an object photographed by the camera 121.
  • the display unit 151 may provide a light output function which diverges light to the outside.
  • the light output function in the terminal or server 100 is provided as a flashlight function.
  • the above-described light output function may be provided by a structure using an LED.
  • the sound output module 152 may output audio data which is received from the wireless communication unit 110 in the call signal receiving mode, the phone call mode, the recording mode, the voice recognizing mode, or the broadcast receiving mode or stored in the memory 160.
  • the sound output module 152 outputs a sound signal related to a function (for example, a call signal reception sound or a message reception sound) performed in the terminal.
  • the sound output module 152 may include a receiver, a speaker, a buzzer, and the like.
  • the alarm unit 153 outputs a signal for notifying that an event of the terminal device is generated. Examples of the event generated in the terminal device include call signal reception, message reception, key signal input, and touch input.
  • the alarm unit 153 may output another type of signal other than the video signal or the audio signal, for example, a signal for notifying that the event is generated, by vibration.
  • the video signal or the audio signal may be output through the display unit 151 or the sound output module 152, so that the display unit 151 or the sound output module 152 may also be classified as a part of the alarm unit 153.
  • the haptic module 154 generates various tactile effects that the user may feel.
  • a representative example of the tactile effect generated by the haptic module is vibration.
  • An intensity and a pattern of the vibration generated by the haptic module 154 may be controlled. For example, different vibrations may be combined to be output or sequentially output.
  • the haptic module 154 may generate various tactile effects such as pin arrangement perpendicular to a contacted skin surface, an injecting force or a suction force of air through an injection port or a suction port, brush of a skin surface, contact with an electrode, effect by stimulation of electromagnetic force, and effect by reproducing a thermal feedback using a heat absorbing or heat generating element.
  • the haptic module 154 may be implemented not only to transmit a tactile effect through direct contact, but also to allow the user to feel the tactile effect through a muscular sense of a finger or an arm. Two or more haptic modules 154 may be provided according to a configuring aspect of the portable terminal.
  • the projector module 155 is a component which performs an image projection function using the terminal device and displays an image which is the same as or at least partially different from the image displayed on the display unit 151 on an external screen or a wall in accordance with a control signal of the controller 180.
  • the projector module 155 may include a light source (not illustrated) which generates light (as an example, laser light) to output an image to the outside, an image generating unit (not illustrated) which generates an image to be output to the outside using light generated by the light source, and a lens (not illustrated) which enlarges and outputs the image at a predetermined focal distance to the outside.
  • the projector module 155 may include a device (not illustrated) which mechanically moves the lens or the entire modules to adjust an image projecting direction.
  • the projector module 155 may be classified into a cathode ray tube (CRT) module, a liquid crystal display (LCD) module, and a digital light processing (DLP) module depending on a device type of the display unit. Specifically, in the DLP module, light generated in the light source is reflected by a digital micromirror device (DMD) chip to enlarge and project the generated image, which is advantageous for reducing the size of the projector module 155.
  • the projector module 155 may be provided at a side, a front side, or a rear side of the terminal device in a length direction. However, it is needless to say that the projector module 155 may be provided at any position of the terminal device as needed.
  • the memory unit 160 may store a program for processing and controlling the controller 180 and perform a function for temporarily storing data to be input/output (for example, a contact list, a message, audio, a still image, or a moving image).
  • the memory unit 160 may also store usage frequency (for example, usage frequency of the phone book, the message, and the multimedia) for the data. Further, the memory unit 160 also stores data related to various patterns of vibration and sound which are output at the time of touch input on the touch screen.
  • the memory 160 may include at least one storing medium of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, an SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
  • the terminal device may operate in association with a web storage which performs a storage function of the memory 160 on the Internet.
  • the interface unit 170 serves as a passage to all external equipment which is connected to the terminal device.
  • the interface unit 170 receives data or power from the external equipment to transmit the data or power to each component in the terminal device or transmits the data in the terminal device to the external equipment.
  • the interface unit may include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port connecting devices with identification modules, an audio input/output (I/O) port, a video input/output (I/O) port, an earphone port, and the like.
  • the identification module is a chip which stores various information for authenticating a permission of the terminal device and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like.
  • a device in which the identification module is provided (hereinafter, identification device) may be manufactured as a smart card. Therefore, the identification device may be connected to the terminal through a port.
  • when the mobile terminal is connected to an external cradle, the interface unit may serve as a passage through which power from the cradle is supplied to the mobile terminal or a passage through which various command signals input from the cradle by the user are transmitted to the mobile terminal.
  • the various command signals or the power input from the cradle may also operate as a signal for recognizing that the mobile terminal is correctly mounted on the cradle.
  • the controller 180 generally controls an overall operation of the terminal device.
  • the controller 180 performs related control and process for voice call, data communication, and video call.
  • the control unit 180 may include a multimedia module 181 for reproducing a multimedia.
  • the multimedia module 181 may be implemented in the control unit 180 or separately implemented from the control unit 180.
  • the control unit 180 may perform a pattern recognition process for recognizing a handwriting input or a drawing input performed on the touch screen as characters and images, respectively.
  • the power supplying unit 190 is applied with external power and internal power by the control of the controller 180 to supply power required for operations of the components.
  • various exemplary embodiments described herein may be implemented in a recording medium which is readable by a computer or other similar devices using software, hardware, or a combination thereof.
  • the exemplary embodiment described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electric units for performing other functions.
  • in some cases, the exemplary embodiments described herein may be implemented by the controller 180 itself.
  • exemplary embodiments such as procedures and functions described in this specification may be implemented by separate software modules.
  • the software modules may perform one or more functions and operations described in this specification.
  • a software code may be implemented by a software application which is written by an appropriate program language.
  • the software code may be stored in the memory 160 and executed by the controller 180.
  • a specific function of the present invention will be described based on the terminal or server 100 elements of the content input unit 2, the emoticon generating unit 3, and the emoticon receiving unit 4 which configure the above-described emoticon service providing system 1.
  • the present invention suggests a structure which, after compiling, converts into native APIs corresponding to iOS and Android, respectively, and efficiently responds to various devices.
  • the present invention suggests a system which stores chatting data of the user as analyzed key tokens in a DB and allows the user data accumulated in the DB to contribute again to enhancing the precision of the word vector based artificial intelligence, so that the precision of the artificial intelligence increases as the usage increases (word vector based machine learning).
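  • The sketch below shows one possible form of that loop: analyzed key tokens are accumulated in a small SQLite table, and the accumulated data is periodically used to retrain a word-vector model. gensim's Word2Vec and the table layout are assumptions chosen for illustration; the patent does not name a specific library or model.

      import sqlite3
      from gensim.models import Word2Vec   # example word-vector library, not mandated by the patent

      db = sqlite3.connect("chat_tokens.db")
      db.execute("CREATE TABLE IF NOT EXISTS key_tokens (user_id TEXT, tokens TEXT)")

      def store_key_tokens(user_id, tokens):
          """Store the analyzed key tokens of one message in the DB."""
          db.execute("INSERT INTO key_tokens VALUES (?, ?)", (user_id, " ".join(tokens)))
          db.commit()

      def retrain_word_vectors():
          """Re-learn word vectors from all accumulated key tokens (simple batch retraining)."""
          rows = db.execute("SELECT tokens FROM key_tokens").fetchall()
          sentences = [row[0].split() for row in rows]
          return Word2Vec(sentences, vector_size=100, window=3, min_count=1)

      store_key_tokens("user-1", ["chicken", "eat", "go"])
      store_key_tokens("user-1", ["late", "sorry", "bus"])
      model = retrain_word_vectors()
      print(model.wv.most_similar("chicken", topn=2))   # precision improves as more usage data accumulates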
  • a character emoticon expression is changed according to an input word or symbol (number) element in real time so that the user can check and express his/her emotion.
  • a technical restriction such as delay which is generated during a calculating process is reduced by a user friendly UX design and a communication process may be more intimately transmitted to the user.
  • precision of the artificial intelligence may be consistently increased by the machine learning.
  • FIG. 3 illustrates a flowchart for explaining a machine learning based artificial intelligence emoticon service providing method suggested by the present invention.
  • a step S110 of receiving contents such as a text, a voice, an image, or a moving image from a user, by means of a terminal or server 100 of the content input unit 2 according to the present invention is performed first.
  • the terminal or server 100 of the emoticon generating unit 3 analyzes a morpheme based on the input contents (S120).
  • in step S120, the controller 180 of the terminal or server 100 performs an operation of converting the contents into key tokens through an operation which analyzes the morphemes and converts an expressed word into its basic verb form.
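  • As a concrete illustration of step S120, the sketch below uses KoNLPy's Okt analyzer, which can split Korean text into morphemes and normalize inflected verbs to their basic form through its stem option. The patent does not name any particular analyzer, so this choice and the part-of-speech filter are assumptions.

      from konlpy.tag import Okt   # example Korean morpheme analyzer (requires Java); an illustrative choice only

      okt = Okt()

      def to_key_tokens(message):
          # Morpheme analysis; stem=True converts inflected verbs and adjectives to their basic form.
          morphemes = okt.pos(message, norm=True, stem=True)
          # Keep only parts of speech that are likely to carry meaning.
          keep = {"Noun", "Verb", "Adjective"}
          return [word for word, tag in morphemes if tag in keep]

      print(to_key_tokens("치킨 먹으러 가자ㅋㅋ"))   # expected to yield tokens like ['치킨', '먹다', '가다']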
  • after step S120, the terminal or server 100 of the emoticon generating unit 3 performs an operation of analyzing the context of the contents input in step S110 through verbs, nouns, adjectives, and punctuation marks (S130).
  • after step S130, the terminal or server 100 of the emoticon generating unit 3 performs an operation of matching an emoticon to the key token information (S140).
  • in step S140, the control unit 180 of the terminal or server 100 may apply individual emoticon tags or use a plurality of emoticons through an emoticon category.
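  • A minimal way to express "individual emoticon tags or an emoticon category" is a double index, sketched below with invented metadata: each emoticon carries its own tags and also belongs to a category, so a key token can match either way.

      # Hypothetical emoticon metadata: per-emoticon tags plus a category grouping.
      EMOTICONS = {
          "fried_chicken.png": {"tags": {"chicken", "hungry"}, "category": "when the user is hungry"},
          "full_belly.png":    {"tags": {"full", "eat"},       "category": "when the user is full"},
          "jumping.png":       {"tags": {"go", "excited"},     "category": "when the user is excited"},
      }

      def match_by_tag_or_category(key_tokens, category=None):
          matched = []
          for name, meta in EMOTICONS.items():
              if meta["tags"] & set(key_tokens):                 # individual emoticon tag match
                  matched.append(name)
              elif category and meta["category"] == category:    # whole-category match
                  matched.append(name)
          return matched

      print(match_by_tag_or_category(["chicken", "eat"], category="when the user is excited"))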
  • after step S140, when the emoticon matching operation is completed, the terminal or server 100 of the emoticon generating unit 3 transmits the determined emoticon to the terminal or server 100 of the emoticon receiving unit 4 (S150).
  • the present invention provides a smart application which analyzes an emotion and contexts of a text or a voice message to automatically transmit emoticons.
  • FIG. 4 illustrates a specific example of steps of the machine learning based artificial intelligence emoticon service providing method explained in FIG. 3.
  • in step S110, the user inputs contents of "Let's eat chikenkichikchikeeeenchiken".
  • in step S120, the terminal or server 100 of the emoticon generating unit 3 extracts "chicken", "chikeee", "en", "eat", and "let's".
  • in step S120, an operation of converting the contents into basic verb forms is also performed, and "eat" is converted into "eatte" and "eat" is converted into "go".
  • in step S140, "chicken" obtained by analyzing the morphemes matches the category "when the user is hungry", "eat" matches the categories "when the user is hungry" and "when the user is full", and "go" matches the category "when the user is excited".
  • in step S140, a status of "76% joy", "42% sadness", "50% anger", "12% fear", and "22% surprise" may be extracted through API analysis of the key tokens extracted in step S120.
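  • Such percentages could come back from an emotion-analysis API as a simple score map; the sketch below only shows how a response of that shape might be reduced to the dominant emotions used for matching. The response format and the threshold are assumptions, since the patent does not specify the API.

      # Hypothetical response of an emotion-analysis API for the key tokens of FIG. 4.
      emotion_scores = {"joy": 0.76, "sadness": 0.42, "anger": 0.50, "fear": 0.12, "surprise": 0.22}

      def dominant_emotions(scores, threshold=0.5):
          """Keep emotions whose score is at or above the threshold, strongest first."""
          strong = [(name, value) for name, value in scores.items() if value >= threshold]
          return sorted(strong, key=lambda pair: pair[1], reverse=True)

      print(dominant_emotions(emotion_scores))   # [('joy', 0.76), ('anger', 0.5)]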
  • the terminal or server 100 of the emoticon generating unit 3 transmits the determined emoticon to the terminal or server 100 of the emoticon receiving unit 4.
  • images which match the extracted keyword are generated and combined in accordance with a predetermined rule to generate an emoticon.
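  • Combining pre-made graphic components into one emoticon can be done with ordinary image compositing; the sketch below uses Pillow's alpha compositing as one possible realization, with the file names and the layering rule invented for illustration.

      from PIL import Image   # Pillow; one possible tool, not specified by the patent

      # Hypothetical layering rule: body first, then a face matching the emotion, then an intensity effect.
      LAYER_ORDER = ["body_base.png", "face_joy.png", "effect_sparkle.png"]

      def compose_emoticon(layer_files, size=(256, 256)):
          canvas = Image.new("RGBA", size, (0, 0, 0, 0))
          for path in layer_files:
              layer = Image.open(path).convert("RGBA").resize(size)
              canvas = Image.alpha_composite(canvas, layer)   # stack each component in order
          return canvas

      if __name__ == "__main__":
          compose_emoticon(LAYER_ORDER).save("generated_emoticon.png")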
  • in step S150, various variables are combined so that emotions may be expressed by an infinite number of emoticons.
  • the user may experience convenience when the emotion and the situation of the user are converted into the most appropriate emoticon through accurate analysis and pleasure of communication through which emotions are shared.
  • since the emoticon is generated by reflecting the emotion of the user, and the entire context of the conversation, the used keywords, and the contents are called from the cloud, the emoticon may be provided in accordance with the situation every time.
  • image processing is applied to the result so that even the same emoticon may be expressed differently depending on the intensity of the emotion, and the user may perform more delicate communication.
  • FIG. 5 illustrates a specific example in which an emoticon is expressed in accordance with a change in the emotion of a user or a change in the intensity of the emotion, in regard to the present invention.
  • FIGS. 5A, 5B, and 5C illustrate examples in which emoticons whose color and size are changed in accordance with a change in the emotion of the user or a change in the intensity of the emotion are represented.
  • a method of arranging emoticons in the order of relevance to allow the user to select the emoticon may be provided.
  • FIG. 6 is a flowchart for explaining a method that arranges emoticons in the order of relevance to be selected by a user in regard to another exemplary embodiment of the present invention.
  • a step S110 of receiving contents such as a text, a voice, an image, or a moving image from a user, by means of a terminal or server 100 of the content input unit 2 according to the present invention is performed first.
  • the terminal or server 100 of the emoticon generating unit 3 analyzes a morpheme based on the input contents (S120).
  • after step S120, the terminal or server 100 of the emoticon generating unit 3 performs an operation of analyzing the context of the contents input in step S110 through verbs, nouns, adjectives, and punctuation marks (S130).
  • the terminal or server 100 of the emoticon generating unit 3 performs an operation of matching the emoticon and the key token information, but differently from the above-described step S140, it performs a step S210 of calculating relevance with the emoticons in the database (DB) to match the emoticon.
  • the terminal or server 100 of the emoticon generating unit 3 arranges the emoticons in the order of relevance to provide the emoticon to the user of the terminal or server 100 of the content input unit 2 (S220).
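  • Steps S210 and S220 amount to scoring each stored emoticon against the extracted keywords and sorting by that score. The sketch below uses a simple keyword-overlap (Jaccard) score as a stand-in for the word-vector relevance the patent envisions, so the scoring rule and the database contents are assumptions.

      # Hypothetical database of emoticons with descriptive keywords.
      EMOTICON_DB = {
          "crying.png":  {"wrong", "sad", "late"},
          "running.png": {"late", "hurry", "bus"},
          "shrug.png":   {"how", "confused"},
          "party.png":   {"happy", "celebrate"},
      }

      def rank_by_relevance(keywords):
          """S210: compute a relevance score per emoticon; S220: arrange in descending order of relevance."""
          keywords = set(keywords)
          scored = []
          for name, tags in EMOTICON_DB.items():
              score = len(keywords & tags) / len(keywords | tags)   # keyword overlap as a stand-in for relevance
              scored.append((name, round(score, 2)))
          return sorted(scored, key=lambda pair: pair[1], reverse=True)

      print(rank_by_relevance(["wrong", "late", "how"]))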
  • the user of the terminal or server 100 of the content input unit 2 selects a specific emoticon among emoticons arranged in the order of relevance (S230), and the selected emoticon is transmitted to the terminal or server 100 of the emoticon receiving unit 4 (S150).
  • the user may be highly likely to select the emoticon having high relevance which is arranged in the front side among the arranged emoticons.
  • an event in which a context of the contents of the user is changed may be generated.
  • FIG. 7 is a flowchart for explaining a method that, when an additional message is input by a user before transmitting an emoticon, reflects the additional message in real time to allow the user to select a relevant emoticon, in regard to another exemplary embodiment of the present invention.
  • Steps S110 to S130 and S210 of FIG. 7 correspond to steps S110 to S130 and S210 described in FIG. 6, so that the description thereof will be omitted for the sake of simplicity of the specification.
  • the process of FIG. 7 further includes a step S310 of determining whether any one of a text, a voice, an image, and a moving image is additionally added by means of the terminal or server 100 of the content input unit 2.
  • when there is no additionally input content in step S310, the same process as in FIG. 6 is performed.
  • when content is additionally input in step S310, the terminal or server 100 of the emoticon generating unit 3 performs a step of analyzing a morpheme based on the additionally input content (S320) and an operation of analyzing a context of the additionally input content through verbs, nouns, adjectives, and punctuation marks (S330).
  • the terminal or server 100 of the emoticon generating unit 3 matches the emoticon in consideration of the added keyword (S340), reflects the matching result of step S340 and the matching result of step S210 in real time, and arranges the emoticons in the order of relevance to transmit them to the terminal or server 100 of the content input unit 2 (S350).
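  • When an additional message arrives before the emoticon is sent, the earlier relevance result (S210) can simply be merged with the result computed for the new keywords (S340) and the list re-sorted (S350). The merge rule below, keeping the higher of the two scores, is only one possible choice and is not prescribed by the patent.

      def merge_rankings(previous_scores, new_scores):
          """Combine the S210 result with the S340 result and re-sort for S350."""
          combined = dict(previous_scores)
          for name, score in new_scores.items():
              combined[name] = max(combined.get(name, 0.0), score)   # assumed rule: keep the higher relevance
          return sorted(combined.items(), key=lambda pair: pair[1], reverse=True)

      previous = {"crying.png": 0.64, "running.png": 0.71, "shrug.png": 0.59}      # from step S210
      additional = {"laughing.png": 0.92, "running.png": 0.63}                     # from step S340
      print(merge_rankings(previous, additional))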
  • the user of the terminal or server 100 of the content input unit 2 selects a specific emoticon among emoticons arranged in the order of relevance (S230), and the selected emoticon is transmitted to the terminal or server 100 of the emoticon receiving unit 4 (S150).
  • individual steps described in FIGS. 6 and 7 will be described in detail with reference to the drawings.
  • FIG. 8 illustrates a specific example which analyzes a morpheme through contents input by a user in FIG. 6 or 7.
  • FIG. 8 illustrates a specific example of step S120 of FIG. 6 or 7.
  • the terminal or server 100 of the emoticon generating unit 3 may analyze the morphemes "on", "way", "bus", "wrong", "got", "today", "late", "how", "do", and "?".
  • FIG. 9 illustrates a specific example of the present invention which extracts a keyword based on the morphemes analyzed in FIG. 8.
  • FIG. 9 illustrates a specific example of step S130 of FIG. 6 or 7.
  • the terminal or server 100 of the emoticon generating unit 3 selects morphemes having a strong semantic element, in terms of emotion, situation, and punctuation marks, from among "on", "way", "bus", "wrong", "got", "today", "late", "how", "do", and "?".
  • FIG. 10 illustrates an example of a specific operation of matching a relevant emoticon in a database with the keyword extracted in FIG. 9.
  • FIG. 10 illustrates a specific example of step S210 of FIG. 6 or 7.
  • a plurality of emoticons corresponding to "wrong", "late", and "how", which are the morphemes selected in FIG. 9, is matched.
  • the plurality of emoticons may be arranged in the order of relevance.
  • FIG. 11 illustrates a specific example which arranges emoticons matched in FIG. 10 in the order of relevance and displays the emoticons to the user.
  • FIG. 11 illustrates a specific example of step S220 of FIG. 6 or step S350 of FIG. 7.
  • the six emoticons may have relevance scores of 0.64, 0.63, 0.62, 0.71, 0.92, and 0.59.
  • FIG. 12 illustrates a specific example which allows a user to select one of a plurality of emoticons displayed in FIG. 11 to transmit the selected emoticon.
  • the emoticons are arranged in descending order of the relevance scores calculated in FIG. 11 (0.92, 0.71, 0.64, 0.63, 0.62, and 0.59) to be displayed in the terminal or server 100 of the content input unit 2, and the user may select a specific emoticon. Further, in step S150, the selected emoticon is transmitted to the terminal or server 100 of the emoticon receiving unit 4.
  • FIG. 13 illustrates an example in which, after step S210 described in FIG. 7, a step S310 of determining whether any one of a text, a voice, an image, and a moving image is additionally added by means of the terminal or server 100 of the content input unit 2 is further included.
  • the terminal or server 100 of the emoticon generating unit 3 performs a step S320 of analyzing a morpheme based on the additionally input contents ("Because I waited all the time, it serves you right, lol") and an operation of analyzing a context of the additionally input content through the verbs, nouns, adjectives, and punctuation marks (S330), and matches the emoticon in consideration of the added keyword (S340). Further, the terminal or server 100 of the emoticon generating unit 3 reflects the matching result in step S340 and the matching result through the previous step S210 in real time and arranges the emoticons in the order of relevance to transmit the emoticon to the terminal or server 100 of the content input unit 2 (S350).
  • the user of the terminal or server 100 of the content input unit 2 reflects the matching result in step S340 and the matching result through the previous step S210 in real time to select a specific emoticon among emoticons arranged in the order of relevance (S230), and transmits the selected emoticon to the terminal or server 100 of the emoticon receiving unit 4 (S150).
  • FIGS. 14 and 15 illustrate a specific example which provides a machine learning based artificial intelligence emoticon service in the case of English, in regard to the present invention.
  • In FIG. 14, when the user of the terminal or server 100 of the content input unit 2 inputs a text 210 of "Yes, I'm on my-" through the display unit 151, a specific example in which a plurality of relevant emoticons 220 is displayed on the display unit 151 according to the method of FIG. 3, 6, or 7 is illustrated.
  • In FIG. 15, when the user of the terminal or server 100 of the content input unit 2 inputs a text 230 of "it seems to be a little late...because I go-" through the display unit 151, a specific example in which a plurality of relevant emoticons 240 is displayed on the display unit 151 according to the method of FIG. 3, 6, or 7 is illustrated.
  • the machine learning based artificial intelligence emoticon service providing system 1 may be utilized as a system which does not store an emoticon image in a device or an OS, but is managed in a cloud to be flexibly provided in real time in accordance with a usage context.
  • the machine learning based artificial intelligence emoticon service providing system 1 may be utilized as a system which classifies a data type (emotion, situation, or information) of a text input to a smart phone (terminal) to be automatically converted into a graphic image according to a separate modeling principle.
  • the machine learning based artificial intelligence emoticon service providing system 1 may be utilized as a system which substitutes a morpheme detected from a text of an instant message and a semantic element into an indirect advertising image to provide an advertising service based thereon.
  • the machine learning based artificial intelligence emoticon service providing system 1 may be utilized as a system which recognizes a situation and an emotion in a voice message transmitted while chatting to substitute the situation and emotion into an emoticon and adds a voice message to the emoticon to transmit the emoticon and the voice message together.
  • a machine learning based artificial intelligence emoticon service method may be provided to the user.
  • the present invention may provide a system and an application which analyze an emotion element and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time.
  • the present invention may provide artificial intelligence technology convergence which recognizes context data (components such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool to a user.
  • the present invention may provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text to a user.
  • the present invention may build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
  • the present invention may suggest more accurate and convenient communication experience by combining a messaging communication which accounts for a majority of mobile communications and an artificial intelligence and design technology and innovate a usage experience of an emoticon which just plays an auxiliary role of a text to analyze conversation by artificial intelligence and recombine a graphic component in real time, thereby providing unlimited expressions.
  • the present invention may develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
  • a character emoticon expression is changed according to an input word or symbol (number) element in real time so that the user can check and express his/her emotion.
  • a technical restriction such as delay which is generated during a calculating process is reduced by a user friendly UX design and a communication process may be more intimately transmitted to the user.
  • precision of the artificial intelligence may be consistently increased by the machine learning.
  • the present invention can be implemented as a computer-readable code in a computer-readable recording medium.
  • the computer readable recording medium includes all types of recording devices in which data readable by a computer system is stored. Examples of the computer readable recording medium are a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storing device, and the medium may also be implemented in the form of a carrier wave (for example, transmission through the Internet).
  • the computer readable recording medium may be distributed over computer systems connected through a network so that a computer readable code is stored and executed in a distributed manner. Further, a functional program, a code, and a code segment which implement the present disclosure may be easily deduced by a programmer in the art.
  • the configuration and method of the embodiments described above are not limitedly applied; rather, all or a part of each embodiment may be selectively combined so that various modifications may be made.
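For illustration only, the following minimal Python sketch shows the kind of descending-order arrangement described for FIGS. 11 and 12; the candidate list, identifiers, and function name are hypothetical and are not part of the disclosed method, with only the example scores taken from the description above.

    # Minimal sketch (assumed data structure): each candidate emoticon is paired
    # with a relevance score, using the example scores of FIGS. 11 and 12.
    candidates = [
        ("emoticon_1", 0.64), ("emoticon_2", 0.63), ("emoticon_3", 0.62),
        ("emoticon_4", 0.71), ("emoticon_5", 0.92), ("emoticon_6", 0.59),
    ]

    def arrange_by_relevance(pairs):
        # Sort highest relevance first, as in steps S220 / S350.
        return sorted(pairs, key=lambda pair: pair[1], reverse=True)

    for emoticon_id, score in arrange_by_relevance(candidates):
        print(emoticon_id, score)   # 0.92, 0.71, 0.64, 0.63, 0.62, 0.59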

Abstract

The present invention relates to a machine learning based artificial intelligence emoticon service providing method. According to an exemplary embodiment of the present invention, a machine learning based artificial intelligence emoticon providing method includes a first step of inputting at least one first content through a first terminal; a second step of transmitting the first content to a server, by means of the first terminal; a third step of classifying a text included in the first content by a predetermined unit to generate a plurality of second texts, by means of the server; a fourth step of filtering at least one third text which satisfies a predetermined condition among the plurality of second texts, by means of the server; a fifth step of determining at least one first emoticon which matches the third text among a plurality of previously stored emoticons, by means of the server; and a sixth step of transmitting the first emoticon to a second terminal, by means of the server.

Description

MACHINE LEARNING BASED ARTIFICIAL INTELLIGENCE EMOTICON SERVICE PROVIDING METHOD
The present invention relates to a machine learning based artificial intelligence emoticon service providing method.
A terminal such as a personal computer, a notebook computer, or a mobile phone may be configured to perform various functions. Examples of the various functions include a data and voice communication function, a function of taking a picture or a moving image by means of a camera, a voice storing function, a function of reproducing a music file by means of a speaker system, and a function of displaying an image or a video. Some terminals include an additional function of playing a game and other terminals may be implemented as a multimedia device. Moreover, recently, a terminal may receive a broadcast or multicast signal to show a video or a television program.
Generally, the terminal may be classified into a mobile terminal (or a portable terminal) and a stationary terminal depending on whether to be movable. Further, the mobile terminal may be classified into a handheld terminal and a vehicle mount terminal depending on whether a user directly carries the terminal.
As the function of the terminal is diversified, for example, the terminal is implemented as a multimedia player having multiple functions such as a function of taking a photograph or a moving image, a function of reproducing music or moving image file, a function of playing a game, or a function of receiving a broadcast.
In order to support and increase the above-described functions of the terminal, it is considered to improve a structural part and/or a software part of the terminal.
In the meantime, in a mobile environment using a terminal, a communication method of users is gradually moving toward visual communication and an agent for visualizing and designing a message to be transmitted by a user is necessary.
Currently, there are attempts to communicate using various icons and photos in order to express an emotion of the user of the terminal, but the process is complicated so that it is difficult to provide delicate and rich expression.
Therefore, there is a need for a method for providing a pleasant experience of the user by adding artificial intelligence and design technology and converging various characters.
The present invention has been made in an effort to provide a machine learning based artificial intelligence emoticon service providing method to a user.
Specifically, an object of the present invention is to provide a system and an application which analyze an emotion element and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time to a user.
Further, an object of the present invention is to provide artificial intelligence technology convergence which recognizes context data (elements such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool to a user.
Furthermore, an object of the present invention is to provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text to a user.
Further, an object of the present invention is to build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
An object of the present invention is to suggest more accurate and convenient communication experience by combining a messaging communication which accounts for a majority of mobile communications, an artificial intelligence, and design technology, and to innovate a usage experience of an existing emoticon, which just plays an auxiliary role of a text due to inconvenient usage and limited expressions, to analyze conversation by artificial intelligence and recombine a graphic component in real time, thereby providing unlimited expressions.
Further, an object of the present invention is to develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
Other technical objects to be achieved in the present disclosure are not limited to the aforementioned technical objects, and other not-mentioned technical objects will be obviously understood by those skilled in the art from the description below.
According to an exemplary embodiment of the present invention, a machine learning based artificial intelligent emoticon providing method may include a first step of inputting at least one first content through a first terminal; a second step of transmitting the first content to a server, by means of the first terminal; a third step of classifying a text included in the first content by a predetermined unit to generate a plurality of second texts, by means of the server; a fourth step of filtering at least one third text which satisfies a predetermined condition, among the plurality of second texts, by means of the server; a fifth step of determining at least one first emoticon which matches the third text among a plurality of previously stored emoticons, by means of the server; and a sixth step of transmitting the first emoticon to a second terminal, by means of the server.
Further, the first content may include text information, image information, moving image information, and voice information.
Further, when the first content is image information or moving image information, in the third step, the server may extract a text included in the image information or the moving image information and classify the extracted text by a predetermined unit to generate the plurality of second texts, and when the first content is voice information, in the third step, the server may convert the voice information into text information and classify the converted text information by the predetermined unit to generate the plurality of second texts.
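For illustration only, a minimal sketch of how the third step might branch on the content type is given below; extract_text_from_media and speech_to_text are hypothetical placeholder helpers (for example, wrappers around an OCR or speech-recognition engine) and are not defined by this disclosure.

    # Hedged sketch: normalize the first content into raw text before the
    # morpheme-unit classification of the third step. The two helper functions
    # are hypothetical placeholders, not part of the disclosed method.
    def extract_text_from_media(media_bytes):
        # hypothetical OCR wrapper for image or moving image information
        raise NotImplementedError

    def speech_to_text(audio_bytes):
        # hypothetical speech-to-text wrapper for voice information
        raise NotImplementedError

    def normalize_content(content):
        # Return the text that will be split into the plurality of second texts.
        kind = content["type"]
        if kind == "text":
            return content["data"]
        if kind in ("image", "video"):
            return extract_text_from_media(content["data"])
        if kind == "voice":
            return speech_to_text(content["data"])
        raise ValueError("unsupported content type: %s" % kind)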
Further, the predetermined unit in the third step may be a morpheme unit and in the third step, at least a part of the plurality of second texts may be converted into a basic verb.
Further, the predetermined condition in the fourth step may be whether the text is a text having meanings.
Further, the third texts may be plural and the first emoticons which match the plurality of third texts may be plural.
Further, the fifth step may include a step 5-1 of classifying the plurality of third texts by at least one category among a plurality of predetermined categories, by means of the server; a step 5-2 of counting the number of classified third texts for every category, by means of the server; a step 5-3 of assigning a result value obtained by counting the third texts for every category to the third text which belongs to each category, by means of the server; a step 5-4 of determining a plurality of first emoticons which matches the plurality of third texts among a plurality of previously stored emoticons, by means of the server; and a step 5-5 of determining an arrangement order of the plurality of first emoticons according to the result values of the plurality of third texts, by means of the server.
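For illustration only, the following sketch shows one possible reading of steps 5-1 through 5-5, assuming a hypothetical keyword-to-category table and an emoticon database keyed by category; using the per-category count as the result value is an assumption consistent with the wording above, not a formula given in the disclosure.

    from collections import Counter

    # Hypothetical lookup tables, for illustration only.
    CATEGORY_OF = {"wrong": "situation", "late": "situation", "how": "question"}
    EMOTICONS_BY_CATEGORY = {
        "situation": ["emoticon_oops", "emoticon_hurry"],
        "question": ["emoticon_thinking"],
    }

    def match_and_order(third_texts):
        # Step 5-1: classify each filtered third text by category.
        categories = {t: CATEGORY_OF[t] for t in third_texts if t in CATEGORY_OF}
        # Step 5-2: count the classified texts for every category.
        counts = Counter(categories.values())
        # Step 5-3: assign each text the count of its category as its result value.
        result_values = {t: counts[c] for t, c in categories.items()}
        # Step 5-4: determine the first emoticons that match the texts' categories.
        matches = [(t, e) for t, c in categories.items()
                   for e in EMOTICONS_BY_CATEGORY[c]]
        # Step 5-5: order the matched emoticons by the result value of their text.
        matches.sort(key=lambda pair: result_values[pair[0]], reverse=True)
        ordered = []
        for _, emoticon in matches:
            if emoticon not in ordered:
                ordered.append(emoticon)
        return ordered

    print(match_and_order(["wrong", "late", "how"]))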
Further, the method may further include: between the fifth step and the sixth step, a step 5-6 of transmitting the plurality of first emoticons and the arrangement order to the first terminal, by means of the server; a step 5-7 of displaying the plurality of first emoticons according to the arrangement order, by means of the first terminal; a step 5-8 of selecting a second emoticon among the plurality of first emoticons, by means of a user of the first terminal; and a step 5-9 of transmitting information on the second emoticon to the server, by means of the first terminal, and in the sixth step, the server may transmit the second emoticon to the second terminal.
Further, before the sixth step, when at least one second content is additionally input by means of the first terminal, the first to fifth steps may be additionally performed on the second content.
Further, the first terminals may be plural, data related to the first to sixth steps between the plurality of first terminals and the server may be stored in the server, and the server may accumulate and use the stored data to perform machine learning.
The present invention may provide a machine learning based artificial intelligence emoticon service providing method to a user.
Specifically, the present invention may provide, to a user, a system and an application, which analyze an emotion component and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time.
Further, the present invention may provide artificial intelligence technology convergence which recognizes context data (components such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool, to a user.
Furthermore, the present invention may provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text, to a user.
Further, the present invention may build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
The present invention may suggest more accurate and convenient communication experience by combining a messaging communication which accounts for a majority of mobile communications and an artificial intelligence and design technology and innovate a usage experience of an emoticon which just plays an auxiliary role of a text to analyze conversation by artificial intelligence and recombine a graphic component in real time, thereby providing unlimited expressions.
Further, the present invention may develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
Further, according to the present invention, differently from the usage method of the related art which repeatedly uses purchased emoticon contents, various variables such as keywords, emotions, usage frequency, preference, climate, date, time, places, and issues are analyzed and combined so that new design may be used at every time.
Further, according to the present invention, a character emoticon expression is changed according to an input word or symbol (number) element in real time so that the user can check and express his/her emotion.
Further, the present invention may reduce a technical restriction such as delay which is generated during a calculating process by a user friendly UX design and more intimately transmit a communication process to the user.
Furthermore, the present invention may consistently increase precision of the artificial intelligence by the machine learning.
The effects to be achieved by the present disclosure are not limited to aforementioned effects and other effects, which are not mentioned above, will be apparently understood by those skilled in the art from the following description.
The accompanying drawings in the specification illustrate an exemplary embodiment of the present disclosure. The technical spirit of the present disclosure will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings. Therefore, the present disclosure will not be interpreted to be limited to the drawings:
FIG. 1 illustrates a block diagram of a machine learning based artificial intelligence emoticon service providing system suggested by the present invention.
FIG. 2 illustrates a block diagram of a terminal or a server which is applied to the present invention.
FIG. 3 illustrates a flowchart for explaining a machine learning based artificial intelligence emoticon service providing method suggested by the present invention.
FIG. 4 illustrates a specific example of steps of the machine learning based artificial intelligence emoticon service providing method explained in FIG. 3.
FIG. 5 illustrates a specific example in which an emoticon is expressed in accordance with a change in an emotion of a user or a change in an intensity of the emotion, in regard to the present invention.
FIG. 6 is a flowchart for explaining a method that arranges emoticons in the order of relevance to be selected by a user in regard to another exemplary embodiment of the present invention.
FIG. 7 is a flowchart for explaining a method that, when an additional message is input from a user before transmitting an emoticon, reflects the additional message in real time to allow the user to select a relevant emoticon, in regard to another exemplary embodiment of the present invention.
FIG. 8 illustrates a specific example which analyzes a morpheme through contents input by a user in FIG. 6 or FIG. 7.
FIG. 9 illustrates a specific example of the present invention which extracts a keyword based on the morpheme analyzed in FIG. 8.
FIG. 10 illustrates an example of a specific operation of matching a relevant emoticon in a database based on the keyword extracted in FIG. 9.
FIG. 11 illustrates a specific example which arranges emoticons matched in FIG. 10 in the order of relevance and displays the emoticons to the user.
FIG. 12 illustrates a specific example which allows a user to select one of a plurality of emoticons displayed in FIG. 11 to transmit the selected emoticon.
FIG. 13 illustrates an example which, when an additional message is input from the user before transmitting an emoticon in FIG. 12, reflects the additional message in real time.
FIGS. 14 and 15 illustrate a specific example which provides a machine learning based artificial intelligence emoticon service in the case of English, in regard to the present invention.
The emoticon (Emoji) market has spread around the millennial generation and emoticons are now widely used across generations. Six billion or more emoticons were sent per day around the world as of 2015 (Oxford University, 2015).
Emoticons have evolved to be simple to use and to provide various emotional expressions, and have evolved from an auxiliary means which provides pleasure in text communication into a new communication means which supplements the limitations of text.
In the global emoticon (Emoji) market, Facebook provides free emoticons through a separate message application and WeChat of China sells paid emoticons.
Further, Snapchat recently took over Bitstrips, which makes personal emoticons, for $100 million (about 116.7 billion won) (in 2016).
In the domestic market, the emoticon (Emoji) market is estimated to be 100 billion won, and when a derivative product market such as character figures is added, it is estimated to be 200 billion won. The domestic market shows rapid growth of 30 to 40% annually in terms of usage and sales (in 2015).
The average number of emoticons which are used in KakaoTalk is 200 million per day and the monthly average of visitors to an emoticon store is 27 million. Seven out of ten users of KakaoTalk use the emoticon store more than once a month (in 2015).
"Miitomo" which is launched by Nintendo in 2016 is a game which creates and utilizes avatars resembling a user and has gained a good response with one million downloads in three days of its launch.
Further, keyboard applications such as Fleksy or SharingGIF, which do not include a character but classify GIF files which are popular on the Internet by tags to search for appropriate results, are also frequently used by many users.
As a result, in a mobile environment using a terminal, a communication method of users is gradually moving toward visual communication and an agent for visualizing and designing a message to be transmitted by a user is necessary.
Currently, there are attempts to communicate using various icons and photos in order to express an emotion of the user of the terminal, but the process is complicated. Therefore, it is difficult to provide delicate and rich expression.
Therefore, there is a need for a method for providing a pleasant experience of the user by adding artificial intelligence and design technology and converging various characters.
The present invention is to provide a machine learning based artificial intelligence emoticon service providing method to a user.
Specifically, an object of the present invention is to provide a system and an application which analyze an emotion element and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time.
Further, an object of the present invention is to provide artificial intelligence technology convergence which recognizes context data (components such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool to a user.
Furthermore, an object of the present invention is to provide an application and an API which are not limited to a specific service or application but universally used in consideration of a usage environment to input a text to a user.
Further, an object of the present invention is to build a word vector based artificial intelligence machine learning system which learns a conversation habit pattern of a user and is more accurate as the system is used and provide the system to the user.
An object of the present invention is to suggest more accurate and convenient communication experience by combining a messaging communication which accounts for a majority of mobile communications and an artificial intelligence and design technology and innovate a usage experience of an emoticon which just plays an auxiliary role of a text to analyze conversation by artificial intelligence and recombine a graphic component in real time, thereby providing unlimited expressions.
Further, an object of the present invention is to develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
FIG. 1 illustrates a block diagram of a machine learning based artificial intelligence emoticon service providing system suggested by the present invention.
Referring to FIG. 1, a machine learning based artificial intelligence emoticon service providing system 1 suggested by the present invention may be divided into a content input unit 2, an emoticon generating unit 3, and an emoticon receiving unit 4.
The content input unit 2 provides a function of receiving contents such as a text, a voice, an image, or a moving image, from a user.
Next, the emoticon generating unit 3 provides a function of catching a context based on contents input through the content input unit 2 to automatically convert the context into the most appropriate emoticon and transmit the emoticon.
Further, the emoticon receiving unit 4 provides a function of receiving emoticon data from the emoticon generating unit 3 to display the emoticon data to the user.
The content input unit 2, the emoticon generating unit 3, and the emoticon receiving unit 4 may be terminals or servers.
Further, the content input unit 2, the emoticon generating unit 3, and the emoticon receiving unit 4 may exchange data therebetween using short range communication or remote communication.
A short range communication technology applied herein may include a Bluetooth, a radio frequency identification (RFID), an infrared data association (IrDA), an ultra wideband (UWB), a ZigBee, and Wi-Fi (wireless fidelity).
Further, an applied remote communication may include code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single carrier frequency division multiple access (SC-FDMA) technologies.
Prior to description of the machine learning based artificial intelligence emoticon service providing system 1 suggested by the present invention, the server or the terminal which may serve as the content input unit 2, the emoticon generating unit 3, or the emoticon receiving unit 4 will be described in detail.
FIG. 2 illustrates a block diagram of a terminal or a server which is applied to the present invention.
Referring to FIG. 2, the terminal or the server 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supplying unit 190.
However, the components illustrated in FIG. 2 are not essential, so that a terminal or server 100 apparatus having more or fewer components may be implemented.
Hereinafter, the components will be described in sequence.
The wireless communication unit 110 may include one or more modules which enable wireless communication between a terminal device and a wireless communication system or between a terminal device and a network where the terminal device is located. For example, the wireless communication unit 110 may include a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short range communication module 114, and a position information module 115.
The broadcast receiving module 111 receives a broadcasting signal and/or broadcast related information from an external broadcasting management server through a broadcasting channel.
The broadcasting channel may include a satellite channel and a ground wave channel. The broadcasting management server may refer to a server which generates and transmits the broadcasting signal and/or the broadcast related information or a server which receives a previously generated broadcasting signal and/or broadcast related information to transmit the broadcasting signal and/or broadcast related information to the terminal. The broadcasting signal may include not only a TV broadcasting signal, a radio broadcasting signal, and a data broadcasting signal, but also a broadcasting signal in which the data broadcasting signal is combined with the TV broadcasting signal or the radio broadcasting signal.
The broadcast related information may refer to information on a broadcasting channel, a broadcasting program, or a broadcasting service provider. The broadcast related information may also be provided through a mobile communication network. In this case, the broadcast related information may be received by the mobile communication module 112.
There are various types of broadcast related information. For example, the broadcast related information may be an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H).
The broadcast receiving module 111 may receive a digital broadcasting signal using a digital broadcasting system such as a digital multimedia broadcasting-terrestrial (DMB-T) system, a digital multimedia broadcasting-satellite (DMB-S) system, a media forward link only (MediaFLO) system, a digital video broadcast-handheld (DVB-H) system, or an integrated services digital broadcast-terrestrial (ISDB-T) system. The broadcast receiving module 111 may also be configured to be suitable not only for the above-described digital broadcasting system, but also for other broadcasting system.
The broadcasting signal and/or broadcast related information which are received by the broadcast receiving module 111 may be stored in the memory 160.
The mobile communication module 112 transmits/receives a wireless signal to/from at least one of a base station, an external terminal, and a server in a mobile communication network. The wireless signal may include a voice call signal, a video communication call signal, or various types of data in accordance with transmission/reception of a text/multimedia message.
The wireless internet module 113 refers to a module for wireless internet connection and may be installed in the terminal device or installed in the outside of the terminal device. As a wireless internet technology, wireless LAN (WLAN) (Wi-Fi), wireless broadband (Wibro), world interoperability for microwave access (Wimax), or high speed downlink packet access (HSDPA) may be used.
The short range communication module 114 refers to a module for short range communication. As a short range communication technology, a Bluetooth, a radio frequency identification (RFID), an infrared data association (IrDA), an ultra wideband (UWB), a ZigBee, and the like may be used.
The position information module 115 is a module for obtaining a position of the terminal device and a representative example thereof is a global position system (GPS) module.
Referring to FIG. 2, the audio/video (A/V) input unit 120 is a device which inputs an audio signal or a video signal and may include a camera 121 and a microphone 122. The camera 121 processes an image frame such as a still image or a moving image which is obtained by an image sensor in a video communication mode or a photographing mode. The processed image frame may be displayed on the display unit 151.
The image frame which is processed in the camera 121 may be stored in the memory 160 or transmitted to the outside through the wireless communication unit 110. Two or more cameras 121 may be provided depending on a usage environment.
The microphone 122 receives an external sound signal by a microphone in a call mode, a recording mode, or a voice recognizing mode to process the sound signal as electrical voice data. In the case of the call mode, the processed voice data is converted to be transmitted to a mobile communication base station through the mobile communication module 112 to be output. In the microphone 122, various noise removal algorithms which remove noises generated while receiving an external sound signal may be implemented.
The user input unit 130 generates input data which allows a user to control an operation of the terminal. The user input unit 130 may be configured by a keypad, a dome switch, a touch pad (static pressure/ static electricity), a jog wheel, a jog switch, and the like.
The sensing unit 140 detects a current status of the terminal device such as an open/closed state of the terminal device, a position of the terminal device, whether a user is in contact with the terminal device, an orientation of the terminal, and acceleration/deceleration of the terminal to generate a sensing signal for controlling an operation of the terminal device. For example, when the terminal device is a slide phone type terminal, the sensing unit 140 may sense whether the slide phone is open or closed. Further, the sensing unit 140 may sense whether the power supplying unit 190 supplies power or whether the interface unit 170 is coupled to an external device. In the meantime, the sensing unit 140 may include a proximity sensor 141.
The output unit 150 generates outputs related to sight, hearing, and touch, and includes the display unit 151, a sound output module 152, an alarm unit 153, a haptic module 154, and a projector module 155.
The display unit 151 displays (outputs) information which is processed in the terminal device. For example, when the terminal device is in a phone call mode, the display unit displays an UI (user interface) or a GUI (graphic user interface) related to the phone call. When the terminal device is in a video call mode or a photographing mode, the display unit displays a photographed and/or received image, an UI or a GUI.
The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light emitting diode (OLED), a flexible display, and a 3D display.
Some of the above-mentioned displays may be configured as a transparent type or a light transmissive type display so as to see the outside therethrough. This may be called a transparent display and a representative example of the transparent display may be a transparent OLED (TOLED). A rear side structure of the display unit 151 may also be configured as a light transmissive structure. According to this structure, the user may see an object located at a rear side of a terminal body through an area occupied by the display unit 151 of the terminal body.
Two or more display units 151 may be provided in accordance with an implementation type of the terminal device. For example, in the terminal device, a plurality of display units may be disposed to be spaced apart from each other or to be integrated on one surface or may be disposed on different surfaces, respectively.
When the display unit 151 and a sensor (hereinafter, referred to as a "touch sensor") which senses a touch operation form a layered structure (hereinafter, referred to as a "touch screen"), the display unit 151 may be used as an input device in addition to the output device. For example, the touch sensor may be formed by a touch film, a touch sheet, or a touch pad.
The touch sensor may be configured to convert a change in a pressure which is applied to a specific part of the display unit 151 or an electrostatic capacity generated in a specific part of the display unit 151 into an electric input signal. The touch sensor may be configured to detect not only a touched position and a touched area but also a pressure at the time of touch.
When there is a touch input to the touch sensor, corresponding signal(s) are sent to a touch controller. The touch controller processes the signal(s) and then transmits corresponding data to the controller 180. By doing this, the controller 180 may confirm which area of the display unit 151 is touched.
The proximity sensor 141 may be disposed in an internal area of the terminal device which is enclosed by the touch screen or in the vicinity of the touch screen. The proximity sensor refers to a sensor which detects whether there is an object approaching a predetermined detecting surface or an object present in the vicinity thereof using force or an electromagnetic field or infrared ray, without using mechanical contact. The proximity sensor has a longer lifespan and higher utilization than those of a contact type sensor.
Examples of the proximity sensor include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror reflection type photoelectric sensor, a high frequency oscillation type proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is an electrostatic sensor, the touch screen is configured to detect proximity of the pointer by change in an electric field according to the proximity of the pointer. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.
Hereinafter, for the convenience of description, a behavior of a pointer which approaches the touch screen without being in contact with the touch screen to recognize that the pointer is located on the touch screen is referred to as "proximity touch" and a behavior of the pointer which is actually in contact with the touch screen is referred to as "contact touch". A position where proximity touch on the touch screen is achieved by a pointer refers to a position where the pointer vertically corresponds to the touch screen when the pointer is proximately touched.
The proximity sensor senses proximity touch and a proximity touch pattern (for example, a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, and a proximity touch movement status). Information corresponding to the sensed proximity touch operation and proximity touch pattern may be output on the touch screen.
Further, the color sensing sensor 142 is a sensor which provides a function of identifying color included in an external object.
The color sensing sensor 142 is a color identifying sensor configured by combining a photodiode and a color filter. Examples of the color sensing sensor are a monochromic color sensor which measures a quantity of light having a specific color and an integrated type color sensor which identifies a half tone.
The monochromic color sensor is formed by bonding a filter having a specific color transmitting characteristic on a front surface of an amorphous Si photodiode, and incident light in an arbitrary wavelength range passes through the color filter to reach the photodiode.
The integrated type color sensor is formed by bonding red (R), green (G), and blue (B) filters corresponding to three primary colors of light onto front surfaces of three photodiodes integrated on one substrate.
Therefore, since only R, G, and B components of the incident light reach each photo diode, original color may be identified in accordance with principle of three primary colors of light.
The color sensing sensor 142 according to the present invention may provide a function of sensing at least one color included in an object photographed by the camera 121.
Further, the display unit 151 may provide a light output function which diverges light to the outside.
Currently, the light output function in the terminal or server 100 is provided as a flashlight function.
The above-described light output function may be provided by a structure using an LED.
However, it is obvious that the configuration of the present invention is not limited thereto, but all technical contents which diverge light to the outside may be combined.
The sound output module 152 may output audio data which is received from the wireless communication unit 110 in the call signal receiving mode, the phone call mode, the recording mode, the voice recognizing mode, or the broadcast receiving mode or stored in the memory 160. The sound output module 152 outputs a sound signal related to a function (for example, a call signal reception sound or a message reception sound) performed in the terminal. The sound output module 152 may include a receiver, a speaker, a buzzer, and the like.
The alarm unit 153 outputs a signal for notifying that an event of the terminal device is generated. Examples of the event generated in the terminal device include call signal reception, message reception, key signal input, and touch input. The alarm unit 153 may output another type of signal other than the video signal or the audio signal, for example, a signal for notifying that the event is generated, by vibration. The video signal or the audio signal may be output through the display unit 151 or the sound output module 152, so that the display unit 151 or the sound output module 152 may also be classified as a part of the alarm unit 153.
The haptic module 154 generates various tactile effects that the user may feel. A representative example of the tactile effect generated by the haptic module is vibration. An intensity and a pattern of the vibration generated by the haptic module 154 may be controlled. For example, different vibrations may be combined to be output or sequentially output.
In addition to the vibration, the haptic module 154 may generate various tactile effects such as pin arrangement perpendicular to a contacted skin surface, an injecting force or a suction force of air through an injection port or a suction port, brush of a skin surface, contact with an electrode, effect by stimulation of electromagnetic force, and effect by reproducing a thermal feedback using a heat absorbing or heat generating element.
The haptic module 154 may be implemented not only to transmit a tactile effect through direct contact, but also to allow the user to feel the tactile effect through a muscular sense such as a finger or an arm. Two or more haptic modules 154 may be provided according to a configuring aspect of the portable terminal.
The projector module 155 is a component which performs an image projection function using the terminal device and displays, on an external screen or a wall, an image which is the same as or at least partially different from the image displayed on the display unit 151, in accordance with the control signal of the controller.
Specifically, the projector module 155 may include a light source (not illustrated) which generates light (as an example, laser light) to output an image to the outside, an image generating unit (not illustrated) which generates an image to be output to the outside using light generated by the light source, and a lens (not illustrated) which enlarges and outputs the image at a predetermined focal distance to the outside. Further, the projector module 155 may include a device (not illustrated) which mechanically moves the lens or the entire modules to adjust an image projecting direction.
The projector module 155 may be classified into a cathode ray tube (CRT) module, a liquid crystal display (LCD) module, and a digital light processing (DLP) module depending on a device type of the display unit. Specifically, in the DLP module, light generated in the light source is reflected by a digital micromirror device (DMD) chip to enlarge and project the generated image, so that it is advantageous to reduce a size of the projector module 155.
Desirably, the projector module 155 may be provided at a side, a front side, or a rear side of the terminal device in a length direction. However, it is needless to say that the projector module 155 may be provided at any position of the terminal device as needed.
The memory unit 160 may store a program for processing and controlling the controller 180 and perform a function for temporarily storing data to be input/output (for example, a contact list, a message, audio, a still image, or a moving image). The memory unit 160 may also store usage frequency (for example, usage frequency of the phone book, the message, and the multimedia) for the data. Further, the memory unit 160 also stores data related to various patterns of vibration and sound which are output at the time of touch input on the touch screen.
The memory 160 may include at least one storing medium of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, an SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk. However, the terminal device may operate in association with a web storage which performs a storage function of the memory 160 on the Internet.
The interface unit 170 serves as a passage to all external equipment which is connected to the terminal device. The interface unit 170 receives data or power from the external equipment to transmit the data or power to each component in the terminal device or transmits the data in the terminal device to the external equipment. For example, the interface unit may include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port connecting devices with identification modules, an audio input/output (I/O) port, a video input/output (I/O) port, an earphone port, and the like.
The identification module is a chip which stores various information for authenticating a permission of the terminal device and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. A device in which the identification module is provided (hereinafter, identification device) may be manufactured as a smart card. Therefore, the identification device may be connected to the terminal through a port.
When the mobile terminal is connected to an external cradle, the interface unit may serve as a passage through which power from the cradle is supplied to the mobile terminal or through which various command signals input from the cradle by the user are transmitted to the mobile terminal. Various command signals input from the cradle or the corresponding power may also operate as a signal for recognizing that the mobile terminal is accurately installed on the cradle.
The controller 180 generally controls an overall operation of the terminal device.
For example, the controller 180 performs related control and process for voice call, data communication, and video call. The control unit 180 may include a multimedia module 181 for reproducing a multimedia. The multimedia module 181 may be implemented in the control unit 180 or separately implemented from the control unit 180.
The control unit 180 may perform a pattern recognition process for recognizing a handwriting input or a drawing input performed on the touch screen as characters and images, respectively.
The power supplying unit 190 is applied with external power and internal power by the control of the controller 180 to supply power required for operations of the components.
Various exemplary embodiment described herein may be implemented in a recording medium which is readable by a computer or other similar device using software, hardware, or a combination thereof.
According to a hardware implementation, the exemplary embodiment described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electric units for performing other functions. In some cases, the exemplary embodiments described in this specification may be implemented by the controller 180.
According to a software implementation, exemplary embodiments such as procedures and functions described in this specification may be implemented by separate software modules. The software modules may perform one or more functions and operations described in this specification. A software code may be implemented by a software application which is written by an appropriate program language. The software code may be stored in the memory 160 and executed by the controller 180.
A specific function of the present invention will be described based on the terminal or server 100 elements of the content input unit 2, the emoticon generating unit 3, and the emoticon receiving unit 4 which configure the above-described emoticon service providing system 1.
Further, the present invention suggests a structure which converts into Native APIs corresponding to iOS and Android, respectively, after compiling, and efficiently responds to various devices.
Further, the present invention suggests a system which stores chatting data of the user as analyzed key tokens in a DB and allows the user data accumulated in the DB to contribute again to enhancing the precision of the word vector based artificial intelligence, so that the precision of the artificial intelligence increases as the usage increases (word vector based machine learning).
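For illustration only, the sketch below shows the word-vector idea with toy vectors: the vector of an input key token is compared by cosine similarity with vectors attached to emoticon tags, and the token is appended to a list standing in for the key-token DB; the vectors, names, and logging scheme are assumptions, not the disclosed training procedure.

    import math

    # Toy word vectors, assumed for illustration; a real system would learn them
    # from the key tokens accumulated in the DB as described above.
    WORD_VECTORS = {
        "chicken": [0.8, 0.2, 0.1],
        "excited": [0.1, 0.9, 0.2],
    }
    EMOTICON_TAG_VECTORS = {
        "emoticon_hungry":  [0.85, 0.15, 0.05],
        "emoticon_excited": [0.05, 0.95, 0.10],
    }
    KEY_TOKEN_DB = []   # stands in for the DB that accumulates analyzed key tokens

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def rank_emoticon_tags(token):
        KEY_TOKEN_DB.append(token)   # accumulate usage data for later re-training
        vec = WORD_VECTORS[token]
        return sorted(EMOTICON_TAG_VECTORS,
                      key=lambda name: cosine(vec, EMOTICON_TAG_VECTORS[name]),
                      reverse=True)

    print(rank_emoticon_tags("chicken"))   # most similar emoticon tag first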
Further, according to the present invention, differently from the usage method of the related art which repeatedly uses purchased emoticon contents, various variables such as keywords, emotions, usage frequency, preference, climate, date, time, places, and issues are analyzed and combined so that new design may be used at every time.
Further, according to the present invention, a character emoticon expression is changed according to an input word or symbol (number) element in real time so that the user can check and express his/her emotion.
Further, according to the present invention, a technical restriction such as delay which is generated during a calculating process is reduced by a user friendly UX design and a communication process may be more intimately transmitted to the user.
Furthermore, according to the present invention, precision of the artificial intelligence may be consistently increased by the machine learning.
FIG. 3 illustrates a flowchart for explaining a machine learning based artificial intelligence emoticon service providing method suggested by the present invention.
Referring to FIG. 3, a step S110 of receiving contents such as a text, a voice, an image, or a moving image from a user, by means of a terminal or server 100 of the content input unit 2 according to the present invention is performed first.
Next, the terminal or server 100 of the emoticon generating unit 3 analyzes a morpheme based on the input contents (S120).
In step S120, a controller 180 of the terminal or server 100 performs an operation of converting the contents into a key token through an operation which analyzes the morpheme and converts an expressed word into a basic verb.
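For illustration only, a minimal sketch of the key-token conversion in step S120 follows; the whitespace/punctuation split and the small base-form dictionary are stand-ins for a real morpheme analyzer, so both the tokenization rule and the dictionary entries are assumptions.

    import re

    # Hypothetical base-form dictionary standing in for real morphological analysis.
    BASE_FORM = {"got": "get", "waited": "wait", "going": "go"}

    def to_key_tokens(text):
        # Split the input contents into morpheme-like tokens and keep punctuation
        # marks, then map inflected verbs to their basic verb forms.
        tokens = re.findall(r"[A-Za-z]+|[?!.]", text.lower())
        return [BASE_FORM.get(token, token) for token in tokens]

    print(to_key_tokens("I got on the wrong bus today, how do I do?"))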
Further, next to step S120, the terminal or server 100 of the emoticon generating unit 3 performs an operation of analyzing a context of contents input in step S110 through verbs, nouns, adjectives, and punctuation marks (S130).
Through step S130, the terminal or server 100 of the emoticon generating unit 3 performs an operation of matching an emoticon and key token information (S140).
In step S140, the control unit 180 of the terminal or server 100 may apply individual emoticon tags or use a plurality of emoticons through an emoticon category.
In step S140, when the emoticon matching operation is completed, the terminal or server 100 of the emoticon generating unit 3 transmits the determined emoticon to the terminal or server 100 of the emoticon receiving unit 4.
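For illustration only, the sketch below shows one way the matching of step S140 could pair key tokens with emoticon tags; the tag table and the rule that the emoticon sharing the most tags is selected are assumptions, since the disclosure allows either individual emoticon tags or an emoticon category to be used.

    # Hedged sketch of step S140: match key tokens against per-emoticon tag sets;
    # the tag table and the "most shared tags" rule are assumptions.
    EMOTICON_TAGS = {
        "emoticon_hungry":  {"chicken", "eat", "hungry"},
        "emoticon_full":    {"eat", "full"},
        "emoticon_excited": {"go", "excited"},
    }

    def match_emoticon(key_tokens):
        tokens = set(key_tokens)
        scores = {name: len(tokens & tags) for name, tags in EMOTICON_TAGS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    print(match_emoticon(["chicken", "eat", "go"]))   # -> emoticon_hungry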
As a result, the present invention provides a smart application which analyzes an emotion and contexts of a text or a voice message to automatically transmit emoticons.
By doing this, a new communication experience which 1) analyzes an emotion of a user which is hard to be expressed by the text method of the related art by an artificial intelligence API and 2) designs an emoticon in real time utilizing its own agent to exchange emotions is provided.
FIG. 4 illustrates a specific example of steps of the machine learning based artificial intelligence emoticon service providing method explained in FIG. 3.
Referring to FIG. 4, as an example of step S110, the user inputs contents of "Let's eat chikenkichikchikeeeenchiken".
Further, referring to FIG. 4, as an example of step S120, the terminal or server 100 of the emoticon generating unit 3 extracts "chicken", "chikeee", "en", "eat" and "let's".
Further, as an example of step S120, an operation of converting the contents into basic verbs is performed, and the expressed word forms are converted into the basic verbs "eat" and "go".
Next, in FIG. 4, as an example of step S140, "chicken" obtained by analyzing the morpheme matches a category "when the user is hungry", "eat" matches categories "when the user is hungry" and "when the user is full", and "go" matches a category "when the user is excited".
Further, as an example of step S140, a status of "76% of joy", "42% of sadness", "50% of anger", "12% of fear" and "22% of surprise" may be extracted through API analysis of a key token extracted in step S120.
Next, based on steps S130 and S140, the terminal or server 100 of the emoticon generating unit 3 transmits the determined emoticon to the terminal or server 100 of the emoticon receiving unit 4.
As a result, a context and nuance of a plurality of situations (for example, 49 situations) and emotions (joy, sadness, anger, fear, and surprise) of a text which is currently being input are analyzed using its own algorithm and an external API and an emotion tone of an emoticon design may be determined.
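For illustration only, and assuming the analysis returns emotion scores like those in the FIG. 4 example, the sketch below picks an emotion tone for the emoticon design; the threshold and the choice of the maximum score are assumptions, since the disclosure does not specify the selection rule.

    # Example emotion scores following the FIG. 4 walkthrough (76% joy, 42% sadness,
    # 50% anger, 12% fear, 22% surprise).
    emotion_scores = {"joy": 0.76, "sadness": 0.42, "anger": 0.50,
                      "fear": 0.12, "surprise": 0.22}

    def pick_emotion_tone(scores, threshold=0.3):
        # Pick the dominant emotion above an (assumed) threshold to tone the design.
        dominant = max(scores, key=scores.get)
        return dominant if scores[dominant] >= threshold else "neutral"

    print(pick_emotion_tone(emotion_scores))   # -> joy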
Further, images which match the extracted keyword are generated and combined in accordance with a predetermined rule to generate an emoticon.
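For illustration only, a minimal sketch of combining graphic components in accordance with a predetermined rule is given below; the component names and the one-layer-per-keyword rule are hypothetical, not the disclosed generation rule.

    # Hypothetical graphic components; a real system would hold many more layers.
    COMPONENTS = {
        "chicken": "chicken_body_layer",
        "joy": "smiling_face_layer",
        "excited": "sparkle_effect_layer",
    }

    def compose_emoticon(keywords, emotion_tone):
        # Pick one graphic layer per matched keyword plus one for the emotion tone;
        # rendering the layers into a single image is omitted here.
        layers = [COMPONENTS[k] for k in keywords if k in COMPONENTS]
        if emotion_tone in COMPONENTS:
            layers.append(COMPONENTS[emotion_tone])
        return layers

    print(compose_emoticon(["chicken"], "joy"))   # -> two layers to be combined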
In the meantime, in the present invention, in step S150, various variables are combined so that emotions may be expressed by an infinite number of emoticons.
The user may experience convenience when the emotion and the situation of the user are converted into the most appropriate emoticon through accurate analysis and pleasure of communication through which emotions are shared.
That is, since the emoticon is generated by reflecting the emotion of the user, the entire context of the conversation, and the used keywords, and the contents are called from the cloud, the emoticon may be provided in accordance with the situation every time.
Further, in the present invention, image processing is applied to a resultant so that even the same emoticon may be differently expressed depending on the intensity of emotion and the user may perform more delicate communication.
FIG. 5 illustrates a specific example in which an emoticon is expressed in accordance with a change in the emotion of a user or a change in the intensity of the emotion in regard to the present invention.
In FIG. 5, even though the chicken-related emoticons described as an example in FIG. 4 are represented in the same way, FIGS. 5A, 5B, and 5C illustrate examples in which the color and size of the emoticon are changed in accordance with a change in the emotion of the user or a change in the intensity of the emotion.
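As a minimal sketch of this idea, the same base emoticon could be rendered differently depending on emotion intensity in the manner of FIGS. 5A to 5C; the particular mapping from intensity to color saturation and scale shown below is an assumption.

```python
# Sketch of FIG. 5's idea: the same chicken emoticon rendered differently
# depending on emotion intensity. The saturation/scale mapping is hypothetical.
from PIL import Image, ImageEnhance

def render_with_intensity(emoticon, intensity):
    """Scale color saturation and size with the emotion intensity (0.0-1.0)."""
    saturated = ImageEnhance.Color(emoticon).enhance(0.5 + intensity)
    w, h = emoticon.size
    scale = 0.8 + 0.4 * intensity
    return saturated.resize((int(w * scale), int(h * scale)))

# Hypothetical usage with a placeholder asset name:
# base = Image.open("chicken_emoticon.png").convert("RGB")
# for level in (0.2, 0.5, 0.9):          # FIG. 5A, 5B, 5C style variations
#     render_with_intensity(base, level).show()
```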
In the meantime, according to another exemplary embodiment of the present invention, in addition to a method of providing a matched emoticon to the user, a method of arranging emoticons in the order of relevance to allow the user to select the emoticon may be provided.
FIG. 6 is a flowchart for explaining a method that arranges emoticons in the order of relevance to be selected by a user in regard to another exemplary embodiment of the present invention.
Referring to FIG. 6, a step S110 of receiving contents such as a text, a voice, an image, or a moving image from a user, by means of a terminal or server 100 of the content input unit 2 according to the present invention is performed first.
Next, the terminal or server 100 of the emoticon generating unit 3 analyzes a morpheme based on the input contents (S120).
Further, next to step S120, the terminal or server 100 of the emoticon generating unit 3 performs an operation of analyzing a context of contents input in step S110 through verbs, nouns, adjectives, and punctuation marks (S130).
Next, the terminal or server 100 of the emoticon generating unit 3 performs an operation of matching the emoticon and the key token information; however, differently from the above-described step S140, the terminal or server 100 of the emoticon generating unit 3 performs a step S210 of calculating the relevance with the emoticons in the database (DB) to match the emoticon.
That is, when "chicken" obtained by analyzing the morphemes described in FIG. 4 as an example matches the category "when the user is hungry", "eat" matches the categories "when the user is hungry" and "when the user is full", and "go" matches the category "when the user is excited", the relevance with each matched category is calculated to match the emoticon.
Further, when a status such as "76% of joy", "42% of sadness", "50% of anger", "12% of fear" and "22% of surprise" is extracted through API analysis described in FIG. 4 as an example, an emoticon having the highest relevance may be matched.
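For illustration, a minimal sketch of such a relevance calculation over the emoticon database is given below; the scoring rule (category hit counts combined with the API emotion percentages) and the emoticon records are illustrative assumptions, not the scoring formula of the specification.

```python
# Sketch of step S210: compute a relevance score between the analyzed message
# and each emoticon stored in the DB. The weighted sum below is a hypothetical
# combination of category matches and emotion agreement.
def relevance(emoticon, matched_categories, emotion_scores):
    category_score = sum(matched_categories.get(c, 0) for c in emoticon["categories"])
    emotion_score = emotion_scores.get(emoticon["emotion"], 0.0)
    return 0.5 * category_score + 0.5 * emotion_score

emoticon_db = [  # hypothetical records with placeholder URLs
    {"url": "https://cdn.example.com/e1.png",
     "categories": ["when the user is hungry"], "emotion": "joy"},
    {"url": "https://cdn.example.com/e2.png",
     "categories": ["when the user is excited"], "emotion": "surprise"},
]

matched = {"when the user is hungry": 2, "when the user is excited": 1}
emotions = {"joy": 0.76, "surprise": 0.22}
ranked = sorted(emoticon_db, key=lambda e: relevance(e, matched, emotions),
                reverse=True)
print(ranked[0]["url"])   # emoticon with the highest relevance
```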
Thereafter, differently from the method of the related art, the terminal or server 100 of the emoticon generating unit 3 arranges the emoticons in the order of relevance to provide the emoticon to the user of the terminal or server 100 of the content input unit 2 (S220).
In this case, the user of the terminal or server 100 of the content input unit 2 selects a specific emoticon among emoticons arranged in the order of relevance (S230), and the selected emoticon is transmitted to the terminal or server 100 of the emoticon receiving unit 4 (S150).
In this case, the user is highly likely to select an emoticon having high relevance, which is arranged toward the front of the arranged emoticons.
In the meantime, when an additional message is input from the user before transmitting the emoticon, an event in which a context of the contents of the user is changed may be generated.
Therefore, according to the present invention, when an additional message is input from the user before transmitting the emoticon in response to such an event, a method is provided which reflects the additional message in real time to allow the user to select a relevant emoticon.
FIG. 7 is a flowchart for explaining a method that, when an additional message is input by a user before transmitting an emoticon, reflects the additional message in real time to allow the user to select a relevant emoticon, in regard to another exemplary embodiment of the present invention.
Steps S110 to S130 and S210 of FIG. 7 correspond to steps S110 to S130 and S210 described in FIG. 6, so that the description thereof will be omitted for the sake of simplicity of the specification.
Differently from the process in FIG. 6, after step S210, the process of FIG. 7 further includes a step S310 of determining whether any one of a text, a voice, an image, and a moving image is additionally input by means of the terminal or server 100 of the content input unit 2.
In this case, when there is no additionally input content in step S310, the same process as in FIG. 6 is performed. However, when there is additionally input content, the terminal or server 100 of the emoticon generating unit 3 performs a step S320 of analyzing morphemes based on the additionally input content and an operation of analyzing the context of the additionally input content through verbs, nouns, adjectives, and punctuation marks (S330).
Further, the terminal or server 100 of the emoticon generating unit 3 matches the emoticon in consideration of the added keyword (S340), reflects the matching result of step S340 and the matching result obtained through step S210 in real time, and arranges the emoticons in the order of relevance to transmit them to the terminal or server 100 of the content input unit 2 (S350).
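One way the matching result for the additional content could be merged with the earlier result and re-ranked is sketched below; the per-emoticon merge rule (keeping the higher of the two scores) and the score values are assumptions made only for illustration.

```python
# Sketch of steps S310-S350: when additional content arrives before the
# emoticon is sent, score it, merge with the previous matching result,
# and re-rank. Taking the higher score per emoticon is an assumed merge rule.
def merge_and_rerank(previous_scores, additional_scores):
    merged = dict(previous_scores)
    for url, score in additional_scores.items():
        merged[url] = max(merged.get(url, 0.0), score)
    return sorted(merged.items(), key=lambda item: item[1], reverse=True)

previous = {"e1.png": 0.64, "e2.png": 0.63}        # result of step S210
additional = {"e2.png": 0.92, "e3.png": 0.71}      # result of step S340
print(merge_and_rerank(previous, additional))
# [('e2.png', 0.92), ('e3.png', 0.71), ('e1.png', 0.64)]
```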
Thereafter, the user of the terminal or server 100 of the content input unit 2 selects a specific emoticon among emoticons arranged in the order of relevance (S230), and the selected emoticon is transmitted to the terminal or server 100 of the emoticon receiving unit 4 (S150).
Individual steps described in FIGS. 6 and 7 will be described in detail with reference to the drawings.
FIG. 8 illustrates a specific example which analyzes a morpheme through contents input by a user in FIG. 6 or 7.
FIG. 8 illustrates a specific example of step S120 of FIG. 6 or 7.
Referring to FIG. 8, an example is illustrated in which the user of the terminal or server 100 of the content input unit 2 inputs "It seems to be late today because I got the wrong bus on the way. How can I do that?"
In this case, the terminal or server 100 of the emoticon generating unit 3 may analyze the morphemes "on", "way", "bus", "wrong", "got", "today", "late", "how", "do", and "?".
Next, FIG. 9 illustrates a specific example of the present invention which extracts a keyword based on the morphemes analyzed in FIG. 8.
FIG. 9 illustrates a specific example of step S130 of FIG. 6 or 7.
Referring to FIG. 9, the terminal or server 100 of the emoticon generating unit 3 selects the morphemes having a strong semantic element, in terms of emotion, situation, and punctuation marks, from the morphemes "on", "way", "bus", "wrong", "got", "today", "late", "how", "do", and "?".
In FIG. 9, "wrong", "late", and "how" may be the selected morphemes.
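The selection of the semantically strong morphemes of this example can be sketched as filtering out the remaining tokens as stop words, as shown below; treating exactly those tokens as stop words is a simplification made only for illustration.

```python
# Sketch of step S130 for the FIG. 8/9 example: keep only morphemes with a
# strong semantic element. The stop-word list is a hypothetical simplification.
STOP_TOKENS = {"on", "way", "bus", "got", "today", "do", "?"}

def select_key_morphemes(morphemes):
    """Drop function-like tokens and keep semantically strong ones."""
    return [m for m in morphemes if m.lower() not in STOP_TOKENS]

tokens = ["on", "way", "bus", "wrong", "got", "today", "late", "how", "do", "?"]
print(select_key_morphemes(tokens))   # ['wrong', 'late', 'how']
```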
Further, FIG. 10 illustrates an example of a specific operation of matching the keyword extracted in FIG. 9 with relevant emoticons in a database.
FIG. 10 illustrates a specific example of step S210 of FIG. 6 or 7.
Referring to FIG. 10, a plurality of emoticons corresponding to "wrong", "late", and "how", which are the morphemes selected in FIG. 9, is represented as being matched to each of these morphemes.
Thereafter, the plurality of emoticons may be arranged in the order of relevance.
FIG. 11 illustrates a specific example which arranges emoticons matched in FIG. 10 in the order of relevance and displays the emoticons to the user.
FIG. 11 illustrates a specific example of step S220 of FIG. 6 or step S350 of FIG. 7.
Referring to FIG. 11, a specific aspect is illustrated in which image addresses (URLs) of the emoticon contents are listed for recommendation in descending order of relevance, expressed as numerical values in the range of 0.0 to 1.0.
The six emoticons may have relevance scores of 0.64, 0.63, 0.62, 0.71, 0.92, and 0.59.
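A sketch of such a recommendation list, pairing emoticon image URLs with relevance scores between 0.0 and 1.0 and sorting them in descending order, is shown below; the scores are the FIG. 11 example values, and the URLs are hypothetical placeholders.

```python
# Sketch of step S220/S350 output: emoticon image URLs paired with relevance
# scores (0.0-1.0), sorted in descending order. Scores follow the FIG. 11
# example; the URLs are hypothetical placeholders.
candidates = [
    {"url": "https://cdn.example.com/emo_1.png", "relevance": 0.64},
    {"url": "https://cdn.example.com/emo_2.png", "relevance": 0.63},
    {"url": "https://cdn.example.com/emo_3.png", "relevance": 0.62},
    {"url": "https://cdn.example.com/emo_4.png", "relevance": 0.71},
    {"url": "https://cdn.example.com/emo_5.png", "relevance": 0.92},
    {"url": "https://cdn.example.com/emo_6.png", "relevance": 0.59},
]

recommended = sorted(candidates, key=lambda c: c["relevance"], reverse=True)
for item in recommended:
    print(f'{item["relevance"]:.2f}  {item["url"]}')
```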
FIG. 12 illustrates a specific example which allows a user to select one of a plurality of emoticons displayed in FIG. 11 to transmit the selected emoticon.
Referring to FIG. 12, the emoticons having the scores of 0.64, 0.63, 0.62, 0.71, 0.92, and 0.59 calculated in FIG. 11 are arranged in descending order of relevance (0.92, 0.71, 0.64, 0.63, 0.62, 0.59) and displayed on the terminal or server 100 of the content input unit 2, and the user may select a specific emoticon. Further, in step S150, the selected emoticon is transmitted to the terminal or server 100 of the emoticon receiving unit 4.
In the meantime, FIG. 13 illustrates an example in which, after step S210 described in FIG. 7, the process further includes a step S310 of determining whether any one of a text, a voice, an image, and a moving image is additionally input by means of the terminal or server 100 of the content input unit 2.
In FIG. 13, an additional text saying that "Because I waited all the time, it serves you right, lol" is input through the terminal or server 100 of the content input unit 2 before transmitting the emoticon.
Therefore, the terminal or server 100 of the emoticon generating unit 3 performs a step S320 of analyzing a morpheme based on the additionally input contents ("Because I waited all the time, it serves you right, lol") and an operation of analyzing a context of the additionally input content through the verbs, nouns, adjectives, and punctuation marks (S330), and matches the emoticon in consideration of the added keyword (S340). Further, the terminal or server 100 of the emoticon generating unit 3 reflects the matching result in step S340 and the matching result through the previous step S210 in real time and arranges the emoticons in the order of relevance to transmit the emoticon to the terminal or server 100 of the content input unit 2 (S350).
Thereafter, as illustrated in FIG. 13, the user of the terminal or server 100 of the content input unit 2 reflects the matching result in step S340 and the matching result through the previous step S210 in real time to select a specific emoticon among emoticons arranged in the order of relevance (S230), and transmits the selected emoticon to the terminal or server 100 of the emoticon receiving unit 4 (S150).
However, the above-described configuration of the present invention has been described using an example based on the Korean alphabet, but the contents of the present invention are not limited to the Korean alphabet.
FIGS. 14 and 15 illustrate a specific example which provides a machine learning based artificial intelligence emoticon service in the case of English, in regard to the present invention.
Referring to FIG. 14, when the user of the terminal or server 100 of the content input unit 2 inputs a text 210 of "Yes, I'm on my-" through the display unit 151, a specific example is illustrated in which a plurality of relevant emoticons 220 is displayed on the display unit 151 according to the method of FIG. 3, 6, or 7.
Further, referring to FIG. 15, when the user of the terminal or server 100 of the content input unit 2 inputs a text 230 of "it seems to be a little late...because I go-" through the display unit 151, a specific example is illustrated in which a plurality of relevant emoticons 240 is displayed on the display unit 151 according to the method of FIG. 3, 6, or 7.
In the meantime, the machine learning based artificial intelligence emoticon service providing system 1 according to the present invention may be utilized as a system which does not store emoticon images in a device or an OS, but manages them in a cloud so that they can be flexibly provided in real time in accordance with the usage context.
Further, the machine learning based artificial intelligence emoticon service providing system 1 according to the present invention may be utilized as a system which classifies the data type (emotion, situation, or information) of a text input to a smart phone (terminal) so that the text is automatically converted into a graphic image according to a separate modeling principle.
Further, the machine learning based artificial intelligence emoticon service providing system 1 according to the present invention may be utilized as a system which substitutes a morpheme and a semantic element detected from the text of an instant message into an indirect advertising image to provide an advertising service based thereon.
Further, the machine learning based artificial intelligence emoticon service providing system 1 according to the present invention may be utilized as a system which recognizes a situation and an emotion in a voice message transmitted while chatting to substitute the situation and emotion into an emoticon and adds a voice message to the emoticon to transmit the emoticon and the voice message together.
When the above-described configuration of the present invention is applied, a machine learning based artificial intelligence emoticon service method may be provided to the user.
Specifically, the present invention may provide a system and an application which analyze an emotion element and a context included in a message of a user using a machine learning based artificial intelligence technology and express a character emoticon in real time.
Further, the present invention may provide artificial intelligence technology convergence which recognizes context data (components such as emotion, environment, and an object) through message analysis and reprocesses the context data into an emoticon which is a visual communication tool to a user.
Furthermore, the present invention may provide an application and an API which are not limited to a specific service or application but can be used universally, in consideration of the usage environment in which a user inputs text.
Further, the present invention may build a word vector based artificial intelligence machine learning system which learns the conversation habit patterns of a user and becomes more accurate the more it is used, and may provide the system to the user.
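For illustration, a minimal sketch of training such a word vector model on accumulated conversation logs is given below, using gensim's Word2Vec as one possible implementation; the specification does not prescribe a particular library, and the tokenized conversations shown are placeholders.

```python
# Sketch of a word-vector model trained on accumulated conversation logs
# (as in claim 10 and the learning system described above). gensim is one
# possible choice; the tokenized conversations are hypothetical placeholders.
from gensim.models import Word2Vec

conversations = [
    ["let's", "eat", "chicken"],
    ["i", "am", "hungry", "chicken", "sounds", "good"],
    ["running", "late", "wrong", "bus"],
]

model = Word2Vec(sentences=conversations, vector_size=50, window=3,
                 min_count=1, epochs=50)

# Tokens used in similar conversational contexts obtain similar vectors,
# which can feed the relevance calculation between messages and emoticon tags.
print(model.wv.most_similar("chicken", topn=3))
```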
The present invention may suggest a more accurate and convenient communication experience by combining messaging communication, which accounts for a majority of mobile communications, with artificial intelligence and design technology, and may innovate the usage experience of the emoticon, which has merely played an auxiliary role to text, by analyzing the conversation with artificial intelligence and recombining graphic components in real time, thereby providing unlimited expressions.
Further, the present invention may develop a technology that adds a design to contents which are continuously created in an SNS, blog, and media using not only a message application but also an input interface.
Further, according to the present invention, differently from the usage method of the related art which repeatedly uses purchased emoticon contents, various variables such as keywords, emotions, usage frequency, preference, climate, date, time, place, and issues are analyzed and combined so that a new design may be used every time.
Further, according to the present invention, a character emoticon expression is changed according to an input word or symbol (number) element in real time so that the user can check and express his/her emotion.
Further, according to the present invention, technical restrictions such as the delay generated during the calculation process are mitigated by a user-friendly UX design, and the communication process may be conveyed to the user more intimately.
Furthermore, according to the present invention, precision of the artificial intelligence may be consistently increased by the machine learning.
The present invention can be implemented as computer-readable code in a computer-readable recording medium. The computer-readable recording medium includes all types of recording devices in which data readable by a computer system is stored. Examples of the computer-readable recording medium are a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and the medium may also be implemented in the form of a carrier wave (for example, transmission through the Internet).
Further, the computer-readable recording medium may be distributed over computer systems connected through a network, and computer-readable code may be stored and executed therein in a distributed manner. Further, a functional program, code, and code segments for implementing the present disclosure may be easily deduced by programmers skilled in the art.
In the apparatus and the method described above, the configurations and methods of the embodiments are not applied in a limiting manner; rather, all or a part of each embodiment may be selectively combined such that various modifications may be made.

Claims (10)

  1. A machine learning based artificial intelligence emoticon providing method, the method comprising:
    a first step of inputting at least one first content through a first terminal;
    a second step of transmitting the first content to a server, by means of the first terminal;
    a third step of classifying a text included in the first content by a predetermined unit to generate a plurality of second texts, by means of the server;
    a fourth step of filtering at least one third text which satisfies a predetermined condition, among the plurality of second texts, by means of the server;
    a fifth step of determining at least one first emoticon which matches the third text, among a plurality of previously stored emoticons, by means of the server; and
    a sixth step of transmitting the first emoticon to a second terminal, by means of the server.
  2. The method of claim 1, wherein the first content includes text information, image information, moving image information, and voice information.
  3. The method of claim 2, wherein when the first content is image information or moving image information, in the third step, the server extracts a text included in the image information or the moving image information and classifies the extracted text by a predetermined unit to generate the plurality of second texts; and when the first content is voice information, in the third step, the server converts the voice information into text information and classifies the converted text information by the predetermined unit to generate the plurality of second texts.
  4. The method of claim 1, wherein the predetermined unit in the third step is a morpheme unit and in the third step, at least a part of the plurality of second texts is converted into a basic verb.
  5. The method of claim 1, wherein the predetermined condition in the fourth step is whether the text is a text having meanings.
  6. The method of claim 1, wherein the third texts are plural, and the first emoticons which match the plurality of third texts are plural.
  7. The method of claim 6, wherein the fifth step includes:
    a step 5-1 of classifying the plurality of third texts by at least one category among a plurality of predetermined categories, by means of the server;
    a step 5-2 of counting the number of classified third texts for every category, by means of the server;
    a step 5-3 of assigning a result value obtained by counting the third texts for every category to the third text which belongs to each category, by means of the server;
    a step 5-4 of determining a plurality of first emoticons which matches the plurality of third texts among a plurality of previously stored emoticons, by means of the server; and
    a step 5-5 of determining an arrangement order of the plurality of first emoticons according to the result values of the plurality of third texts, by means of the server.
  8. The method of claim 7, further comprising:
    between the fifth step and the sixth step,
    a step 5-6 of transmitting the plurality of first emoticons and the arrangement order to the first terminal, by means of the server;
    a step 5-7 of displaying the plurality of first emoticons according to the arrangement order, by means of the first terminal;
    a step 5-8 of selecting a second emoticon among the plurality of first emoticons, by means of a user of the first terminal; and
    a step 5-9 of transmitting information on the second emoticon to the server, by means of the first terminal,
    wherein in the sixth step, the server transmits the second emoticon to the second terminal.
  9. The method of claim 1, wherein before the sixth step, when at least one second content is additionally input by means of the first terminal, the first to fifth steps are additionally performed on the second content.
  10. The method of claim 1, wherein the first terminals are plural, data related to the first to sixth steps between the plurality of first terminals and the server is stored in the server, and the server accumulates and uses the stored data to perform machine learning.
PCT/KR2017/001192 2017-01-05 2017-02-03 Machine learning based artificial intelligence emoticon service providing method WO2018128214A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2017-0001950 2017-01-05
KR1020170001950 2017-01-05
KR10-2017-0001949 2017-01-05
KR1020170001949 2017-01-05

Publications (1)

Publication Number Publication Date
WO2018128214A1 true WO2018128214A1 (en) 2018-07-12

Family

ID=62789428

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/001192 WO2018128214A1 (en) 2017-01-05 2017-02-03 Machine learning based artificial intelligence emoticon service providing method

Country Status (1)

Country Link
WO (1) WO2018128214A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090058860A1 (en) * 2005-04-04 2009-03-05 Mor (F) Dynamics Pty Ltd. Method for Transforming Language Into a Visual Form
KR100627853B1 (en) * 2005-06-01 2006-09-26 에스케이 텔레콤주식회사 A method for converting sms message to multimedia message and sending the multimedia message and text-image converting server
WO2008111699A1 (en) * 2007-03-14 2008-09-18 Strastar A method of converting sms mo message to emoticon sms or mms message
US20150286371A1 (en) * 2012-10-31 2015-10-08 Aniways Advertising Solutions Ltd. Custom emoticon generation
US20160283454A1 (en) * 2014-07-07 2016-09-29 Machine Zone, Inc. Systems and methods for identifying and suggesting emoticons

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11562510B2 (en) 2019-12-21 2023-01-24 Samsung Electronics Co., Ltd. Real-time context based emoticon generation system and method thereof
EP3852044A1 (en) * 2020-01-15 2021-07-21 Beijing Dajia Internet Information Technology Co., Ltd. Method and device for commenting on multimedia resource
US11394675B2 (en) 2020-01-15 2022-07-19 Beijing Dajia Internet Information Technology Co., Ltd. Method and device for commenting on multimedia resource

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17890125

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.09.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17890125

Country of ref document: EP

Kind code of ref document: A1