WO2017039258A1 - Device and method for inserting image text - Google Patents

Device and method for inserting image text

Info

Publication number
WO2017039258A1
WO2017039258A1 (PCT/KR2016/009584)
Authority
WO
WIPO (PCT)
Prior art keywords
image
text
style
user
area
Prior art date
Application number
PCT/KR2016/009584
Other languages
English (en)
Korean (ko)
Inventor
전수영
권지용
Original Assignee
스타십벤딩머신 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 스타십벤딩머신 주식회사
Priority claimed from KR1020160109792A external-priority patent/KR101852901B1/ko
Publication of WO2017039258A1 publication Critical patent/WO2017039258A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/166: Editing, e.g. inserting or deleting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services

Definitions

  • The present invention relates to an apparatus and method for inserting image text, and more particularly, to an apparatus and method that convert text entered by a user into a style matching an image the user has selected and insert the converted text into the selected image.
  • As terminals have diversified, they have come to be implemented as multimedia players with complex functions such as capturing photos or video, playing music or video files, gaming, and receiving broadcasts.
  • A terminal provides content to the user by playing or executing content received from an external content-providing server, or stored in advance, on an external display device or on its own display.
  • As related art, Korean Patent Publication No. 10-2012-0005153 is disclosed.
  • However, a conventional terminal merely presents the stored content; the content itself cannot be modified or edited.
  • Moreover, processing an image included in content into a motion image required a complicated procedure and was therefore not intuitive.
  • The background art described above is technical information that the inventors possessed for, or acquired in, the derivation of the present invention, and is not necessarily a publicly known technique disclosed to the general public before the filing of the present application.
  • One embodiment of the present invention provides an image text insertion apparatus and insertion method.
  • According to a first aspect, the apparatus includes: an image storage unit for storing at least one image and matching and storing a style corresponding to each image; an image selection unit for listing the at least one image and receiving a selection of one image from the user; and a style conversion unit for converting the style of text input by the user based on the style matched with the selected image.
  • The apparatus further includes a text image generation unit for inserting the style-converted text into the selected image to generate a text image,
  • and a text image providing unit for providing the text image through a layer provided on at least one area of the screen of the user terminal.
  • According to a second aspect, a method of inserting text into an image includes: listing at least one image and receiving a selection of one image from a user; converting, based on a style matched to the selected image, the style of text input by the user; inserting the style-converted text into the selected image to generate a text image; and providing the text image through a layer provided in at least one area of the screen of the user terminal.
  • As described above, an embodiment of the present invention can provide an image text insertion apparatus and an image text insertion method.
  • According to any one of the above problem-solving means, a natural image in which the image and the text are integrated can be provided by converting the text input by the user according to the style matched to the image and inserting it into that image.
  • FIG. 1 is a network diagram including a text insertion system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically showing the configuration of a text insertion apparatus according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a text insertion method according to an embodiment of the present invention.
  • FIGS. 4 to 6 are exemplary views for explaining a text insertion method according to an embodiment of the present invention.
  • FIG. 1 schematically shows the configuration of a text insertion system 100 according to an embodiment of the present invention.
  • The text insertion system 100 includes a user terminal 10.
  • A program that provides an interface allowing a user to insert text into an image may be installed on the user terminal 10.
  • The program implements a text insertion method according to an embodiment of the present invention.
  • The user terminal 10 on which the program is installed may operate independently according to the text insertion method according to an embodiment of the present invention, or may operate as part of a server-client system implemented together with a server 20.
  • Herein, the term 'image' refers to various kinds of information that are digitally produced, distributed, and edited, and includes, for example, emoticons (both static and dynamic) and videos.
  • Depending on the function provided, an 'image' may include a picture, a video, or the like representing the user's emotion in a chat.
  • The user terminal 10 described above may be implemented on any electronic terminal that includes an interface enabling user interaction.
  • The electronic terminal may be implemented as a computer, a portable terminal, a television, a wearable device, or the like, which can be connected to a remote server through the network N or connected to other terminals and servers.
  • The computer includes, for example, a notebook, desktop, or laptop equipped with a web browser.
  • The portable terminal is, for example, a wireless communication device that ensures portability and mobility.
  • The television may include an Internet Protocol Television (IPTV), an Internet television, a terrestrial TV, a cable TV, or the like.
  • The wearable device is, for example, an information processing device that can be worn directly on the human body, such as a watch, glasses, an accessory, clothing, or shoes, and can be connected to a remote server or another terminal via a network, either directly or through another information processing device.
  • The user terminal 10 may include a wired/wireless communication module, a display, and input devices such as a keyboard, a mouse, a touch sensor, and a non-contact control device.
  • the user terminal 10 may output an image to the display.
  • When the display and the touch sensor form a mutual layer structure or are formed integrally (hereinafter referred to as a 'touch screen'), the display may be used as an input device in addition to an output device.
  • The touch sensor may take the form of a touch film, a touch sheet, a touch pad, or the like.
  • The touch sensor may be stacked on the display to form a layer structure, or may be included in the display.
  • The touch sensor may be configured to convert a change in pressure applied to a specific portion of the display, or in capacitance occurring at a specific portion of the display, into an electrical input signal.
  • The touch sensor may be configured to detect not only the position and area of a touch but also the pressure at the touch.
  • A touch controller processes the signal(s) and then transmits the corresponding data to a control device, so that the control device can determine which area of the display was touched.
  • the user terminal 10 may communicate with the server 20 via the network (N).
  • The network N may be implemented as any kind of wired/wireless network, such as a local area network (LAN), a wide area network (WAN), a value added network (VAN), a personal area network (PAN), a mobile radio communication network, wireless broadband Internet (WiBro), mobile WiMAX, high speed downlink packet access (HSDPA), or a satellite communication network.
  • The server 20 may be a server for sharing edited content with other user terminals; for example, it may be a web server capable of providing a web service to each user terminal 10 or to other servers, or it may be a server system of various web content providers, such as a portal site server or a chat service providing server.
  • The server 20 may implement a server-client system together with the user terminal 10 to support text insertion into an image.
  • The server 20 described above may be implemented as a group of server systems including a web server, such as a load balancing server and a database server.
  • Referring to FIG. 2, the text insertion apparatus 10 includes an image storage unit 110, a layer providing unit 120, an image selection unit 130, a style conversion unit 140, a text image generation unit 150, and a text image providing unit 160.
  • The text insertion apparatus 10 may also be implemented with more or fewer components than these.
  • The image storage unit 110 may store at least one image, and may match and store a style corresponding to each image.
  • Specifically, the image storage unit 110 may store at least one image usable while chatting or creating a post on a blog or SNS, and each image may be matched with a style, that is, a rule for the appearance of the characters, to suit that image.
  • Here, the style is information on the font, size, color, three-dimensional effect, pattern (texture), outline, effect around the characters, or dynamic effect (animation) of the characters.
  • For example, the image storage unit 110 may store a character image with a serious appearance, and that character image may be matched with a style defined by a 'bow' font type and a font size of '10'.
  • As another example, the image storage unit 110 may store an angry character image, and that character image may be matched with a style defined by a 'gothic' font and a font size of '13', with a flame effect around the characters and a dynamic shaking effect on the characters.
  • In addition, the image storage unit 110 may match and store, for each of the at least one image, information on the text area, that is, the area into which text is inserted.
  • Specifically, the image storage unit 110 may receive a text area for inserting text for each image, and may match and store that text area.
  • For example, for a surprised image, the image storage unit 110 may receive from the administrator a text area at the upper right of the image, and may match and store that text area with the surprised image.
  • Alternatively, the image storage unit 110 may initially set the lower end of each image as the text area for all images; when the user changes the text area for a specific image, the changed area may be updated and saved as the text area corresponding to that image.
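As a concrete sketch of the image storage unit 110 described above, the matching of styles and text areas to images can be modeled as below. All names, fields, and default values here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Style:
    font: str
    size: int
    suffix: str = ""      # extra emphasis characters, e.g. "!!!"

@dataclass
class TextArea:
    x: int
    y: int
    width: int
    height: int

@dataclass
class StoredImage:
    name: str
    style: Style
    # Default text area: the lower strip of a (hypothetical) 100x100 image,
    # mirroring the "lower end of each image" default in the description.
    area: TextArea = field(default_factory=lambda: TextArea(0, 80, 100, 20))

class ImageStore:
    """Matches and stores a style and a text area for each image."""

    def __init__(self):
        self._images = {}

    def add(self, image: StoredImage):
        self._images[image.name] = image

    def set_text_area(self, name: str, area: TextArea):
        # When the user changes the text area for a specific image,
        # the stored area for that image is updated.
        self._images[name].area = area

    def get(self, name: str) -> StoredImage:
        return self._images[name]

store = ImageStore()
store.add(StoredImage("surprised", Style(font="gothic", size=13, suffix="!!!")))
store.set_text_area("surprised", TextArea(60, 0, 40, 20))  # upper-right corner
print(store.get("surprised").area)  # the updated, image-specific text area
```

The per-image default plus per-image override mirrors the two storage behaviors the description gives: a common initial area and a user-changed area saved back per image.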
  • The layer providing unit 120 may provide a layer, in which at least one of the at least one image and the text image is displayed, on at least one area of the screen of the user terminal.
  • The layer providing unit 120 may detect a user input, and may provide the layer when the user input is detected.
  • For example, the layer providing unit 120 may provide a layer of constant size, on top of the keypad displayed on the screen, in which an image or a text image (an image into which the user's input text has been inserted) can be displayed.
  • The image selection unit 130 may list at least one image and receive a selection of one image from the user.
  • Specifically, the image selection unit 130 may detect a user input of text, and may provide at least one image through the layer when the user input is detected. For example, when the user enters text in an input window, the image selection unit 130 may provide at least one image on the layer provided on the keypad.
  • At this time, the image selection unit 130 may analyze the text the user entered in the input window, determine an image suited to the meaning of the text, and display it differently from the other images.
  • For example, the image selection unit 130 may display an image of disappointment larger than the other images.
  • Alternatively, the image selection unit 130 may provide at least one image on the layer according to a user's image providing request. For example, when the user selects an image display button to request images, the image selection unit 130 may provide at least one image through the layer.
  • The image selection unit 130 may then receive the user's selection of any one of the at least one image provided through the layer.
  • When the selection is received, the image selection unit 130 may display only the selected image on the layer, or may enlarge the selected image relative to the remaining images.
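The meaning-based suggestion behavior described for the image selection unit 130 (and the selection-frequency sorting mentioned later for step S3002) could be sketched as below. The keyword tags and selection counts are invented for illustration; real implementations would use a proper text-analysis step.

```python
from collections import Counter

# Hypothetical per-image keyword tags standing in for "meaning analysis".
IMAGE_KEYWORDS = {
    "surprised": {"no", "what", "really"},
    "angry": {"stop", "hate"},
    "disappointed": {"sorry", "sad"},
}

# Hypothetical record of how often the user has picked each image.
selection_counts = Counter({"surprised": 5, "angry": 2, "disappointed": 7})

def suggest_images(text: str) -> list[str]:
    """Rank images by keyword overlap with the text, then by selection frequency."""
    words = set(text.lower().split())

    def score(name: str) -> tuple[int, int]:
        overlap = len(words & IMAGE_KEYWORDS[name])
        return (overlap, selection_counts[name])

    return sorted(IMAGE_KEYWORDS, key=score, reverse=True)

print(suggest_images("no way"))  # ['surprised', 'disappointed', 'angry']
```

With no keyword match at all, the ordering falls back to pure selection frequency, which matches the frequency-sorted display described later in the method.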
  • The style conversion unit 140 may convert the style of the text input by the user based on the style matched with the selected image.
  • Specifically, the style conversion unit 140 may change at least one of the font, font size, and font color of the text based on the style matched with the selected image.
  • For example, in accordance with the style matched to a surprised image, the style conversion unit 140 may change the font of the text 'no' entered by the user to gothic and thicken the strokes of the characters.
  • In addition, the style conversion unit 140 may insert at least one additional character into the received text based on the style of the selected image.
  • For example, in accordance with the surprised image selected by the user, the additional characters '!!!' may be appended to the entered text 'no' to emphasize its expression, generating the text 'no!!!'.
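A minimal sketch of the style conversion step just described: the entered text's font, size, and weight are replaced by the attributes matched to the selected image, and emphasis characters may be appended. The dictionary-based style representation is an assumption for illustration.

```python
def convert_style(text: str, style: dict) -> dict:
    """Apply an image's matched style to user-entered text."""
    return {
        "text": text + style.get("suffix", ""),  # e.g. "no" -> "no!!!"
        "font": style.get("font", "default"),
        "size": style.get("size", 12),
        "bold": style.get("bold", False),
    }

# Hypothetical style matched to a surprised image.
surprised_style = {"font": "gothic", "size": 13, "bold": True, "suffix": "!!!"}
print(convert_style("no", surprised_style))
# prints {'text': 'no!!!', 'font': 'gothic', 'size': 13, 'bold': True}
```

Keeping the conversion as a pure function of (text, style) is what makes the real-time re-conversion described later cheap: it can simply be re-run on every keystroke.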
  • The text image generation unit 150 may generate a text image by inserting the style-converted text into the selected image.
  • Specifically, the text image generation unit 150 may place the text whose style was changed by the style conversion unit 140 in the text area of the selected image.
  • That is, the text image generation unit 150 may extract the information on the text area of the image selected by the user from the image storage unit 110,
  • and may insert the style-changed text into the text area of the image based on the extracted text area information.
  • At this time, the text image generation unit 150 may change the composition of the text based on the information on the text area of the selected image.
  • For example, the text image generation unit 150 may reduce the size of the style-changed text according to the size of the text area of the image.
  • For instance, the text image generation unit 150 may change the size of the text to 2.5 cm wide and 1 cm high.
  • Alternatively, the text image generation unit 150 may change the arrangement of the text based on the text area of the selected image.
  • For example, the text image generation unit 150 may rearrange the text into two lines based on the text area of the selected image.
  • The text image generation unit 150 may then generate the text image by inserting the style-changed text into the text area of the image.
  • At this time, the text image generation unit 150 may receive the style-changed text from the style conversion unit 140 in real time and insert it into the text area of the selected image.
  • Alternatively, the text image generation unit 150 may receive the style-changed text from the style conversion unit 140 and insert it into the text area of the image.
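The resizing and line-rearranging behavior described above (shrink the styled text to the text area, wrapping it onto extra lines when the area is too narrow) can be sketched as follows. The character-width metric (0.6 × font size) is a crude assumption; a real layout engine would measure glyphs with the actual font.

```python
def fit_text(text: str, area_w: float, area_h: float, size: float):
    """Shrink the font size until the wrapped text fits the text area."""
    while size > 1:
        char_w = 0.6 * size                         # assumed average glyph width
        chars_per_line = max(1, int(area_w // char_w))
        lines = [text[i:i + chars_per_line]
                 for i in range(0, len(text), chars_per_line)]
        if len(lines) * size <= area_h:             # all lines fit vertically
            return size, lines
        size -= 1                                   # otherwise shrink and retry
    return size, [text]

size, lines = fit_text("smells fishy", area_w=25.0, area_h=10.0, size=6)
print(size, lines)  # 5 ['smells f', 'ishy']
```

This reproduces both behaviors in one pass: the text ends up smaller than requested and arranged in two lines, exactly the kind of composition change the description attributes to the text image generation unit.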
  • The text image providing unit 160 may provide the text image through the layer provided on at least one area of the screen of the user terminal.
  • Specifically, the text image providing unit 160 may receive the text image from the text image generation unit 150 and display it on the layer provided by the layer providing unit 120.
  • For example, the text image providing unit 160 may receive the text image generated by the text image generation unit 150 and display it on the layer provided on the keypad that is displayed for the user to input text.
  • When the user requests transmission of the chat message entered in the input window, the text image providing unit 160 may remove the text image displayed on the layer and display the text image in the chat window in which the chat message is displayed.
  • Alternatively, the text image providing unit 160 may provide the generated text image to a social network service (SNS) selected by the user.
  • For example, when the user requests transmission to post the generated text image to his or her SNS, the text image providing unit 160 may provide a list of at least one SNS, and may transmit the text image to the server of each SNS selected by the user for posting.
  • At this time, the text image providing unit 160 may receive member information from the user in order to access the user's SNS account.
  • The text insertion method according to the embodiment shown in FIG. 3 includes steps processed in time series by the text insertion apparatus 10 shown in FIG. 2. Therefore, even where omitted below, the descriptions given above for the text insertion apparatus 10 of FIG. 2 also apply to the text insertion method according to the embodiment shown in FIG. 3.
  • The text insertion apparatus 10 may match a style to each of at least one image, and may match and store, for each of the at least one image, information on the text area into which text is inserted (S3001).
  • Specifically, the text insertion apparatus 10 may match and store, for each image, a style, that is, information on the font, size, or weight of text whose feel is most similar to the expression represented by the image.
  • For example, for an image with a surprised appearance, the text insertion apparatus 10 may store a bold gothic font with a font size of 12 as the style.
  • The text insertion apparatus 10 may store the style for each image in advance, as set by an administrator or as received from the user.
  • The text insertion apparatus 10 may also store a text area matched to each image.
  • For example, the text insertion apparatus 10 may set the bottom of an angry image as the text area into which the text input by the user is to be inserted, and may match and store the set text area with the image.
  • The text insertion apparatus 10 may store a preset text area, or may store a location set by the user as the text area.
  • Next, the text insertion apparatus 10 may list at least one image and receive a selection of one image from the user (S3002).
  • Specifically, the text insertion apparatus 10 may provide a layer, in which an image or a text image is displayed, on at least a portion of the screen of the user terminal.
  • For example, the text insertion apparatus 10 may provide a separate semitransparent layer over the input window through which the user enters text.
  • The text insertion apparatus 10 may then list and provide at least one image stored in step S3001 through the provided layer.
  • For example, the text insertion apparatus 10 may detect a user input and, when a user input is detected, may provide an image list listing the at least one image through the layer so that any one image can be chosen.
  • In other words, the text insertion apparatus 10 may provide at least one image through the layer.
  • At this time, the text insertion apparatus 10 may optionally analyze the text input by the user and select and provide an image corresponding to the meaning of the text.
  • FIG. 4 is an exemplary diagram according to an embodiment of the present invention: when the text insertion apparatus 10 detects a user input in the input window 401, at least one image 405 may be provided on the layer 403 above the keypad 402.
  • At this time, the text insertion apparatus 10 may analyze the meaning of the text entered in the input window 401 and provide, on the layer 403, at least one image corresponding to the analyzed meaning.
  • Alternatively, the text insertion apparatus 10 may provide at least one image through the layer before receiving text input from the user, and may receive text from the user after receiving the selection of one image.
  • That is, the text insertion apparatus 10 may display at least one image through the layer according to a user's image providing request before receiving text.
  • At this time, the text insertion apparatus 10 may sort and display the images according to how frequently the user has selected each image.
  • For example, the text insertion apparatus 10 may provide at least one image through a layer 501 provided at the bottom of the screen 500 of the user terminal.
  • When the user selects an image, the text insertion apparatus 10 may display the selected image 502 on an additionally provided layer 503.
  • Next, the text insertion apparatus 10 may convert the style of the text input by the user based on the style matched with the selected image (S3003).
  • Specifically, the text insertion apparatus 10 may convert the text input by the user according to the style of the image selected in step S3002 in real time.
  • Alternatively, the text insertion apparatus 10 may convert the input text according to the style of the image selected in step S3002 when the user completes text input (for example, by clicking a send or save button).
  • At this time, the text insertion apparatus 10 may insert at least one additional character into the received text based on the style of the selected image.
  • For example, the text insertion apparatus 10 may convert the input text 'what is' into 'What !!!' by adding '!!!' in accordance with the style matched to the selected image.
  • As another example, the text insertion apparatus 10 may convert 'smells fishy', entered in the input window 601, into gothic, the style matched to the 'head-to-head' image 602 selected by the user.
  • Through this, the appearance of the text can correspond to the expression represented by the image.
  • Next, the text insertion apparatus 10 may generate a text image by inserting the style-converted text into the image selected in step S3002 (S3004).
  • Specifically, the text insertion apparatus 10 may extract the information on the text area matched to each image and stored in step S3001, and may insert the text whose style was converted in step S3003 based on the extracted text area information.
  • At this time, the text insertion apparatus 10 may generate the text image by inserting the converted text into the text area of the selected image in real time. Through this, the text insertion apparatus 10 can keep the text input by the user and the text inserted in the text image synchronized.
  • Alternatively, when the style is converted upon completion of text input in step S3003, the text insertion apparatus 10 may insert the text into the image and generate the text image at the point when the text has been input and the style converted.
  • Finally, the text insertion apparatus 10 may generate the text image, an image in which the text is inserted into the text area of the selected image, and may provide it through the layer provided in step S3002 (S3005).
  • For example, when the text insertion apparatus 10 detects the user's text input, it may insert the text input by the user in real time into the text area of the image the user selected, through steps S3002 to S3004, and provide the result as a text image.
  • For example, the text insertion apparatus 10 may insert 'smell fishy' 603, the text generated by changing the style of the text entered in the input window, into the text area of the image 602 selected by the user,
  • and the text image 604 with the inserted text may be displayed through the layer 605.
  • Alternatively, the text insertion apparatus 10 may provide the text image by replacing the image displayed in step S3002 with the image whose text area now contains the text input through steps S3002 to S3004.
  • Alternatively, the text insertion apparatus 10 may provide the generated text image by transmitting it to an SNS selected by the user.
  • For example, the text insertion apparatus 10 may provide the user with at least one SNS for posting the text image, and, upon receiving a selection of at least one SNS from the user, may transmit the text image to the server of the selected SNS.
  • At this time, the text insertion apparatus 10 may receive member information from the user in order to access the user's SNS account.
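The steps S3001 to S3005 above can be strung together as a small end-to-end sketch: store styled images, select one, convert the entered text's style, place it into the text area, and hand the result to a layer. All data shapes and helper names are illustrative, not from the patent.

```python
def insert_text_pipeline(store: dict, image_name: str, user_text: str) -> dict:
    """Run the S3001-S3005 flow over a pre-populated image store."""
    image = store[image_name]                        # S3002: image selected
    style = image["style"]                           # matched in S3001
    styled = user_text + style.get("suffix", "")     # S3003: style conversion
    image["area"]["content"] = styled                # S3004: insert into text area
    return {"image": image_name, "layer": image}     # S3005: provide via layer

# Hypothetical store built in S3001: one surprised image with its style and area.
store = {
    "surprised": {
        "style": {"font": "gothic", "suffix": "!!!"},
        "area": {"position": "upper-right", "content": None},
    },
}

result = insert_text_pipeline(store, "surprised", "what")
print(result["layer"]["area"]["content"])  # what!!!
```

Because the pipeline is a plain function of the store and the current text, re-invoking it on every keystroke gives the real-time synchronization between the input window and the displayed text image that the description calls for.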
  • the text insertion method according to the embodiment described with reference to FIG. 3 may also be implemented in the form of a recording medium including instructions executable by a computer, such as a program module executed by the computer.
  • Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • computer readable media may include both computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transmission mechanism, and includes any information delivery media.
  • the text insertion method may be implemented as a computer program (or computer program product) including instructions executable by a computer.
  • the computer program includes programmable machine instructions processed by the processor and may be implemented in a high-level programming language, an object-oriented programming language, an assembly language, or a machine language.
  • the computer program may also be recorded on tangible computer readable media (eg, memory, hard disks, magnetic / optical media or solid-state drives, etc.).
  • the text insertion method may be implemented by executing the computer program as described above by the computing device.
  • the computing device may include at least a portion of a processor, a memory, a storage device, a high speed interface connected to the memory and a high speed expansion port, and a low speed interface connected to the low speed bus and the storage device.
  • Each of these components is connected to the others using various buses, and may be mounted on a common motherboard or in another suitable manner.
  • The processor may process instructions within the computing device, such as instructions stored in memory or in the storage device, to display graphical information for providing a graphical user interface (GUI) on an external input/output device, such as a display connected to the high speed interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories and memory types as appropriate.
  • the processor may also be implemented as a chipset consisting of chips comprising a plurality of independent analog and / or digital processors.
  • the memory also stores information within the computing device.
  • the memory may consist of a volatile memory unit or a collection thereof.
  • the memory may consist of a nonvolatile memory unit or a collection thereof.
  • the memory may also be other forms of computer readable media, such as, for example, magnetic or optical disks.
  • The storage device can provide a large amount of storage space to the computing device.
  • The storage device may be a computer readable medium, or a configuration including such a medium; it may include, for example, devices or other configurations within a storage area network (SAN), and may be a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory, or another similar semiconductor memory device or device array.
  • The present invention is industrially applicable to an apparatus and method for inserting image text that convert text input by a user into a style matching the image selected by the user and insert the converted text into the selected image.

Abstract

The present invention relates to a device and method for inserting image text. The device for inserting image text according to a first embodiment of the present invention comprises: an image storage unit for storing at least one image, and for matching and storing a style corresponding to each image; an image selection unit for listing the at least one image and receiving a selection of an image from a user; a style conversion unit for converting the style of text entered by the user based on the style matched to the selected image; a text image generation unit for inserting the style-converted text into the selected image to generate a text image; and a text image providing unit for providing the text image through a layer provided on at least one area of a screen of a user terminal.
PCT/KR2016/009584 2015-08-28 2016-08-29 Device and method for inserting image text WO2017039258A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20150121667 2015-08-28
KR10-2015-0121667 2015-08-28
KR10-2016-0109792 2016-08-29
KR1020160109792A KR101852901B1 (ko) 2015-08-28 2016-08-29 Image text insertion apparatus and insertion method

Publications (1)

Publication Number Publication Date
WO2017039258A1 2017-03-09

Family

ID=58187928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/009584 WO2017039258A1 (fr) 2015-08-28 2016-08-29 Device and method for inserting image text

Country Status (1)

Country Link
WO (1) WO2017039258A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001175235A * 1999-12-16 2001-06-29 Nec Corp Portable wireless communication terminal and style processing method therefor
KR20060083240A * 2005-01-14 2006-07-20 (주)필링크 Multimedia message generation system and method
WO2014068573A1 * 2012-10-31 2014-05-08 Aniways Advertising Solutions Ltd. Generation of personalized smileys
KR20140057451A * 2012-11-02 2014-05-13 박상현 System for superimposing an emoticon on an image and registering it with one or more messenger servers
KR20150068509A * 2013-12-11 2015-06-22 에스케이플래닛 주식회사 Communication method using an image in a messenger program, and apparatus and system therefor



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16842224; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16842224; Country of ref document: EP; Kind code of ref document: A1)