WO2018038277A1 - Message sharing method for sharing image data reflecting the status of each user via a chat room, and computer program for executing the method - Google Patents
Message sharing method for sharing image data reflecting the status of each user via a chat room, and computer program for executing the method
- Publication number: WO2018038277A1
- Application: PCT/KR2016/009218
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- text
- user
- message
- sharing
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/109—Font handling; Temporal or kinetic typography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
- H04L51/046—Interoperability with other network applications or services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/222—Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Definitions
- The present invention relates to a message sharing method for sharing image data reflecting the state of each user via a chat room, and to a computer program for executing the method.
- A chat system is a system in which users communicate by exchanging video and messages through their respective terminals. Conventionally, users have been able to share text or image data input through each user terminal in a chat room.
- The present invention can provide a message sharing method, and a computer program, that extracts an object related to a user's emotion, location, or surroundings in consideration of the text input by the user, the image data captured by a camera, and the like, and generates image data in which the object is appropriately combined.
- The present invention can also provide a message sharing method and computer program for controlling the generated image data and the text to be shared through a chat room.
- a computer program according to an embodiment of the present invention may be stored in a medium to execute any one of the message sharing methods according to an embodiment of the present invention using a computer.
- Other methods for implementing the present invention, other systems, and computer-readable recording media recording computer programs for executing those methods are further provided.
- The message sharing method and computer program according to embodiments of the present invention extract an object related to the user's emotion, location, or surroundings in consideration of the text input by the user, the image data captured by the camera, and the like, and can generate image data in which the object is appropriately combined.
- The message sharing method and computer program according to embodiments of the present invention can control the composite message to be shared through the chat room.
- FIG. 1 is a block diagram showing the structure of a user terminal according to embodiments of the present invention.
- FIG. 2 is a view showing the structure of a chat system according to embodiments of the present invention.
- FIGS. 4 through 8 are diagrams illustrating examples of image data converted by the message sharing method and of the generated composite messages.
- FIGS. 9 and 10 are flowcharts of message sharing methods according to embodiments of the present invention.
- The term 'circuit' may refer to, alone or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by, for example, programmable circuitry.
- the application may be implemented as code or instructions that may be executed on programmable circuitry, such as a host processor or other programmable circuitry.
- a module may be implemented as a circuit.
- the circuit can be implemented as an integrated circuit, such as an integrated circuit chip.
- FIG. 1 is a view showing a user terminal 100 according to an embodiment of the present invention.
- A user terminal 100 may include a camera 110, an input unit 120, an output unit 130, a sensor unit 140, a processor 150, and a storage medium 160.
- the camera 110 may obtain an image frame such as a still image or a moving image through an image sensor in a video call mode or a photographing mode.
- the image captured by the image sensor may be processed by the processor 150 or a separate image processor (not shown).
- the image frame processed by the camera 110 may be stored in the storage medium 160 or transmitted to the outside. Two or more cameras 110 may be provided according to the configuration of the terminal.
- The camera 110 may further include a microphone that receives an external sound signal and processes it into electrical voice data.
- the microphone may use various noise reduction algorithms to remove noise generated while receiving an external sound signal.
- The input unit 120 refers to a means by which a user inputs data for controlling the user terminal 100.
- The input unit 120 may include a keypad, a dome switch, a touch pad (contact capacitive type, pressure resistive type, infrared sensing type, surface ultrasonic conduction type, integral tension measurement type, piezo effect type, etc.), a jog wheel, a jog switch, and the like, but is not limited thereto.
- the output unit 130 outputs the information processed or generated in the user terminal 100.
- the output unit 130 may output a user interface provided when the game application is executed.
- When the output unit 130 and a touch pad form a layer structure to constitute a touch screen,
- the output unit 130 may be used as an input device in addition to an output device.
- The output unit 130 may include at least one of a liquid crystal display, a thin-film-transistor liquid crystal display, an organic light-emitting diode display, a flexible display, a three-dimensional (3D) display, and an electrophoretic display.
- Depending on its implementation form, the user terminal 100 may include two or more display units 131. In this case, the two or more display units 131 may be disposed to face each other using a hinge.
- the sensor unit 140 may include a GPS sensor that calculates a geographical position of the user terminal 100 using satellite communication.
- the processor 150 typically controls the overall operation of the user terminal 100.
- The processor 150 may generally control the camera 110, the input unit 120, the output unit 130, the sensor unit 140, and the like through instructions stored in the storage medium 160.
- the processor 150 may include any kind of device capable of processing data, such as a processor.
- Here, 'processor' may refer to a data processing apparatus embedded in hardware having, for example, a circuit physically structured to perform a function represented by code or instructions included in a program.
- Examples of such a data processing apparatus embedded in hardware may include a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), and a field programmable gate array (FPGA), but the scope of the present invention is not limited thereto.
- the storage medium 160 may store various data and software used during the operation of the user terminal 100, such as an operating system, an application, a program, a library, and a driver.
- The storage medium 160 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
- the user terminal 100 may operate a web storage or a cloud server that performs a storage function of the storage medium 160 on the Internet.
- Programs stored in the storage medium 160 may be classified into a plurality of modules according to their functions.
- the programs may be classified into a UI module, a touch screen module, a notification module, and the like.
- the image receiver 161 receives image data generated by the camera 110.
- The image receiver 161 may capture image data for a preset time based on the time point at which a transmission input is received through the input unit 120.
- the text receiver 162 receives the text input through the input unit 120.
- The object extracting unit 163 analyzes the text and the image data, and extracts an object using at least one of the result of analyzing the text, the result of analyzing the image data, and a sensing value obtained from the sensor unit 140.
- Here, an object refers to sensory data such as visual data or audio data, and may further include an effect that modifies a base image, such as an animation effect.
- The object extractor 163 may extract a first object related to the user using the result of analyzing the text. More specifically, the text input by the user may be analyzed using a natural language processing method, and an object related to the user may be extracted using the analysis result.
- The object extractor 163 may separate the text input by the user into morpheme units, extract an adjective representing the user's emotion from the text, and extract an object related to the emotion the user has based on the extracted adjective. For example, when the text input by the user includes the adjective 'sad', the object extractor 163 may extract an image corresponding to the user's sad emotion as an object. Likewise, the object extractor 163 may separate the text into morpheme units, extract a verb corresponding to the user's current behavior from the text, and extract an object related to the user's current behavior based on the extracted verb.
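As an illustration of this morpheme-level analysis, the following minimal Python sketch extracts an emotion adjective from Korean text and looks up a matching object image. It assumes the open-source KoNLPy morphological analyzer (the patent does not name a specific analyzer), and the `EMOTION_OBJECTS` table is a hypothetical lexicon.

```python
# A minimal sketch of morpheme-based emotion-object extraction.
# Assumes the KoNLPy toolkit; EMOTION_OBJECTS is a hypothetical lexicon.
from konlpy.tag import Okt

EMOTION_OBJECTS = {
    "슬프다": "object_sad_tears.png",      # 'sad'   -> tear overlay
    "화나다": "object_angry_mark.png",     # 'angry' -> anger-mark overlay
    "기쁘다": "object_happy_sparkle.png",  # 'happy' -> sparkle overlay
}

def extract_emotion_object(text: str):
    """Split the text into morphemes, find an emotion adjective,
    and return the object image associated with that emotion."""
    okt = Okt()
    for morpheme, tag in okt.pos(text, stem=True):  # stem to dictionary form
        if tag == "Adjective" and morpheme in EMOTION_OBJECTS:
            return EMOTION_OBJECTS[morpheme]
    return None  # no emotion adjective found

print(extract_emotion_object("오늘은 너무 슬프다"))  # -> object_sad_tears.png
```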
- For example, when the extracted verb corresponds to watching a movie, the object extractor 163 may extract a theater image as an object related to the user's action.
- The object extractor 163 may separate the text input by the user into morpheme units, extract an adverb indicating the place where the user is located from the text, and extract an object related to that place based on the adverb.
- The object extractor 163 may analyze image data photographed through the camera and extract an object related to the user using the analysis result. More specifically, the object extractor 163 may determine the area corresponding to the user in the image data currently captured by the camera as an analysis area, extract an edge or feature points included in the analysis area, determine the facial expression of the person included in the analysis area based on the edge or feature points, and extract an object corresponding to that facial expression. For example, when the image data photographed through the camera includes a person with an angry expression, the object extractor 163 may extract an image corresponding to 'angry' as an object.
- The object extractor 163 may also determine the area corresponding to the user in the image data currently captured by the camera as the analysis area, extract an edge or feature points included in the analysis area, and extract additional text from the image data by analyzing the mouth shape of the person included in the analysis area based on the edge or feature points. The object extractor 163 may generate an image corresponding to the additional text as a second object.
- The object extractor 163 may extract a face region corresponding to a human face included in the image data using a face recognition algorithm, and may place the first object or the second object related to the user in correspondence with the face region.
- The object extractor 163 may extract an object related to the user in consideration of the user's position acquired by the sensor unit. For example, when the user's location acquired by the sensor unit is 'Paris', the object extractor 163 may extract the Eiffel Tower, a representative building corresponding to 'Paris', as an object related to the user. In this way, the object extractor 163 may extract a representative building corresponding to the user's location, the location itself, a background around the location, and the like as an object related to the user.
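A minimal sketch of this location lookup might be as simple as the table below; the landmark mapping is hypothetical, and a real implementation would presumably query a place database.

```python
# A sketch of location-based object extraction; the table is hypothetical.
LANDMARK_OBJECTS = {
    "Paris": "object_eiffel_tower.png",     # representative building
    "Seoul": "object_namsan_tower.png",
    "New York": "object_statue_of_liberty.png",
}

def extract_location_object(location: str):
    """Map the sensed location to a representative landmark image."""
    return LANDMARK_OBJECTS.get(location)

print(extract_location_object("Paris"))  # -> object_eiffel_tower.png
```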
- The object extractor 163 may extract an object related to the user in consideration of weather information obtained through another application (search, portal, weather notification, etc.) installed in the user terminal or obtained through the sensor unit. For example, when the current weather is 'cloudy', the object extractor 163 may extract an image corresponding to 'cloudy', such as a cloud or a gray sky, as an object.
- the object extractor 163 may determine a season at the current time point by using date and time information, and extract an image according to the season as an object.
- The date and time information may be obtained through a base station or the Internet, or determined based on information input by the user.
- The object extracting unit 163 may extract an image of sprouting plants as an object corresponding to spring, an image of a beach or a bikini as an object corresponding to summer, an image of autumn leaves as an object corresponding to autumn, and an image of snow or a snowman as an object corresponding to winter.
- The object extractor 163 may extract an image of a tree, Santa Claus, or the like as an object corresponding to the Christmas season. If the current date falls on a seasonal occasion such as 'jungbok' (the midsummer dog days), the object extractor 163 may extract images such as 'samgyetang', 'air conditioner', 'water play', or 'beach' corresponding to that occasion.
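The date-driven branch could look like the following sketch, where both the month-to-season rule and the object table are hypothetical examples.

```python
# A sketch of season/occasion-based object extraction.
# The month rules and object table are hypothetical examples.
from datetime import date

SEASON_OBJECTS = {
    "spring": "object_sprout.png",
    "summer": "object_beach.png",
    "autumn": "object_autumn_leaves.png",
    "winter": "object_snowman.png",
}

def extract_season_object(today: date) -> str:
    """Determine the current season or occasion from the date."""
    if (today.month, today.day) == (12, 25):
        return "object_santa_claus.png"      # Christmas takes priority
    seasons = {3: "spring", 4: "spring", 5: "spring",
               6: "summer", 7: "summer", 8: "summer",
               9: "autumn", 10: "autumn", 11: "autumn"}
    return SEASON_OBJECTS[seasons.get(today.month, "winter")]

print(extract_season_object(date(2016, 8, 22)))  # -> object_beach.png
```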
- The object extracted by the object extractor 163 may be two-dimensional data or three-dimensional data, and may change over time.
- the object extractor 163 may add animation effects to the extracted object.
- For example, the object extractor 163 may add an animation effect of rotating fan blades to a fan object.
- The object extractor 163 may add a tear effect to express a sad person, or a raining animation effect to express rainy weather.
- The object extractor 163 may add an effect of firecrackers gradually growing larger to express fireworks.
- The content generator 164 may edit the image data to further include the extracted object. More specifically, the content generator 164 may determine a display position for each object so that the extracted object is represented properly. For objects that should be displayed on a person's face, such as a shy image or an angry image, the content generation unit 164 may determine the position of the extracted object in consideration of the human face in the image data. In addition, the content generation unit 164 may edit the image data so that the extracted object is displayed on the background of the image data. For example, the content generation unit 164 may include the text obtained from the image data in the image data, and change the image data so that the text is expressed as if spoken from the person's mouth. The content generator 164 may place an extracted 'angry' image above the person's head, or change the image data so that an extracted 'tear' image flows from the person's eye.
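A compositing step in this spirit is sketched below with Pillow; `face_box` is assumed to come from the face recognition step described earlier, the file paths are hypothetical, and the placement rule (object above the head) is one of the examples from the paragraph above.

```python
# A sketch of object compositing, assuming Pillow; face_box comes from
# the face recognition step, and the file paths are hypothetical.
from PIL import Image

def compose_object(image_path: str, object_path: str, face_box):
    """Paste an extracted object (e.g. an 'angry' mark) directly above
    the detected face in the image data."""
    base = Image.open(image_path).convert("RGBA")
    obj = Image.open(object_path).convert("RGBA")
    x, y, w, h = face_box
    obj = obj.resize((w, max(1, h // 2)))        # scale object to the face
    pos = (x, max(0, y - obj.height))            # just above the head
    base.alpha_composite(obj, dest=pos)
    return base
```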
- The content generation unit 164 may convert the image data into a thumbnail. In this case, the content generation unit 164 may convert the image data into a thumbnail according to the user's profile format. The content generating unit 164 may convert the size (resolution) of the image data or delete a part of the image data according to the profile format. In addition, the content generator 164 may reduce the storage size of the image data by lowering its resolution according to the profile format.
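The thumbnail conversion described here amounts to scaling down and cropping to the profile format; a minimal Pillow sketch follows, with the 128x128 profile format as an assumed example.

```python
# A thumbnail-conversion sketch, assuming Pillow; the profile format
# (128x128) is a hypothetical example.
from PIL import Image, ImageOps

PROFILE_SIZE = (128, 128)

def to_profile_thumbnail(img: Image.Image) -> Image.Image:
    """Lower the resolution and crop part of the edited image data to fit
    the profile format, reducing its storage size as a side effect."""
    return ImageOps.fit(img, PROFILE_SIZE)  # scale down and center-crop
```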
- Here, a profile refers to user-related information displayed together with the messages provided in the chat room.
- the content generator 164 may generate a composite message combining the converted image data and the text.
- The content generating unit 164 may convert the text into an image in consideration of at least one of the result of analyzing the text, the result of analyzing the image data, and the sensing value.
- The content generator 164 may not only change the font type, font size, and font color of the text but also express the text itself as an image. That is, the content generator 164 may generate a composite message combining the converted image data and the text converted into an image.
- The content generator 164 not only provides the text input by the user, but also generates a composite message combining the currently photographed image data with an object reflecting the user's emotion, location, current weather, season, and the like.
- Through this, the user terminal 100 may provide a composite message visualizing information such as the user's emotion, location, and weather, so that other users can acquire additional information related to the user without having to read the text.
- The content generation unit 164 may change the font, font size, text color, and the like used to represent the text even when providing the text as such.
- The content generation unit 164 may display the morphemes included in a single text with different fonts, font sizes, and text colors.
- The content generating unit 164 may display the text two-dimensionally, or may apply an animation effect in which the text emerges from the mouth of a person included in the image data.
- An animation effect provided with the text may be an effect of flying in from the left or right, or an effect of flying in from the top or bottom.
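Rendering the text itself as an image, with per-morpheme fonts, sizes, and colors, could be sketched with Pillow as follows; the font file and the styled morpheme list are hypothetical examples.

```python
# A sketch of converting text into an image, assuming Pillow; the font
# file and the styled morpheme list are hypothetical examples.
from PIL import Image, ImageDraw, ImageFont

def render_text_image(styled_morphemes, font_path="NanumGothic.ttf"):
    """Draw each morpheme with its own size and color, so the text is
    delivered as an image rather than as plain characters."""
    img = Image.new("RGBA", (512, 64), (0, 0, 0, 0))  # transparent canvas
    draw = ImageDraw.Draw(img)
    x = 0
    for text, size, color in styled_morphemes:
        font = ImageFont.truetype(font_path, size)
        draw.text((x, 0), text, font=font, fill=color)
        x += int(draw.textlength(text + " ", font=font))
    return img

# e.g. render_text_image([("배고파", 40, "red"), ("죽겠다", 28, "black")])
```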
- the content provider 165 may provide the converted image data and the text to be shared with other users through a chat room.
- the content provider 165 provides the generated composite message to be shared with other users through a chat room.
- The content provider 165 provides the composite message through a chat room in which a plurality of users can exchange messages.
- The size of the composite message may vary depending on the input image data, the text, or the extracted object.
- The content providing unit 165 may provide the animation effect included in the composite message only for a preset default time, for example, 15 minutes after it is first provided; after the 15 minutes, the composite message may be provided two-dimensionally without the animation effect.
- The generated composite message may be set as each user's profile picture; more specifically, it may function temporarily as a profile only in the corresponding chat room.
- The user terminal 100 may extract an object from the input text or image data to more visually express the emotion, mood, state, and the like that the user wishes to convey.
- the user terminal 100 according to the embodiments of the present invention may receive a user's selection of an object to be added to image data shared through a chat room.
- The user terminal 100 according to the embodiments of the present invention may share data other than text through a chat room, and in particular may convert data corresponding to a plurality of input data types into a single piece of content and provide it.
- For example, in response to the text 'I love you' input by the user, the user terminal 100 may select, from among a plurality of pre-stored objects, an object that further amplifies the feeling of 'I love you'.
- The user terminal 100 arranges the messages shared in the chat room in time series and generates them as a single piece of data, so that even after a long time has passed since each message was input, the objects or composite messages added to the image data can convey the different states (location, time, weather, emotion, etc.) at the time each message was entered. That is, the user terminal 100 may additionally convey, through an object or a composite message, the place where each user was located, the weather at the time each message was input, the time at which each message was input, and the like.
- FIG. 2 is a view showing the structure of a chat system according to embodiments of the present invention.
- the user may access the chat server 200 through the user terminals 100, 101, and 102.
- the user terminal 100 may download a chat application provided by the chat server 200.
- The user terminal 100 may transmit, to the chat server 200, usage information of at least one other application installed on it.
- The plurality of user terminals 100 refer to communication terminals capable of using a web service in a wired/wireless communication environment.
- the user terminal 100 may be a personal computer 101 of the user, or may be a portable terminal 102 of the user.
- the portable terminal 102 is illustrated as a smart phone, but the spirit of the present invention is not limited thereto.
- a terminal equipped with an application capable of web browsing may be used without limitation.
- The user terminal 100 may include a computer (e.g., a desktop, laptop, or tablet), a media computing platform (e.g., a cable or satellite set-top box, or a digital video recorder), a handheld computing device (e.g., a PDA or email client), any form of mobile phone, or any other type of computing or communication platform, but the present invention is not limited thereto.
- the chat server 200 may provide a general chat service.
- the chat server 200 may create and remove a chat room according to a request received from a user.
- The chat server 200 may receive a composite message generated by a first user terminal and provide it to the other user terminals included in the chat room.
- The communication network 300 connects the plurality of user terminals 100 and the chat server 200. That is, the communication network 300 provides a connection path through which the user terminals 100 can access the chat server 200 and exchange data with it.
- The communication network 300 may encompass, for example, wired networks such as local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), and integrated services digital networks (ISDNs), and wireless networks such as wireless LAN, CDMA, Bluetooth, and satellite communication, but the scope of the present invention is not limited thereto.
- FIG. 3 is a view for explaining a user interface of a chat room provided to a user terminal.
- The user interface of the chat room provided to the user terminal may include a first area A1 that presents the messages exchanged through the chat room, and a second area A2 that previews the text and image data acquired through the user terminal.
- The first area A1 is generally arranged in chronological order: when a new message is input, it is added at the bottom, and the message located at the top may thus move out of the display area.
- In the first area A1, messages may be provided along with the profile information of the user who input them.
- the user terminal 100 may display image data S1 captured by the camera in a part of the second area A2 instead of the user's profile information.
- the user terminal 100 may display the text S2 input by the user in the remaining area of the second area A2.
- Image data including the text and the extracted object may be provided as a separate region in the lowermost region S3 of the first area A1.
- A composite message combining the text data input by the user and the image data including the extracted object may be provided in the lowermost region S3 of the first area A1.
- A single message combining the image data, the text, and the object extracted through the image data and the text may be provided in one area, without distinguishing between the area displaying the image data and the area displaying the text.
- Alternatively, the area S3 in which the message is displayed may be provided as separate areas for the image data and the text.
- In this way, the whole area provided to the chat room can be represented by the data input by the user,
- and the sender of each message can be recognized more easily.
- FIGS. 4 through 8 are diagrams illustrating examples of image data converted by the message sharing method and of the generated composite messages.
- the input text may be included in a speech bubble connected to a person included in image data.
- FIG. 4 illustrates an example of extracting an object based on location information of a user terminal and generating image data or a composite message including the object.
- When the location of the user terminal is 'Paris', the image data may be converted as shown in FIG. 4(b),
- and a composite message may be generated as shown in FIG. 4(c).
- the object may be extracted based on the location information of the user terminal.
- the extracted object is represented together with the image data photographed at the present time.
- In the composite message, the text may be converted into an image and provided; as shown in FIG. 4(c), the composite message may further include the text represented in a speech bubble.
- FIG. 5 illustrates an example of extracting an object based on weather information determined by a user terminal and generating image data or a composite message including the object.
- When the weather information retrieved based on the location and current time of the user terminal is 'rain', image data may be generated to include a 'rainy background', as shown in FIG. 5(b), so that the obtained weather is recognized more easily.
- Likewise, a composite message may be generated to include the 'rainy background', as shown in FIG. 5(c), so that the acquired weather is recognized more easily.
- FIG. 6 illustrates an example of generating a composite message including an object related to the user's emotion extracted from the text input by the user. Image data or a composite message may be generated reflecting the 'hungry' text input by the user. Specifically, as shown in FIG. 6(b), the image data photographed according to the input text about hunger may further include an image or emoticon representing hunger around the person and face shape included in the image data.
- The user terminal 100 may analyze text input by the user, such as 'that's too much', and determine the user's emotion to be 'angry' through the context of the text.
- In this case, the image data or the composite message may include an image corresponding to 'angry' placed on the person's shape in the image data.
- The composite message may change how the input text is displayed; that is, the characters of the text may be rendered larger and bolder, and the shape of the speech bubble may be changed according to the user's emotion, as shown in FIG. 7(c).
- For example, the speech balloon may be given a jagged shape corresponding to the user's anger.
- In other words, in relation to the user's emotion acquired through the text or image data, the message sharing method may reflect an added image expressing the emotion, a font changed to correspond to the emotion, and an ancillary animation effect.
- The image data or the composite message may further include a tree image related to the season, and clothing may be added onto the user in the image data.
- A tuxedo may be overlaid in consideration of the entered text. That is, when the input text is an invitation to a date, the user's clothes may be changed to a corresponding tuxedo.
- FIG. 9 is a flowchart of a message sharing method according to embodiments of the present invention.
- Referring to FIG. 9, the message sharing method may include an image data receiving step (S110), a text receiving step (S120), a sensing value receiving step (S130), an object extracting step (S140), and an image data editing step (S150).
- In S110, the user terminal 100 receives image data photographed through the camera. According to the user's input, image data is captured for a preset time, for example, 2 seconds. In S120, the user terminal 100 receives the text input through the input unit. In S130, the user terminal 100 may receive sensing values such as the user's location, the surrounding weather, and the time through the sensor unit.
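The capture in S110 can be sketched with OpenCV as below: frames are collected for the preset time (2 seconds here) from the moment the send input is received.

```python
# A sketch of the S110 capture step, assuming OpenCV: record frames for
# a preset time (2 seconds) once the user triggers transmission.
import time
import cv2

def capture_image_data(duration_s: float = 2.0):
    """Collect camera frames for a preset duration and return them."""
    cap = cv2.VideoCapture(0)                    # default camera
    frames, start = [], time.monotonic()
    while time.monotonic() - start < duration_s:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```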
- In S140, the user terminal 100 analyzes the text and the image data, and extracts a first object related to the user using at least one of the result of analyzing the text, the result of analyzing the image data, and a sensing value obtained from the sensor unit. Since the operation of S140 is the same as that of the object extracting unit 163, a detailed description thereof is omitted.
- the user terminal 100 may edit the image data to further include the extracted object.
- the user terminal 100 may determine a display position of each object so that the extracted object is properly represented.
- For objects that should be displayed on a person's face, such as a shy image or an angry image, the user terminal 100 may determine the position of the extracted object in consideration of the human face in the image data.
- the user terminal 100 may edit the image data to display the extracted object on the background of the image data.
- the user terminal 100 edits the image data so that the first object is represented at the determined position.
- In addition, the user terminal 100 may convert the edited image data into a thumbnail.
- The thumbnail may be set as each user's profile. Converting to a thumbnail refers to changing the size of the edited image data, removing a part of the image data, or reducing the storage size of the image data.
- Referring to FIG. 10, a message sharing method may include an image data receiving step (S210), a text receiving step (S220), a sensing value receiving step (S230), an object extracting step (S240), an image data editing step (S250), and a composite message generation step (S260).
- In S210, the user terminal 100 receives image data photographed through the camera. According to the user's input, image data is captured for a preset time, for example, 2 seconds. In S220, the user terminal 100 receives the text input through the input unit. In S230, the user terminal 100 may receive sensing values such as the user's location, the surrounding weather, and the time through the sensor unit.
- In S240, the user terminal 100 analyzes the text and the image data, and extracts a first object related to the user using at least one of the result of analyzing the text, the result of analyzing the image data, and a sensing value obtained from the sensor unit. Since the operation of S240 is the same as that of the object extracting unit 163, a detailed description thereof is omitted.
- the user terminal 100 may edit the image data to further include the extracted object.
- the user terminal 100 may determine a display position of each object so that the extracted object is properly represented.
- For objects that should be displayed on a person's face, such as a shy image or an angry image, the user terminal 100 may determine the position of the extracted object in consideration of the human face in the image data.
- the user terminal 100 may edit the image data to display the extracted object on the background of the image data.
- the user terminal 100 edits the image data so that the first object is represented at the determined position.
- the user terminal 100 may convert the edited image data into thumbnails.
- the user terminal 100 may generate the converted image data and the text as one compound message.
- the user terminal 100 may convert the text into an image.
- the composite message may include the converted image data and the text converted into the image.
- The embodiments of the present invention described above may be implemented in the form of a computer program that can be executed through various components on a computer, and such a computer program may be recorded on a computer-readable medium.
- The media may include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- the medium may include an intangible medium implemented in a form that can be transmitted on a network.
- The medium may take the form of software or an application that can be transmitted and distributed through a network.
- the computer program may be specially designed and configured for the present invention, or may be known and available to those skilled in the computer software field.
- Examples of computer programs may include not only machine code generated by a compiler, but also high-level language code executable by a computer using an interpreter or the like.
- The connections or connection members of the lines between the components shown in the drawings exemplarily represent functional connections and/or physical or circuit connections; in an actual device, they may be implemented as various replaceable or additional functional, physical, or circuit connections.
- Unless specifically described with terms such as 'essential' or 'important', a component may not be a necessary component for the application of the present invention.
Claims (20)
- 1. A computer program stored in a computer-readable medium for executing a message sharing method using a computer including an image capturing unit, an input unit, a sensor unit, and a processor, the method comprising: receiving image data photographed through the image capturing unit; receiving text input by a user through the input unit; extracting, by the processor, a first object related to the user by analyzing the text and the image data and using at least one of a result of analyzing the text, a result of analyzing the image data, and a sensing value obtained from the sensor unit; editing the image data to further include the extracted first object, and converting the edited image data into a thumbnail format; and sharing the converted image data and the text with other users through a chat room.
- 2. The computer program stored in a computer-readable medium of claim 1, wherein the extracting of the first object related to the user comprises separating the text into morpheme units, extracting an adjective representing the user's emotion from the text, and extracting a first object related to the emotion the user has based on the adjective.
- 3. The computer program stored in a computer-readable medium of claim 1, wherein the extracting of the first object related to the user comprises analyzing the image data to extract a region corresponding to a human shape, determining the user's emotion through an edge included in the region, and extracting a first object related to the user's emotion.
- 4. The computer program stored in a computer-readable medium of claim 1, wherein the extracting of the first object related to the user comprises extracting a first object corresponding to the location or time of the user terminal in consideration of the location or time acquired through the sensor unit.
- 5. The computer program stored in a computer-readable medium of claim 1, wherein the converting of the image data comprises adding the extracted first object centered on a human face included in the image data using a face recognition algorithm, and adjusting the size of the image data centered on the human face.
- 6. The computer program stored in a computer-readable medium of claim 1, wherein the sharing comprises changing at least one of the font, font size, and text color of the text in consideration of the first object, and sharing the text with the changed font with other users through the chat room.
- 7. The computer program stored in a computer-readable medium of claim 1, wherein the sharing comprises generating a composite message combining the converted image data and the text, and sharing the composite message with other users through the chat room.
- 8. The computer program stored in a computer-readable medium of claim 7, wherein the sharing comprises: analyzing, by the processor, the text and the image data; converting the text into an image using at least one of a result of analyzing the text, a result of analyzing the image data, and a sensing value obtained from the sensor unit; generating a composite message combining the converted text and the converted image data; and sharing the composite message with other users through the chat room.
- 9. The computer program stored in a computer-readable medium of claim 1, wherein the sharing comprises extracting, as a second object, text obtained by analyzing the mouth shape of a person included in the image data, changing the image data to include the second object, and sharing the changed image data and the text with other users through the chat room.
- 10. The computer program stored in a computer-readable medium of claim 1, wherein the converted image data can be registered as profile information of each user.
- 11. A message sharing method of a computer including an image capturing unit, an input unit, and a processor, the method comprising: receiving image data photographed through the image capturing unit; receiving text input by a user through the input unit; extracting, by the processor, a first object related to the user by analyzing the text and the image data and using at least one of a result of analyzing the text, a result of analyzing the image data, and a sensing value obtained from the sensor unit; editing the image data to further include the extracted first object, and converting the edited image data into a thumbnail; and sharing the converted image data and the text with other users through a chat room.
- 12. The message sharing method of claim 11, wherein the extracting of the first object related to the user comprises separating the text into morpheme units, extracting an adjective representing the user's emotion from the text, and extracting a first object related to the emotion the user has based on the adjective.
- 13. The message sharing method of claim 11, wherein the extracting of the first object related to the user comprises analyzing the image data to extract a region corresponding to a human shape, determining the user's emotion through an edge included in the region, and extracting a first object related to the user's emotion.
- 14. The message sharing method of claim 11, wherein the extracting of the first object related to the user comprises extracting a first object corresponding to the location or time of the user terminal in consideration of the location or time acquired through the sensor unit.
- 15. The message sharing method of claim 11, wherein the converting of the image data comprises adding the extracted first object centered on a human face included in the image data using a face recognition algorithm, and adjusting the size of the image data centered on the human face.
- 16. The message sharing method of claim 11, wherein the sharing comprises changing at least one of the font, font size, and text color of the text in consideration of the first object, and sharing the text with the changed font with other users through the chat room.
- 17. The message sharing method of claim 11, wherein the sharing comprises generating a composite message combining the converted image data and the text, and sharing the composite message with other users through the chat room.
- 18. The message sharing method of claim 17, wherein the sharing comprises: analyzing, by the processor, the text and the image data; converting the text into an image using at least one of a result of analyzing the text, a result of analyzing the image data, and a sensing value obtained from the sensor unit; generating a composite message combining the converted text and the converted image data; and sharing the composite message with other users through the chat room.
- 19. The message sharing method of claim 11, wherein the sharing comprises extracting, as a second object, text obtained by analyzing the mouth shape of a person included in the image data, changing the image data to include the second object, and sharing the changed image data and the text with other users through the chat room.
- 20. The message sharing method of claim 11, wherein the converted image data can be registered as profile information of each user.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2016/009218 WO2018038277A1 (ko) | 2016-08-22 | 2016-08-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
JP2019510910A JP6727413B2 (ja) | 2016-08-22 | 2016-08-22 | Message sharing method and computer program |
CN201680088655.6A CN109716712A (zh) | 2016-08-22 | 2016-08-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
CN202211263165.9A CN115766636A (zh) | 2016-08-22 | 2016-08-22 | Message sharing method and computer-readable medium |
KR1020197004928A KR102165271B1 (ko) | 2016-08-22 | 2016-08-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
US16/283,052 US11025571B2 (en) | 2016-08-22 | 2019-02-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2016/009218 WO2018038277A1 (ko) | 2016-08-22 | 2016-08-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/283,052 Continuation US11025571B2 (en) | 2016-08-22 | 2019-02-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018038277A1 true WO2018038277A1 (ko) | 2018-03-01 |
Family
ID=61244989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/009218 WO2018038277A1 (ko) | 2016-08-22 | 2016-08-22 | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method |
Country Status (5)
Country | Link |
---|---|
US (1) | US11025571B2 (ko) |
JP (1) | JP6727413B2 (ko) |
KR (1) | KR102165271B1 (ko) |
CN (2) | CN109716712A (ko) |
WO (1) | WO2018038277A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102116309B1 (ko) * | 2018-12-17 | 2020-05-28 | 주식회사 인공지능연구원 | Synchronized animation output system for virtual characters and text |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10271079B1 (en) | 2015-10-20 | 2019-04-23 | Halogen Networks, LLC | Live video streaming system and method |
US10203855B2 (en) * | 2016-12-09 | 2019-02-12 | Snap Inc. | Customized user-controlled media overlays |
KR102616403B1 (ko) * | 2016-12-27 | 2023-12-21 | 삼성전자주식회사 | Electronic device and message delivery method thereof |
KR102448382B1 (ko) * | 2018-01-22 | 2022-09-28 | 삼성전자주식회사 | Electronic device for providing an image associated with text and operation method thereof |
JP7013929B2 (ja) * | 2018-02-23 | 2022-02-01 | 富士フイルムビジネスイノベーション株式会社 | Information processing apparatus and program |
US10616666B1 (en) * | 2018-02-27 | 2020-04-07 | Halogen Networks, LLC | Interactive sentiment-detecting video streaming system and method |
US10929155B2 (en) * | 2018-05-11 | 2021-02-23 | Slack Technologies, Inc. | System, method, and apparatus for building and rendering a message user interface in a group-based communication system |
US10891969B2 (en) * | 2018-10-19 | 2021-01-12 | Microsoft Technology Licensing, Llc | Transforming audio content into images |
KR20200134544A (ko) * | 2019-05-22 | 2020-12-02 | 라인플러스 주식회사 | Method, system, and non-transitory computer-readable recording medium for protecting content copyright in a chat room |
KR20210041757A (ko) | 2019-10-08 | 2021-04-16 | 삼성전자주식회사 | Electronic device and control method thereof |
US11750546B2 (en) | 2019-12-31 | 2023-09-05 | Snap Inc. | Providing post-capture media overlays for post-capture processing in a messaging system |
US11164353B2 (en) | 2019-12-31 | 2021-11-02 | Snap Inc. | Layering of post-capture processing in a messaging system |
US11695718B2 (en) | 2019-12-31 | 2023-07-04 | Snap Inc. | Post-capture processing in a messaging system |
US11237702B2 (en) | 2019-12-31 | 2022-02-01 | Snap Inc. | Carousel interface for post-capture processing in a messaging system |
CN111464827A (zh) * | 2020-04-20 | 2020-07-28 | 玉环智寻信息技术有限公司 | Data processing method, apparatus, computing device, and storage medium |
KR20210130583A (ko) * | 2020-04-22 | 2021-11-01 | 라인플러스 주식회사 | Method and system for sharing content through an instant messaging application |
KR20210144443A (ko) | 2020-05-22 | 2021-11-30 | 삼성전자주식회사 | Text output method in an artificial intelligence virtual assistant service and electronic device supporting same |
KR102619836B1 (ko) * | 2021-01-04 | 2023-12-29 | 주식회사 카카오 | Speech bubble placement technique |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060121679A (ko) * | 2005-05-25 | 2006-11-29 | 오끼 덴끼 고오교 가부시끼가이샤 | Image synthesizing apparatus, communication terminal and image communication system using the apparatus, and chat server in the system |
KR100868638B1 (ko) * | 2007-08-07 | 2008-11-12 | 에스케이 텔레콤주식회사 | System and method for providing video call speech bubbles |
KR20090129580A (ko) * | 2008-06-13 | 2009-12-17 | (주)티아이스퀘어 | Method and apparatus for transmitting multimedia content composite messages using voice/character recognition |
KR20140132977A (ko) * | 2013-05-09 | 2014-11-19 | 에스케이플래닛 주식회사 | Method for displaying photo data in consideration of location information, and apparatus and system therefor |
US20150281145A1 (en) * | 2012-10-22 | 2015-10-01 | Daum Kakao Corp. | Device and method for displaying image in chatting area and server for managing chatting data |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001245269A (ja) * | 2000-02-25 | 2001-09-07 | Sony Corp | Communication data creation apparatus and creation method, communication data reproduction apparatus and reproduction method, and program storage medium |
US7574016B2 (en) * | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
EP1667031A3 (en) * | 2004-12-02 | 2009-01-14 | NEC Corporation | HTML-e-mail creation system |
JP4218637B2 (ja) * | 2004-12-28 | 2009-02-04 | 沖電気工업株式会社 | Information terminal device |
US20070160004A1 (en) | 2006-01-10 | 2007-07-12 | Ketul Sakhpara | Local Radio Group |
JP4775066B2 (ja) * | 2006-03-28 | 2011-09-21 | カシオ計算機株式会社 | Image processing apparatus |
JP4559459B2 (ja) * | 2006-09-12 | 2010-10-06 | 三星電子株式会社 | Mobile apparatus operable to communicate in a mobile ad hoc network, method for establishing a data exchange session between the apparatuses, and computer-readable medium |
US8416981B2 (en) * | 2007-07-29 | 2013-04-09 | Google Inc. | System and method for displaying contextual supplemental content based on image content |
US20090142001A1 (en) | 2007-11-30 | 2009-06-04 | Sanyo Electric Co., Ltd. | Image composing apparatus |
JP2009135720A (ja) * | 2007-11-30 | 2009-06-18 | Sanyo Electric Co Ltd | Image composing apparatus |
KR101678434B1 (ko) * | 2010-04-02 | 2016-12-06 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
JP5353835B2 (ja) * | 2010-06-28 | 2013-11-27 | ブラザー工業株式会社 | Information processing program and information processing apparatus |
US20130120429A1 (en) * | 2011-11-16 | 2013-05-16 | Nickolas S. Sukup | Method of representing emotion in a text message |
KR20140094878A (ко) * | 2013-01-23 | 2014-07-31 | 삼성전자주식회사 | User terminal and image processing method using user recognition in the user terminal |
KR102130796B1 (ko) * | 2013-05-20 | 2020-07-03 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
JP2015028686A (ja) * | 2013-07-30 | 2015-02-12 | カシオ計算機株式会社 | Method for creating a social timeline, social network service system, server, terminal, and program |
CN103533241B (zh) * | 2013-10-14 | 2017-05-10 | 厦门美图网科技有限公司 | Photographing method using an intelligent filter |
KR102306538B1 (ко) * | 2015-01-20 | 2021-09-29 | 삼성전자주식회사 | Content editing apparatus and method |
JP6152151B2 (ja) * | 2015-09-18 | 2017-06-21 | ヤフー株式会社 | Information providing apparatus, information providing method, and information providing program |
CN105228013B (zh) * | 2015-09-28 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | Bullet-screen comment information processing method and apparatus, and bullet-screen video player |
-
2016
- 2016-08-22 WO PCT/KR2016/009218 patent/WO2018038277A1/ko active Application Filing
- 2016-08-22 CN CN201680088655.6A patent/CN109716712A/zh active Pending
- 2016-08-22 CN CN202211263165.9A patent/CN115766636A/zh active Pending
- 2016-08-22 JP JP2019510910A patent/JP6727413B2/ja active Active
- 2016-08-22 KR KR1020197004928A patent/KR102165271B1/ko active IP Right Grant
-
2019
- 2019-02-22 US US16/283,052 patent/US11025571B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109716712A (zh) | 2019-05-03 |
JP2019526861A (ja) | 2019-09-19 |
US20190190865A1 (en) | 2019-06-20 |
JP6727413B2 (ja) | 2020-07-22 |
US11025571B2 (en) | 2021-06-01 |
CN115766636A (zh) | 2023-03-07 |
KR20190026927A (ko) | 2019-03-13 |
KR102165271B1 (ko) | 2020-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018038277A1 (ко) | Message sharing method for sharing image data reflecting status of each user via chat room and computer program for executing same method | |
US20200402304A1 (en) | Electronic device and method for managing custom object on basis of avatar | |
WO2019156332A1 (ко) | Apparatus for producing an artificial intelligence character for augmented reality and service system using the same | |
WO2017142278A1 (en) | Apparatus and method for providing dynamic panorama function | |
WO2020130281A1 (en) | Electronic device and method for providing avatar based on emotion state of user | |
CN110968736A (zh) | 视频生成方法、装置、电子设备及存储介质 | |
WO2018174314A1 (ko) | 스토리영상 제작 방법 및 시스템 | |
WO2020153785A1 (ko) | 전자 장치 및 이를 이용한 감정 정보에 대응하는 그래픽 오브젝트를 제공하는 방법 | |
WO2019125060A1 (ko) | 전화번호 연관 정보를 제공하기 위한 전자 장치 및 그의 동작 방법 | |
WO2022154270A1 (ko) | 요약 영상 생성 방법 및 그 전자 장치 | |
WO2015126097A1 (en) | Interactive server and method for controlling the server | |
CN110379406B (zh) | Voice comment conversion method, system, medium, and electronic device | |
CN111950255A (zh) | Poetry generation method, apparatus, device, and storage medium | |
WO2021085812A1 (ко) | Electronic device and control method therefor | |
WO2018174311A1 (ко) | Method and system for providing dynamic content for a face recognition camera | |
WO2017116015A1 (ко) | Method and system for automatically generating content based on content recognition technology | |
WO2021149930A1 (en) | Electronic device and story generation method thereof | |
WO2022211509A1 (ko) | 컨텐츠 입력에 기초하여 스티커를 제공하는 전자 장치 및 방법 | |
WO2020045909A1 (en) | Apparatus and method for user interface framework for multi-selection and operation of non-consecutive segmented information | |
WO2012057561A2 (ko) | 인스턴트 메신저 서비스 제공시스템 및 그 제공방법, 및 통신 단말기 및 그 통신방법 | |
CN108255917B (zh) | 图像管理方法、设备及电子设备 | |
WO2019216484A1 (ko) | 전자 장치 및 그 동작방법 | |
JPWO2017051577A1 (ja) | 感情誘導システム、および感情誘導方法 | |
JP6707715B2 (ja) | 学習装置、推定装置、学習方法及びプログラム | |
WO2018169276A1 (ko) | 언어 정보를 처리하기 위한 방법 및 그 전자 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16914256 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20197004928 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019510910 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.06.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16914256 Country of ref document: EP Kind code of ref document: A1 |