KR101800847B1 - Method for providing interactive multimedia contents - Google Patents

Method for providing interactive multimedia contents Download PDF

Info

Publication number
KR101800847B1
KR101800847B1
Authority
KR
South Korea
Prior art keywords
user
area
matching
client terminal
providing server
Prior art date
Application number
KR1020150188431A
Other languages
Korean (ko)
Other versions
KR20170078173A (en)
Inventor
김윤호
Original Assignee
김윤호
Priority date
Filing date
Publication date
Application filed by 김윤호
Priority to KR1020150188431A
Publication of KR20170078173A
Application granted
Publication of KR101800847B1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25816Management of client data involving client authentication
    • G06K9/00013
    • G06K9/00067
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user


Abstract

A method for providing interactive multimedia contents according to the present invention includes: receiving, by a client terminal, information on a plurality of images having a hollow interior from a content providing server; receiving, by the content providing server from the client terminal of a first user, information on an image selected by the first user from among the plurality of images and on a first boundary line dividing the interior of the selected image into a first area and a second area; receiving, from the client terminal of the first user, a first result value obtained by the first user performing a touch drag or fingerprint recognition inside the selected image output to the client terminal; a first matching step in which the content providing server matches the first result value to either the first area or the second area based on the position and range of the touch drag or fingerprint recognition performed by the first user; and a first deletion step in which the content providing server deletes the portion of the first result value that falls outside the area matched in the first matching step.

Description

METHOD FOR PROVIDING INTERACTIVE MULTIMEDIA CONTENTS

The present invention relates to a method for providing interactive multimedia contents. More specifically, it relates to a method in which a plurality of users designate specific areas on an image of a specific shape output to a client terminal, and a predetermined function is assigned to a designated area, thereby fostering mutual understanding between users and practical use of the function.

BACKGROUND ART

With the recent development of the Internet and various communication devices, the services provided through online networks such as the Internet have become increasingly diverse. Users who access the Internet can freely enjoy a wide range of services over the online network, such as data search, stock trading, banking, games, chat, music, movies, and dramas.

However, such services have a limitation: the client terminal merely consumes content transmitted unilaterally from a specific server, and therefore cannot satisfy users' needs. Accordingly, there is a need for a method of providing interactive multimedia contents that can generate interest and practical value by letting users directly create content that encourages participation and builds sympathy between users. As related prior art, Korean Patent Publication No. 10-2010-0062070 exists.

It is an object of the present invention to provide a method for providing interactive multimedia contents that induces users' participation and builds sympathy between users, thereby generating interest and practical value.

According to an aspect of the present invention, there is provided a method for providing interactive multimedia contents using a multimedia content generation system having a client terminal capable of accessing an online network, a content providing server for providing predetermined multimedia contents to the client terminal, and a database server for storing the multimedia contents, the method comprising: receiving, by the client terminal, information on a plurality of images having a hollow interior from the content providing server; receiving, by the content providing server from the client terminal of a first user, information on an image selected by the first user from among the plurality of images and on a first boundary line dividing the interior of the selected image into a first area and a second area; receiving, from the client terminal of the first user, a first result value obtained by the first user performing a touch drag or fingerprint recognition inside the selected image output to the client terminal; a first matching step in which the content providing server matches the first result value to either the first area or the second area based on the position and range of the touch drag or fingerprint recognition performed by the first user; and a first deletion step in which the content providing server deletes the portion of the first result value that falls outside the area matched in the first matching step.

After the first deletion step, the following steps may further be performed: receiving, by the content providing server from the client terminal of a second user, information on a second boundary line that divides the first matching area into a 1-1 matching area and a 1-2 matching area and divides the interior of the image selected by the first user into a third area and a fourth area; receiving, from the client terminal of the second user, a second result value obtained by the second user performing a touch drag or fingerprint recognition inside the image selected by the first user, output to the client terminal of the second user; a second matching step in which the content providing server matches the second result value to either the third area or the fourth area based on the position and range of the touch drag or fingerprint recognition performed by the second user; and a second deletion step in which the content providing server deletes the portion of the second result value that falls outside the area matched in the second matching step. Here, the third area consists only of an area corresponding to the first matching area, and the fourth area includes both an area corresponding to the first matching area and an area outside it.

In this case, the client terminal of the first user and the client terminal of the second user may be the same terminal.

In this case, after the second deletion step, the content providing server may further perform an extraction step of extracting an intersection area where a first user-designated area, which is the area remaining after deleting the portion outside the first matching area matched in the first matching step, overlaps a second user-designated area, which is the area remaining after deleting the portion outside the second matching area matched in the second matching step.

In this case, after the second deletion step, the content providing server may further perform an extraction step of extracting, from within the image selected by the first user, a complementary area that is included in neither the first user-designated area, which is the area remaining after deleting the portion outside the first matching area matched in the first matching step, nor the second user-designated area, which is the area remaining after deleting the portion outside the second matching area matched in the second matching step.

In this case, after the second deletion step, the content providing server may further perform an extraction step of extracting both the intersection area where the first user-designated area and the second user-designated area overlap, and the complementary area within the image selected by the first user that is included in neither of those areas, where the first user-designated area is the area remaining after deleting the portion outside the first matching area matched in the first matching step and the second user-designated area is the area remaining after deleting the portion outside the second matching area matched in the second matching step.

In this case, in the step of receiving, from the client terminal of the first user, the first result value obtained by the first user performing the touch drag or fingerprint recognition inside the selected image output to the client terminal, the first result value may vary according to the time taken by the touch drag or fingerprint recognition, or according to the strength of the force applied to the client terminal.
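As a rough illustration of a result value that varies with input duration and pressure, the sketch below computes a per-stroke intensity. The weighting formula, the 2-second saturation point, and the function name are assumptions for illustration; the text above only states that the value varies with these inputs.

```python
def stroke_weight(duration_s, pressure):
    """Hypothetical per-stroke intensity in [0, 1], growing with how long
    the touch drag lasted and how hard the user pressed (pressure in [0, 1])."""
    base = min(duration_s / 2.0, 1.0)  # assumed saturation after 2 seconds
    return round(min(base * pressure, 1.0), 3)

# Longer or firmer strokes yield a larger first result value.
light_stroke = stroke_weight(1.0, 0.5)
long_press = stroke_weight(4.0, 1.0)
```

Any monotone function of duration and pressure would serve equally well here; the point is only that the stored result value is not a bare set of coordinates but carries an intensity per sample.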

At this time, after the extraction step, an effect providing step may further be performed in which the content providing server applies, to the intersection area or the complementary area, either a predetermined effect or a setting effect provided from the client terminal of the first user and/or the second user.

In this case, after the extraction step, an effect providing step may further be performed in which the content providing server applies, to any one of the first, second, third, fourth, 1-1, or 1-2 areas, or the first or second matching areas, either a predetermined effect or a setting effect provided from the client terminal of the first user and/or the second user.

In this case, after the effect providing step, an effect operation step may further be performed in which, when the first user and/or the second user outputs the image selected by the first user to the client terminal and touches or clicks the intersection area, in which the fingerprint of the first user and the fingerprint of the second user overlap, the content providing server provides the predetermined effect, or the setting effect provided from the client terminal of the first user and/or the second user, to the client terminal of the first user and/or the second user.

At this time, in the effect operation step, when the first user and/or the second user outputs the image selected by the first user to the client terminal, the content providing server may output a different effect, a different color, or a different color density to each of: a fifth area, which is the first matching area excluding the intersection area; a sixth area, which is the second matching area excluding the intersection area within the image selected by the first user; and the intersection area.

At this time, the predetermined effect, or the setting effect provided from the client terminal of the first user and/or the second user, may be any one of: today's fortune, the compatibility of the first user and the second user, a specific voice, a specific image, an advertisement, a chat window between the first user and the second user, and a voice or video call connection between the first user and the second user.

At this time, in the step of receiving information on a plurality of images having a hollow interior from the content providing server, the information on the plurality of images may be an image of any one of a heart shape, a star shape, a human shape, an animal shape, and the shape of an object.

At this time, between the step of receiving, from the client terminal of the first user, information on the image selected by the first user from among the plurality of images and on the first boundary line dividing the interior of the selected image into the first area and the second area, and the subsequent receiving step, a step may further be performed in which the content providing server receives, from the client terminal of the first user, information on the type, color, and thickness of the outline of the image selected by the first user from among the plurality of images.

In this case, the client terminal may be any one selected from a smart phone, a feature phone, a tablet PC, an IPTV, or a smart TV.

In this case, the content providing server may provide the multimedia contents in cooperation with a social network service (SNS).

According to another aspect of the present invention, there is provided a computer-readable recording medium recording a program for performing the method for providing interactive multimedia contents according to the present invention.

According to the present invention, it is possible to generate interest and provide practical value by letting users directly create content that induces participation and builds sympathy between users.

FIG. 1 is a schematic diagram of a multimedia content generation system for performing an interactive multimedia content providing method according to the present invention.
FIGS. 2 to 4 are flowcharts illustrating a method of providing interactive multimedia contents according to the present invention.
FIGS. 5 to 8 are views showing an embodiment of a method for providing interactive multimedia contents according to the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail.

It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, terms such as "comprises" or "having" are used to specify the presence of the features, numbers, steps, operations, elements, components, or combinations thereof described in the specification, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same reference numerals will be used for the same constituent elements in the drawings, and redundant explanations for the same constituent elements will be omitted.

FIG. 1 is a schematic diagram of a multimedia content generation system for performing an interactive multimedia content providing method according to the present invention.

Referring to FIG. 1, a multimedia content generation system 1000 includes a client terminal 400 capable of accessing an online network, a content providing server 100 providing predetermined multimedia contents to the client terminal, and a database server 200 storing the multimedia contents. Depending on the embodiment, however, the content providing server 100 and the database server 200 may be implemented as a single server; the present invention is therefore not limited to a configuration in which each server is configured independently.

The client terminal 400 may be any one of a smart phone, a feature phone, a tablet PC, an IPTV, or a smart TV.

Also, the content providing server 100 may provide the multimedia contents in cooperation with a social network service (SNS).

FIGS. 2 to 4 are flowcharts illustrating a method of providing interactive multimedia contents according to the present invention. FIGS. 5 to 8 are views showing an embodiment of a method for providing interactive multimedia contents according to the present invention.

Referring to FIGS. 2 to 8, a method for providing interactive multimedia contents according to the present invention includes: a step (S100) of receiving, by the client terminal, information on a plurality of images having a hollow interior from the content providing server; a step (S110) of receiving, by the content providing server from the client terminal of the first user, information on the image selected by the first user from among the plurality of images and on the first boundary line dividing the interior of the selected image into the first area and the second area; a step (S120) of receiving, from the client terminal of the first user, the first result value obtained by the first user performing a touch drag or fingerprint recognition inside the selected image output to the client terminal; a first matching step (S130) in which the content providing server matches the first result value to either the first area or the second area based on the position and range of the touch drag or fingerprint recognition performed by the first user; and a first deletion step (S140) in which the content providing server deletes the portion of the first result value that falls outside the area matched in the first matching step.

In step S100, the information on the plurality of images may be an image of any one of a heart shape, a star shape, a human shape, an animal shape, and the shape of an object. In step S110, the first user selects a specific image. Between step S110 and step S120, a step may further be performed in which the content providing server receives, from the client terminal of the first user, information on the type, color, and thickness of the outline of the image selected by the first user from among the plurality of images.

Referring to FIG. 5, the first user selects the heart-shaped image 10 and can set the type, color, and thickness of the outline of the heart-shaped image 10.

In addition, the user can select the first boundary line 20, as shown in FIG. 5.

In step S120, the first user may perform a touch drag or fingerprint recognition on the heart-shaped image output to the client terminal. In step S130, the first result value is matched to either the first area 30 or the second area 40, as shown in FIG. 5.

As shown in FIG. 5, since the first user has input more of the first result value in the first area 30, the first result value is matched to the first area 30. In step S140, the portion 11 that falls outside the first area 30 is deleted. That is, the result input into the second area 40 is deleted because that area belongs to the second user.
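The matching (S130) and deletion (S140) steps above can be sketched as follows, modeling the stroke as a list of touch points and the boundary as a predicate. The point-list model and the function name are assumptions for illustration; the patent does not specify a representation.

```python
def match_and_delete(points, in_first_area):
    """points: list of (x, y) touch samples.
    in_first_area: predicate telling which side of the first boundary
    line a point falls on (representation assumed for illustration)."""
    first = [p for p in points if in_first_area(p)]
    second = [p for p in points if not in_first_area(p)]
    # S130: match the result value to the area holding more of the input.
    matched = "first" if len(first) >= len(second) else "second"
    # S140: delete the portion falling outside the matched area.
    kept = first if matched == "first" else second
    return matched, kept

# Toy boundary: points with x < 3 belong to the first area.
area, kept = match_and_delete([(1, 1), (2, 1), (5, 5)], lambda p: p[0] < 3)
```

Here two of the three samples fall in the first area, so the stroke is matched to it and the stray point (5, 5) is discarded, mirroring how the portion 11 outside area 30 is deleted.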

After step S140, the following steps are performed: a step (S150) of receiving, by the content providing server from the client terminal of the second user, information on the second boundary line that divides the first matching area into the 1-1 matching area and the 1-2 matching area and divides the image selected by the first user into the third area and the fourth area; a step (S160) of receiving, from the client terminal of the second user, the second result value obtained by the second user performing a touch drag or fingerprint recognition inside the image selected by the first user, output to the client terminal of the second user; a second matching step (S170) in which the content providing server matches the second result value to either the third area or the fourth area based on the position and range of the touch drag or fingerprint recognition performed by the second user; and a second deletion step (S180) in which the content providing server deletes the portion of the second result value that falls outside the area matched in the second matching step.

It should be understood that the second user is not necessarily limited to a person but may be, for example, a companion animal. Here, the second boundary line 12 divides a part of the first matching area 30 into the 1-1 matching area and the 1-2 matching area, and is a boundary line dividing the image selected by the first user into the third area and the fourth area.

That is, the second user is prevented from selecting the second boundary line within the second area. More specifically, if the second user were to select the second boundary line in an area other than the first matching area, no intersection between the matching areas of the first user and the second user would be generated.
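This constraint can be expressed as a simple validity check: the second boundary line is accepted only if it lies wholly within the first matching area. Modeling an area as a set of pixel coordinates, and the function name itself, are assumptions for illustration.

```python
def boundary_is_valid(boundary_points, first_matching_area):
    """Accept the second boundary line only when every point of it
    falls inside the first matching area (modeled as a coordinate set)."""
    return all(p in first_matching_area for p in boundary_points)

# Toy first matching area: a horizontal strip of pixels at y == 0.
first_matching_area = {(x, 0) for x in range(10)}
ok = boundary_is_valid([(2, 0), (3, 0)], first_matching_area)   # inside
bad = boundary_is_valid([(2, 0), (3, 1)], first_matching_area)  # (3, 1) outside
```

A boundary drawn outside the first matching area is rejected, which is exactly what guarantees that an intersection between the two users' matching areas can arise later.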

Referring to FIG. 6, in step S170, the value input by the second user is matched to either the third area or the fourth area, in consideration of the second boundary line and the position and range of the touch drag or fingerprint recognition.

Here, the third area consists only of an area corresponding to the first matching area, whereas the fourth area includes both an area corresponding to the first matching area and an area outside it. In FIG. 6, the third area corresponds to reference numeral 70 and the fourth area to reference numeral 60; in this example, the second matching area is the fourth area.

In addition, the portion of the second result value that falls outside the second matching area is deleted in step S180.

After step S180, a step (S190) is performed of extracting the intersection area where the first user-designated area, which is the area remaining after deleting the portion outside the first matching area matched in the first matching step, overlaps the second user-designated area, which is the area remaining after deleting the portion outside the second matching area matched in the second matching step. Referring to FIG. 7, the intersection area corresponds to reference numeral 80.
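The extraction step (S190) reduces to a set intersection when each user-designated area is modeled as the set of pixel coordinates remaining after its deletion step; the set-of-pixels model is an assumption for illustration.

```python
def extract_intersection(first_user_area, second_user_area):
    """S190: the intersection area is the overlap of the two
    user-designated areas left after the deletion steps."""
    return first_user_area & second_user_area

# Toy areas: two overlapping 5x5 blocks of pixels.
first_user_area = {(x, y) for x in range(0, 5) for y in range(0, 5)}
second_user_area = {(x, y) for x in range(3, 8) for y in range(3, 8)}
overlap = extract_intersection(first_user_area, second_user_area)
```

The complementary-area variant described earlier is the same computation with set difference: pixels of the selected image belonging to neither user-designated area.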

According to an embodiment, after step S190, an effect providing step (S200) may further be performed in which a predetermined effect, or a setting effect provided from the client terminal of the first user and/or the second user, is applied to the intersection area. After step S200, an effect operation step (S210) may further be performed in which, when the first user and/or the second user outputs the image selected by the first user to the client terminal and touches or clicks the intersection area, in which the fingerprint of the first user and the fingerprint of the second user overlap, the content providing server provides the predetermined effect or the setting effect to the client terminal of the first user and/or the second user.

More specifically, in step S210, when the first user and/or the second user outputs the image selected by the first user to the client terminal, the content providing server may output a different effect, a different color, or a different color density to each of: a fifth area, which is the first matching area excluding the intersection area; a sixth area, which is the second matching area excluding the intersection area within the image selected by the first user; and the intersection area.

Also, the predetermined effect, or the setting effect provided from the client terminal of the first user and/or the second user, may be any one of: today's fortune, the compatibility of the first user and the second user, a specific voice, a specific image, an advertisement, a chat window between the first user and the second user, and a voice or video call connection between the first user and the second user.

The functional operations described herein and the embodiments of the present subject matter may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and their structural equivalents, or in a combination of one or more of them.

Embodiments of the subject matter described herein may be implemented as one or more computer program products, in other words, as one or more modules of computer program instructions encoded on a tangible program medium for execution by, or to control the operation of, a data processing apparatus. The tangible program medium may be a propagated signal or a computer-readable medium. A propagated signal is an artificially generated signal, such as a machine-generated electrical, optical, or electromagnetic signal, generated to encode information for transmission to a suitable receiver apparatus for execution by a computer. The computer-readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of these.

A computer program (also known as a program, software, software application, script, or code) may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.

A computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files storing one or more modules, subprograms, or portions of code).

A computer program may be deployed to run on one computer or on multiple computers, located at a single site or distributed across multiple sites and interconnected by a communications network.

Additionally, the logic flows and structural block diagrams described in this patent document, which describe particular methods and/or corresponding operations supported by the disclosed structural means, may also be used to build corresponding software structures and algorithms, and their equivalents.

The processes and logic flows described herein may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.

A processor suitable for the execution of a computer program (for example, within the content providing server 100, the database server 200, or the client terminal 400) includes, by way of example, both general purpose and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from read-only memory, random access memory, or both.

The core elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. In addition, a computer will generally be operatively coupled to one or more mass storage devices for storing data, such as magnetic or magneto-optical disks, to receive data from them, transfer data to them, or both. However, a computer need not have such devices.

The description sets forth the best mode of the invention and is provided to illustrate the invention and to enable those skilled in the art to make and use it. The written description is not intended to limit the invention to the specific terminology presented.

Thus, while the present invention has been described in detail with reference to the above examples, those skilled in the art may make adaptations, modifications, and variations to these examples without departing from the scope of the present invention. In short, to achieve the intended effects of the present invention, it is not necessary that all of the functional blocks shown in FIG. 1 be included separately, or that the sequences shown in FIGS. 2 to 4 be followed in the order shown; even configurations that differ in these respects may fall within the technical scope of the present invention set forth in the claims.

1000: Multimedia content creation system
100: Content providing server
200: Database server
300: Online network
400: client terminal

Claims (17)

delete

A method for providing multimedia content using a multimedia content generation system comprising a client terminal capable of accessing an online network, a content providing server for providing predetermined multimedia content to the client terminal, and a database server for storing the multimedia content, the method comprising:
receiving, by the client terminal, information on a plurality of images each having a hollow interior from the content providing server;
receiving, by the content providing server, information on a first boundary line that divides the interior of an image selected by a first user from among the plurality of images into a first area and a second area;
receiving, from the client terminal of the first user, a first result value obtained by the first user performing a touch-drag or fingerprint recognition inside the selected image output to the client terminal;
a first matching step in which the content providing server matches the first result value to either the first area or the second area, based on the position and range of the touch-drag or fingerprint recognition performed by the first user; and
a first deletion step in which the content providing server deletes the portion of the first result value falling outside the first matching area matched in the first matching step,
the method further comprising, after the first deletion step:
receiving, from a client terminal of a second user, information on a second boundary line that divides the first matching area into a 1-1 area and a 1-2 area, thereby dividing the image selected by the first user into a third area and a fourth area;
receiving, from the client terminal of the second user, a second result value obtained by the second user performing a touch-drag or fingerprint recognition inside the image selected by the first user as output to the client terminal of the second user;
a second matching step in which the content providing server matches the second result value to either the third area or the fourth area, based on the position and range of the touch-drag or fingerprint recognition performed by the second user; and
a second deletion step in which the content providing server deletes the portion of the second result value falling outside the second matching area matched in the second matching step,
wherein the third area consists only of an area corresponding to the 1-1 area, and
wherein the fourth area includes both the 1-2 area and the area outside the first matching area.
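Read as an algorithm, the matching and deletion steps of claim 2 amount to assigning a user's stroke to whichever area it predominantly covers and clipping it to that area. A minimal Python sketch (the patent specifies no implementation; function and variable names here are hypothetical), with areas and strokes modeled as sets of pixel coordinates:

```python
def match_and_clip(stroke, area_a, area_b):
    """Match a user's stroke (a set of pixel coordinates) to whichever of the
    two areas it overlaps more, then delete the portion of the stroke that
    falls outside the matched area."""
    matched = area_a if len(stroke & area_a) >= len(stroke & area_b) else area_b
    return stroke & matched  # pixels outside the matched area are deleted

# Toy 1-D "image": area_a is pixels 0-4, area_b is pixels 5-9.
area_a = set(range(0, 5))
area_b = set(range(5, 10))
stroke = set(range(3, 8))        # the user drags across the boundary line
clipped = match_and_clip(stroke, area_a, area_b)
# The stroke overlaps area_b on 3 pixels vs. 2 for area_a, so it is matched
# to area_b and clipped to {5, 6, 7}.
```

The same function serves for both the first and second matching/deletion steps; only the area pair changes.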
delete

The method of claim 2,
further comprising, after the second deletion step,
extracting, by the content providing server, an intersection area where a first user-designated area, which is the area remaining after the portion outside the first matching area matched in the first matching step has been deleted, and a second user-designated area, which is the area remaining after the portion outside the second matching area matched in the second matching step has been deleted, overlap each other.
The method of claim 2,
further comprising, after the second deletion step,
extracting, by the content providing server, from the image selected by the first user, a complement area that is included in neither the first user-designated area, which is the area remaining after the portion outside the first matching area matched in the first matching step has been deleted, nor the second user-designated area, which is the area remaining after the portion outside the second matching area matched in the second matching step has been deleted.
[Claim 6 is abandoned due to the registration fee.] The method of claim 2,
further comprising, after the second deletion step,
extracting, by the content providing server, an intersection area where the first user-designated area, which is the area remaining after the portion outside the first matching area matched in the first matching step has been deleted, and the second user-designated area, which is the area remaining after the portion outside the second matching area matched in the second matching step has been deleted, overlap each other; and
extracting, from the image selected by the first user, a complement area that is included in neither the first user-designated area nor the second user-designated area.
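The extraction steps of claims 4 to 6 reduce to plain set operations once the user-designated areas are pixel sets: the intersection area is their overlap, and the complement area is everything in the selected image outside both. A hypothetical sketch:

```python
def intersection_area(first_user_area, second_user_area):
    """Pixels designated by both users (the overlap of claim 4)."""
    return first_user_area & second_user_area

def complement_area(image_pixels, first_user_area, second_user_area):
    """Pixels of the selected image belonging to neither user's
    designated area (the complement of claim 5)."""
    return image_pixels - (first_user_area | second_user_area)

image = set(range(10))
first = {2, 3, 4, 5}
second = {4, 5, 6}
# Both extractions of claim 6, computed from the same two designated areas:
overlap = intersection_area(first, second)          # {4, 5}
leftover = complement_area(image, first, second)    # {0, 1, 7, 8, 9}
```

Claim 6 simply performs both extractions over the same pair of user-designated areas.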
The method of claim 2,
wherein, in the step of receiving, from the client terminal of the first user, the first result value obtained by the first user performing a touch-drag or fingerprint recognition inside the selected image output to the client terminal,
the first result value varies according to the time taken for the touch-drag or fingerprint recognition, or the strength of the force applied to the client terminal.
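Claim 7's time- and pressure-dependent result value can be sketched as a per-stroke weight, for example controlling the opacity of the mark. The 2-second saturation time and the equal weighting of duration and pressure below are illustrative assumptions, not taken from the patent:

```python
def stroke_weight(duration_s, pressure, max_pressure=1.0):
    """Hypothetical weighting of a stroke's result value: a longer or harder
    touch-drag / fingerprint input yields a stronger mark, clamped to [0, 1].
    Assumes the weight saturates after 2 seconds of input."""
    raw = (0.5 * min(duration_s / 2.0, 1.0)
           + 0.5 * min(pressure / max_pressure, 1.0))
    return round(raw, 3)

# A long, full-pressure stroke saturates at 1.0; a brief light one stays low.
strong = stroke_weight(2.0, 1.0)   # 1.0
light = stroke_weight(0.2, 0.1)    # 0.1
```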
[Claim 8 is abandoned due to the registration fee.] The method of claim 6,
further comprising, after the extraction step,
an effect providing step in which the content providing server applies, to the intersection area or the complement area,
a predetermined effect; or
a setting effect provided from the client terminal of the first user and/or the second user.
The method of claim 6,
further comprising, after the extraction step,
an effect providing step in which the content providing server applies, to any one of the first area, the second area, the third area, the fourth area, the 1-1 area, the 1-2 area, the first matching area, the second matching area, the intersection area, and the complement area,
a predetermined effect; or
a setting effect provided from the client terminal of the first user and/or the second user.
The method of claim 8,
further comprising, after the effect providing step, an effect operation step in which the content providing server:
outputs the image selected by the first user to the client terminal of the first user and/or the second user,
overlays a fingerprint of the first user and a fingerprint of the second user on the intersection area of the image selected by the first user, and,
when the first user or the second user touches or clicks the intersection area inside the image selected by the first user as output to the client terminal,
provides the predetermined effect, or the setting effect provided from the client terminal of the first user and/or the second user, to the client terminal of the first user and/or the second user.
The method of claim 10,
wherein, in the effect operation step,
when the content providing server outputs the image selected by the first user to the client terminal of the first user and/or the second user,
a fifth area, which is the first matching area excluding the intersection area, within the image selected by the first user,
a sixth area, which is the second matching area excluding the intersection area, within the image selected by the first user, and
the intersection area
are each output with a different effect, a different color, or a different color density.
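The distinct rendering of claim 11 (fifth area, sixth area, and intersection area each drawn differently) can be sketched as a per-pixel color map; the specific color choices below are illustrative assumptions:

```python
def render_areas(first_matching, second_matching):
    """Assign a display color per pixel: the 'fifth' and 'sixth' areas of
    claim 11 each get their own color, and the shared intersection area
    stands out with a third."""
    inter = first_matching & second_matching
    fifth = first_matching - inter    # first matching area minus the overlap
    sixth = second_matching - inter   # second matching area minus the overlap
    colors = {}
    for px in fifth:
        colors[px] = "red"            # illustrative color choices
    for px in sixth:
        colors[px] = "blue"
    for px in inter:
        colors[px] = "purple"
    return colors

colors = render_areas({1, 2, 3}, {3, 4})
# Pixel 3 lies in both matching areas, so it receives the intersection color.
```

Varying opacity instead of hue would realize the claim's "different color density" alternative with the same structure.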
[Claim 12 is abandoned due to the registration fee.] The method of claim 8,
wherein the predetermined effect, or the setting effect provided from the client terminal of the first user and/or the second user, is one of:
outputting one of today's fortune, a compatibility between the first user and the second user, a specific voice, a specific image, and a specific video; and
providing a chat window between the first user and the second user, or a voice call or video call connection between the first user and the second user.
delete
delete
delete
delete
delete
KR1020150188431A 2015-12-29 2015-12-29 Method for providing interactive multimedia contents KR101800847B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150188431A KR101800847B1 (en) 2015-12-29 2015-12-29 Method for providing interactive multimedia contents

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150188431A KR101800847B1 (en) 2015-12-29 2015-12-29 Method for providing interactive multimedia contents

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020170139626A Division KR20170125304A (en) 2017-10-25 2017-10-25 Method for providing multimedia contents

Publications (2)

Publication Number Publication Date
KR20170078173A KR20170078173A (en) 2017-07-07
KR101800847B1 true KR101800847B1 (en) 2017-11-23

Family

ID=59353763

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150188431A KR101800847B1 (en) 2015-12-29 2015-12-29 Method for providing interactive multimedia contents

Country Status (1)

Country Link
KR (1) KR101800847B1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014222439A (en) * 2013-05-14 2014-11-27 ソニー株式会社 Information processing apparatus, part generating and using method, and program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014222439A (en) * 2013-05-14 2014-11-27 ソニー株式会社 Information processing apparatus, part generating and using method, and program

Also Published As

Publication number Publication date
KR20170078173A (en) 2017-07-07

Similar Documents

Publication Publication Date Title
CN111294663B (en) Bullet screen processing method and device, electronic equipment and computer readable storage medium
CN108108821B (en) Model training method and device
US10547571B2 (en) Message service providing method for message service linked to search service and message server and user terminal to perform the method
KR101821358B1 (en) Method and system for providing multi-user messenger service
CN106488252B (en) Live broadcast room list processing method and device
WO2019242222A1 (en) Method and device for use in generating information
CN109309844B (en) Video speech processing method, video client and server
US20170277703A1 (en) Method for Displaying Webpage and Server
KR20150130476A (en) Techniques for language translation localization for computer applications
KR101567555B1 (en) Social network service system and method using image
JP6607876B2 (en) Coordinating input method editor (IME) activity between virtual application client and server
CN109582904B (en) Published content modification method, device, server, terminal and storage medium
CN115937033A (en) Image generation method and device and electronic equipment
CN106774852B (en) Message processing method and device based on virtual reality
KR101800847B1 (en) Method for providing interactive multimedia contents
CN116010899A (en) Multi-mode data processing and pre-training method of pre-training model and electronic equipment
KR20170125304A (en) Method for providing multimedia contents
KR102516831B1 (en) Method, computer device, and computer program for providing high-definition image of region of interest using single stream
US11704885B2 (en) Augmented reality (AR) visual display to save
KR20170055345A (en) Social Network Service and Program using Cartoon Image Extraction and Transformation system and method using image
CN108449643B (en) Cross-application control method and device
CN110460512B (en) System message generation method, device, server and storage medium
CN112637677A (en) Bullet screen processing method and device, electronic equipment and storage medium
CN112584197B (en) Method and device for drawing interactive drama story line, computer medium and electronic equipment
CN110942306A (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
N231 Notification of change of applicant
GRNT Written decision to grant