KR20120119244A - Method for producing contents, system thereof and terminal thereof - Google Patents
- Publication number
- KR20120119244A (application number KR1020110037038A)
- Authority
- KR
- South Korea
- Prior art keywords
- information
- terminal
- character
- face
- fitting
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Abstract
Description
The present invention relates to a content production method, a system therefor, and a terminal therefor. More particularly, the present invention relates to a content production method for synthesizing a user's face or another person's face onto a character, and to a system and a terminal therefor.
With the development of mobile communication networks and improvements in terminal specifications, mobile communication terminals have become a necessity of modern life and are evolving into total entertainment devices, going beyond conventional simple communication or information-providing devices.
In particular, the Internet is an open network that allows anyone, anywhere in the world, to connect freely to other computers using a common protocol, the Internet Protocol (IP), and it is widely used to convey information. As business has moved onto the Internet, sites providing various contents, such as internet advertisements, internet broadcasting, online games, internet newspapers and magazines, search services, portal services, and e-commerce, are rapidly increasing.
UCC (User Created Contents) refers to multimedia content that users create themselves, without commercial intent, and publish online. As information and communication technologies such as the Internet, digital cameras, and smartphones have developed, UCC has spread even among non-specialists, often producing information faster and more meaningfully than conventional media. UCC is not limited to video clips; content produced in 2D or 3D animation is also spreading rapidly. Accordingly, users need authoring technology through which they can freely express themselves and create multimedia content directly.
An object of an embodiment of the present invention is to provide a content production method that selects a character according to a user's request, scans background information for the character, recognizes a face included in image data to extract face information, and synthesizes by applying fitting information, generated by combining the face information and the background information, to the character, together with a system and a terminal therefor.
A content production system according to an embodiment of the present invention for achieving the above object comprises: a service device including a service control unit that receives face information and background information for at least one character from a terminal, generates fitting information by combining the face information and the background information, and transmits the generated fitting information to the terminal; and a terminal that selects at least one character according to a user's request, scans background information for the character, extracts face information by recognizing at least one face included in image data, transmits the face information and the background information to the service device, receives the fitting information generated by matching the face information and the background information from the service device, and synthesizes by applying the received fitting information to the character.
A terminal according to an embodiment of the present invention comprises: a terminal communication unit for transmitting and receiving data for content creation; a display unit for presenting data transmitted and received through the terminal communication unit; and a terminal control unit that selects at least one character according to a user's request, scans background information for the character, extracts face information by recognizing at least one face included in image data, generates fitting information by combining the face information and the background information, and controls synthesis by applying the fitting information to the selected character.
In addition, in the terminal according to the present invention, the background information corresponds to at least one of the hairstyle, costume style, and accessories applied to the character.
In addition, in the terminal according to the present invention, the terminal control unit transmits the face information and the background information to the service device, receives fitting information generated by matching the face information and the background information from the service device, and synthesizes by applying the received fitting information.
In the terminal according to the present invention, the fitting information is generated by matching the position value and size of the face information with the background information.
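This matching of the face's position value and size against the background information can be sketched as follows. This is a minimal illustration, not the patented implementation; all field names (`face_slot`, `scale_x`, and so on) are assumptions, since the publication does not specify any data formats.

```python
from dataclasses import dataclass

# Hypothetical structures; the patent does not define field names.
@dataclass
class FaceInfo:
    x: int       # position value of the recognized face
    y: int
    width: int   # size of the recognized face
    height: int

@dataclass
class BackgroundInfo:
    hairstyle: str
    costume: str
    accessory: str
    face_slot: tuple  # (x, y, w, h) region of the character reserved for a face

def generate_fitting_info(face: FaceInfo, background: BackgroundInfo) -> dict:
    """Match the face's position value and size against the character's
    face slot, producing the scale and offset needed to fit the face."""
    sx, sy, sw, sh = background.face_slot
    return {
        "scale_x": sw / face.width,
        "scale_y": sh / face.height,
        "offset_x": sx - face.x,
        "offset_y": sy - face.y,
        "hairstyle": background.hairstyle,
        "costume": background.costume,
        "accessory": background.accessory,
    }
```

A caller would pass the recognized face box and the scanned background information and receive the transform plus styling fields as one fitting-information record.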
A service apparatus according to an embodiment of the present invention comprises: a service communication unit for transmitting and receiving data for content production with the terminal; a service storage unit for storing data for content production; and a service control unit configured to receive face information and background information for at least one character from the terminal, generate fitting information by combining the face information and the background information, and transmit the generated fitting information to the terminal.
In the service apparatus according to the present invention, the service communication unit, the service storage unit or the service control unit may be implemented as one or more servers operating on a cloud computing basis.
A content production method according to another aspect of the present invention comprises: selecting, by the terminal, at least one character according to a user's request; scanning, by the terminal, background information for the character; extracting, by the terminal, face information by recognizing at least one face included in image data; generating fitting information by combining the face information and the background information; and synthesizing, by the terminal, by applying the fitting information to the selected character.
In addition, in the content production method according to the present invention, in the scanning step, if the character is an animal, the terminal scans background information corresponding to at least one of the animal's whiskers, mane, hair, body color, ear size, and the position or size of spots.
In addition, in the content production method according to the present invention, in the extracting step, the terminal analyzes the image data to extract face information corresponding to at least one of the skin color of the face, the size of the ears, and the position or size of spots.
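A toy version of this extraction step might average the pixel colors inside a detected face box to estimate skin color. This is a hedged sketch only: the image is a plain nested list of RGB tuples, and a real terminal would run an actual face detector rather than being handed a bounding box.

```python
# Hypothetical sketch: extract simple face information (average skin
# color inside a given face box) from an image held as a nested list
# of (R, G, B) tuples. The box format (x, y, w, h) is an assumption.
def extract_face_info(image, face_box):
    x, y, w, h = face_box
    pixels = [image[row][col]
              for row in range(y, y + h)
              for col in range(x, x + w)]
    n = len(pixels)
    avg = tuple(sum(p[c] for p in pixels) // n for c in range(3))
    return {"box": face_box, "skin_color": avg}
```

The returned record stands in for the "face information" the text describes; position and size come from the box, skin color from the averaged pixels.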
In a content providing method according to the present invention, the terminal selects at least one character according to a user's request, scans background information for the character, extracts face information by recognizing at least one face included in the image data, transmits the face information and the background information to the service device, receives fitting information generated by matching the face information and the background information from the service device, and synthesizes by applying the received fitting information to the character.
According to the present invention, facial features can be synthesized naturally by expressing the features of both the character and the face to be synthesized as if they were real.
In addition, a user interface is provided so that the user can perform character synthesis with a simple operation.
In addition, the synthesized result image can be produced naturally by applying predetermined image processing and graphic techniques to the character and the image data.
In addition, by keeping information on the characters and the image data in a database, the synthesis speed can be increased.
FIG. 1 is a block diagram showing a content production system according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a configuration of a terminal according to an exemplary embodiment of the present invention.
FIG. 3 is a block diagram illustrating a configuration of a service apparatus according to an exemplary embodiment of the present invention.
FIG. 4 is a flowchart illustrating an operation of a terminal according to a content production method according to an embodiment of the present invention.
FIGS. 5A to 5D are exemplary diagrams for describing a content production method according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating a data flow of a terminal and a service apparatus according to an exemplary embodiment of the present invention.
FIG. 7 is a flowchart illustrating an operation of a terminal interworking with a service apparatus according to an exemplary embodiment of the present invention.
FIG. 8 is a flowchart illustrating an operation of a service apparatus interworking with a terminal according to an exemplary embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description and the accompanying drawings, detailed description of well-known functions or constructions that may obscure the subject matter of the present invention will be omitted. In addition, it should be noted that like elements are denoted by the same reference numerals as much as possible throughout the drawings.
The terms and words used in this specification and the claims below should not be construed as limited to their ordinary or dictionary meanings; rather, based on the principle that an inventor may appropriately define terms to describe his or her own invention in the best way, they should be interpreted according to the technical spirit of the present invention. Therefore, the embodiments described herein and the configurations shown in the drawings are merely preferred embodiments and do not represent all of the technical ideas of the present invention; it should be understood that various equivalents and modifications are possible.
Hereinafter, a mobile communication terminal that can connect to a communication network to produce character content will be described as a representative example of a terminal according to an embodiment of the present invention. However, the terminal is not limited to mobile communication terminals and may be any of various terminals, such as information communication devices, multimedia terminals, wired terminals, fixed terminals, and IP (Internet Protocol) terminals. The terminal may also advantageously be a mobile device with various mobile communication specifications, such as a mobile phone, a portable multimedia player (PMP), a mobile Internet device (MID), a smartphone, a desktop, a tablet PC, a notebook, or another information communication device.
FIG. 1 is a block diagram showing a content production system according to an embodiment of the present invention.
Referring to FIG. 1, a content production system 100 according to an embodiment of the present invention includes a terminal 10, a service device 20, and a communication network 30.
The terminal 10 exchanges data for content production with the service device 20 through the communication network 30.
The terminal 10 selects a character according to a user's request and scans background information for the character. In addition, the terminal 10 extracts face information by recognizing a face included in the image data. Thereafter, the terminal 10 generates fitting information by combining the face information and the background information, and synthesizes by applying the fitting information to the selected character.
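The standalone flow just described (select character, scan background information, recognize face, generate fitting information, synthesize) can be sketched end to end. Everything here is illustrative: the character database, the stub recognizer, and all field names are assumptions, not the patent's actual data structures.

```python
# Hypothetical end-to-end sketch of the terminal-only flow described
# above. The character entries stand in for the terminal's character DB.

CHARACTER_DB = {
    "rabbit": {"hairstyle": "ears-up", "costume": "overalls",
               "face_slot": (40, 30, 64, 64)},  # region reserved for a face
}

def select_character(name):
    return CHARACTER_DB[name]

def scan_background_info(character):
    # Background info: hairstyle / costume style / accessories.
    return {k: v for k, v in character.items() if k != "face_slot"}

def recognize_face(image_data):
    # Stub recognizer: a real terminal would run face detection here.
    return {"x": 10, "y": 10, "w": 32, "h": 32}

def generate_fitting(face, character):
    slot = character["face_slot"]
    return {"target": slot, "scale": slot[2] / face["w"]}

def synthesize(character_name, image_data):
    character = select_character(character_name)
    background = scan_background_info(character)
    face = recognize_face(image_data)
    fitting = generate_fitting(face, character)
    return {"character": character_name, "background": background,
            "fitting": fitting}
```

Calling `synthesize("rabbit", raw_bytes)` walks all five steps and returns the assembled result in one record.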
The service device 20 receives the face information and the background information from the terminal 10, generates fitting information by combining them, and transmits the generated fitting information to the terminal 10.
Through this, facial features can be synthesized naturally by expressing the features of both the character and the face to be synthesized. In addition, a user interface is provided so that the user can perform character synthesis with a simple operation. In addition, the synthesized result image can be produced naturally by applying predetermined image processing and graphic techniques to the character and the image data. In addition, by keeping information on the characters and the image data in a database, the synthesis speed can be increased.
FIG. 2 is a block diagram illustrating a configuration of a terminal according to an exemplary embodiment of the present invention.
Referring to FIG. 2, the terminal 10 according to an embodiment of the present invention includes a terminal control unit 11, a display unit 13, a storage unit 14, an audio processor 15, a terminal communication unit 16, and a camera unit 17.
The
The
In addition, when the terminal 10 according to the embodiment of the present invention interlocks with the
The
The
In order to more effectively perform the function of the terminal 10, the
The
The
The
The data area is an area where data generated according to use of the terminal 10 is stored. In particular, the data area according to an embodiment of the present invention stores all data for content creation. Here, the data area includes an
The
The
The wireless communication module is a component for transmitting and receiving data according to a wireless communication method. When the terminal 10 uses wireless communication, data is transmitted and received using any one of a wireless network communication module, a wireless LAN communication module, and a WPAN (Wireless Personal Area Network) communication module.
The wireless network communication module is for connecting to the
The WLAN communication module is for performing communication according to a WLAN, Wi-Fi, or WiMAX method. When the WLAN communication module receives data from the
The WPAN communication module transmits and receives data according to the WPAN (Wireless Personal Area Network) method, and performs shorter-range wireless communication than the wireless network communication module and the wireless LAN communication module. The WPAN communication module may transmit and receive data directly between terminals; that is, data can be exchanged directly with other terminals through it. In addition, the WPAN communication module may be connected to the
The wired communication module is for transmitting and receiving data by wire. The wired communication module may be connected to the
The
FIG. 3 is a block diagram illustrating a configuration of a service apparatus according to an exemplary embodiment of the present invention.
Referring to FIG. 3, the
The
The
The
In addition, the
FIG. 4 is a flowchart illustrating an operation of a terminal according to a content production method according to an embodiment of the present invention, and FIGS. 5A to 5D are exemplary diagrams for explaining the content production method.
Referring to FIG. 4, in the content production method according to an embodiment of the present invention, the
The
The
Referring to FIGS. 5A and 5B, when the
Referring to FIGS. 5C and 5D, when the
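The final synthesis illustrated in FIGS. 5A to 5D — placing the face into the character's reserved region according to the fitting information — might look like the following nearest-neighbor paste. The image representation (nested lists of pixel values) and the `target` field are assumptions for illustration only.

```python
# Hypothetical sketch of the final synthesis step: paste a face patch
# into the character image at the fitted position, with nearest-neighbor
# scaling. Images are nested lists of pixel values.
def apply_fitting(character_img, face_patch, fitting):
    x, y, w, h = fitting["target"]          # region reserved on the character
    src_h, src_w = len(face_patch), len(face_patch[0])
    out = [row[:] for row in character_img]  # copy; keep original intact
    for dy in range(h):
        for dx in range(w):
            sy = dy * src_h // h             # nearest-neighbor sample
            sx = dx * src_w // w
            out[y + dy][x + dx] = face_patch[sy][sx]
    return out
```

A production implementation would also blend edges and match colors so the result looks natural, as the text describes; this sketch shows only the geometric placement.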
Through this, facial features can be synthesized naturally by expressing the features of both the character and the face to be synthesized. In addition, a user interface is provided so that the user can perform character synthesis with a simple operation. In addition, the synthesized result image can be produced naturally by applying predetermined image processing and graphic techniques to the character and the image data. In addition, by keeping information on the characters and the image data in a database, the synthesis speed can be increased.
FIG. 6 is a diagram illustrating a data flow of a terminal and a service apparatus according to an exemplary embodiment of the present invention.
Referring to FIG. 6, in the data flow between the terminal 10 and the
The terminal 10 extracts face information by recognizing a face included in the image data in step S37. Here, the terminal 10 may acquire image data by photographing or downloading through the
If the face information of the image data and the background information of the character are received from the terminal 10 in step S41, the
The terminal 10 receives the fitting information generated by matching the face information and the background information from the
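The interworking flow of FIG. 6 — the terminal transmitting face and background information, the service device matching them into fitting information, and the terminal synthesizing with the result — can be simulated with direct function calls in place of the network. Field names and the step numbers in the comments are assumptions made for illustration.

```python
# Hypothetical sketch of the terminal / service-device exchange in
# FIG. 6, with the communication network replaced by function calls.

def service_generate_fitting(face_info, background_info):
    # Service device (20): match the face's position value and size
    # with the background information (steps S41-S43, assumed).
    sx, sy, sw, sh = background_info["face_slot"]
    return {"offset": (sx - face_info["x"], sy - face_info["y"]),
            "scale": sw / face_info["w"],
            "style": background_info["style"]}

def terminal_flow(face_info, background_info):
    # Terminal (10): transmit both, receive fitting info, synthesize.
    fitting = service_generate_fitting(face_info, background_info)
    return {"synthesized": True, "fitting": fitting}

result = terminal_flow({"x": 5, "y": 5, "w": 50, "h": 50},
                       {"face_slot": (20, 10, 100, 100), "style": "hanbok"})
```

Replacing `service_generate_fitting` with a real request over the communication network 30 would recover the client-server split the figure describes.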
FIG. 7 is a flowchart illustrating an operation of a terminal interworking with a service apparatus according to an exemplary embodiment of the present invention.
Referring to FIG. 7, the operation of the terminal 10 interworking with the
The terminal 10 acquires image data in step S57. In this case, the terminal 10 may acquire image data by shooting or downloading through the
In step S61, the terminal 10 transmits face information and background information to the
FIG. 8 is a flowchart illustrating an operation of a service apparatus interworking with a terminal according to an exemplary embodiment of the present invention.
Referring to FIG. 8, the operation of the
The content production method according to the present invention may be implemented as software readable by various computer means and recorded on a computer-readable recording medium. Here, the recording medium may include program commands, data files, data structures, and the like, alone or in combination. Program instructions recorded on the recording medium may be those specially designed and constructed for the present invention, or they may be of the kind well known and available to those skilled in the computer software arts. For example, the recording medium may be magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as Compact Disc Read-Only Memory (CD-ROM) and digital video discs (DVD); magneto-optical media such as floptical disks; or hardware devices specially configured to store and execute program instructions, such as ROM, random access memory (RAM), and flash memory. Examples of program instructions include machine language code such as that generated by a compiler, as well as high-level language code that may be executed by a computer using an interpreter. Such a hardware device may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
It should be noted that the embodiments of the present invention disclosed in the present specification and drawings are only illustrative of specific examples for the purpose of understanding and are not intended to limit the scope of the present invention. It is apparent to those skilled in the art that other modifications based on the technical idea of the present invention can be carried out in addition to the embodiments disclosed herein.
In the present invention, various types of user devices, such as mobile communication terminals, PMPs, PDAs, notebook computers, and MP3 players, can synthesize a user's face or another person's face onto a character when producing content. Through this, facial features can be synthesized naturally by expressing the features of both the character and the face to be synthesized. In addition, a user interface is provided so that the user can perform character synthesis with a simple operation. In addition, the synthesized result image can be produced naturally by applying predetermined image processing and graphic techniques to the character and the image data. In addition, by keeping information on the characters and the image data in a database, the synthesis speed can be increased.
10: terminal
11:
11b:
13: display unit
14:
14b: character DB
15: audio processor
16: terminal communication unit
17: camera unit
20: service device
21: service control unit
22: service storage unit
23: service communication unit
30: communication network
100: content creation system
Claims (11)
A terminal which selects at least one character according to a user's request, scans background information for the character, extracts face information by recognizing at least one face included in image data, transmits the face information and the background information to a service apparatus, receives fitting information generated by matching the face information and the background information from the service apparatus, and synthesizes by applying the received fitting information to the character;
A content production system comprising the foregoing.
A display unit for providing data transmitted and received through the terminal communication unit to a screen; And
A terminal control unit which selects at least one character according to a user's request, scans background information for the character, extracts face information by recognizing at least one face included in image data, generates fitting information by combining the face information and the background information, and controls synthesis by applying the fitting information to the selected character;
A terminal comprising the foregoing.
The terminal, wherein the background information corresponds to at least one of the hairstyle, clothing style, and accessories applied to the character.
The terminal, wherein the terminal control unit transmits the face information and the background information to the service device, receives fitting information generated by matching the face information and the background information from the service device, and synthesizes by applying the received fitting information.
The terminal, wherein the fitting information is generated by matching the position value and size of the face information with the background information.
A service storage unit storing data for producing the content; And
A service controller configured to receive face information and background information of at least one character from the terminal, generate fitting information by combining the face information and background information, and transmit the generated fitting information to the terminal;
A service apparatus comprising the foregoing.
The service apparatus, wherein the service communication unit, the service storage unit, or the service control unit is implemented as one or more servers operating on a cloud computing basis.
Scanning, by the terminal, background information about the character;
Extracting, by the terminal, face information by recognizing at least one face included in one image data;
Generating, by the terminal, fitting information by combining the face information and the background information; And
Synthesizing, by the terminal, by applying the fitting information to the selected character;
A content production method comprising the foregoing steps.
The content production method, wherein, if the character is an animal, the terminal scans background information corresponding to at least one of the animal's whiskers, mane, hair, body color, ear size, and the position or size of spots.
The content production method, wherein the terminal extracts face information corresponding to at least one of the skin color of the face, the size of the ears, and the position or size of spots by analyzing the image data.
Selecting, by the terminal, at least one character according to a user's request;
Scanning, by the terminal, background information about the character;
Extracting, by the terminal, face information by recognizing at least one face included in the image data;
Transmitting, by the terminal, the face information and the background information to a service device;
Receiving, by the terminal, fitting information generated by matching the face information with the background information from the service device; And
Synthesizing, by the terminal, by applying the received fitting information to the character;
A content production method further comprising the foregoing steps.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110037038A KR20120119244A (en) | 2011-04-21 | 2011-04-21 | Method for producing contents, system thereof and terminal thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20120119244A true KR20120119244A (en) | 2012-10-31 |
Family
ID=47286534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020110037038A KR20120119244A (en) | 2011-04-21 | 2011-04-21 | Method for producing contents, system thereof and terminal thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20120119244A (en) |
-
2011
- 2011-04-21 KR KR1020110037038A patent/KR20120119244A/en not_active Application Discontinuation
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105787087A (en) * | 2016-03-14 | 2016-07-20 | 腾讯科技(深圳)有限公司 | Matching method and device for partners in costarring video |
US10380427B2 (en) | 2016-03-14 | 2019-08-13 | Tencent Technology (Shenzhen) Company Limited | Partner matching method in costarring video, terminal, and computer readable storage medium |
US10628677B2 (en) | 2016-03-14 | 2020-04-21 | Tencent Technology (Shenzhen) Company Limited | Partner matching method in costarring video, terminal, and computer readable storage medium |
WO2018174311A1 (en) * | 2017-03-22 | 2018-09-27 | 스노우 주식회사 | Dynamic content providing method and system for facial recognition camera |
US11017567B2 (en) | 2017-03-22 | 2021-05-25 | Snow Corporation | Dynamic content providing method and system for face recognition camera |
CN108520508A (en) * | 2018-04-04 | 2018-09-11 | 掌阅科技股份有限公司 | User image optimization method, computing device and storage medium based on user behavior |
KR102198844B1 (en) * | 2019-06-26 | 2021-01-05 | 서울대학교 산학협력단 | the avatar mask up-loading service based on facial recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
N231 | Notification of change of applicant | ||
WITN | Withdrawal due to no request for examination |