KR20120119244A - Method for producing contents, system thereof and terminal thereof - Google Patents

Method for producing contents, system thereof and terminal thereof

Info

Publication number
KR20120119244A
Authority
KR
South Korea
Prior art keywords
information
terminal
character
face
fitting
Prior art date
Application number
KR1020110037038A
Other languages
Korean (ko)
Inventor
나승원
Original Assignee
에스케이플래닛 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 에스케이플래닛 주식회사 filed Critical 에스케이플래닛 주식회사
Priority to KR1020110037038A priority Critical patent/KR20120119244A/en
Publication of KR20120119244A publication Critical patent/KR20120119244A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Abstract

PURPOSE: A content production method, and a system and terminal therefor, are provided to express all the features of a face synthesized with a character as if they were real, so that the face is composited naturally. CONSTITUTION: A service device (20) receives, from a terminal (10), face information and background information for at least one character. The service device generates fitting information by combining the background information and the face information, and controls the generated fitting information to be transmitted to the terminal. The terminal selects at least one character according to a user's request and scans the background information for the character. [Reference numerals] (10) Terminal; (20) Service device; (30) Communication network

Description

Method for producing contents, system for same, and terminal for same

The present invention relates to a content production method, and to a system and terminal therefor. More particularly, the present invention relates to a content production method for synthesizing a user's face or another person's face onto a character, and to a system and terminal therefor.

With the development of mobile communication networks and improvements in terminal specifications, mobile communication terminals have become a necessity of modern life and are evolving beyond simple communication or information-providing devices into total entertainment devices.

In particular, the Internet is an open network configured so that anyone in the world can freely connect to any other computer using a common protocol, the Internet Protocol (IP), and it is widely used to convey information. As part of business conducted over the Internet, sites that provide contents such as Internet advertisements, Internet broadcasting, online games, Internet newspapers and magazines, search services, portal services, and e-commerce are increasing rapidly.

User Created Contents (UCC) refers to multimedia content that users produce themselves, without commercial intent, and publish online. As information and communication fields such as the Internet, digital cameras, and smartphones have developed, UCC has spread even among non-specialists, often delivering information faster and more meaningfully than conventional media. UCC is not limited to video clips; content produced in 2D or 3D animation is also spreading rapidly. Accordingly, users need authoring technology with which they can freely express themselves and create multimedia content directly.

An object of an embodiment of the present invention is to provide a content production method, and a system and terminal therefor, capable of selecting a character according to a user's request, scanning background information on the character, recognizing a face included in image data to extract face information, and synthesizing by applying to the character fitting information generated by combining the face information and the background information.

A content production system according to an embodiment of the present invention for achieving the above object includes: a service device that receives face information and background information for at least one character from a terminal, generates fitting information by combining the face information and the background information, and transmits the generated fitting information to the terminal; and a terminal that selects at least one character according to a user's request, scans background information on the character, extracts face information by recognizing at least one face included in image data, transmits the face information and the background information to the service device, receives from the service device the fitting information generated by matching the face information and the background information, and synthesizes by applying the received fitting information to the character.

A terminal according to an embodiment of the present invention includes: a terminal communication unit for transmitting and receiving data related to content production; a display unit for presenting data transmitted and received through the terminal communication unit; and a terminal control unit that selects at least one character according to a user's request, scans background information on the character, extracts face information by recognizing at least one face included in image data, generates fitting information by combining the face information and the background information, and controls synthesis by applying the fitting information to the selected character.

In addition, in the terminal according to the present invention, the background information is information corresponding to at least one of a hairstyle, a costume style, and accessories applied to the character.

In addition, in the terminal according to the present invention, the terminal control unit transmits the face information and the background information to the service device, receives from the service device the fitting information generated by matching the face information and the background information, and synthesizes by applying the received fitting information to the character.

In the terminal according to the present invention, the fitting information is generated by matching the position value and size of the face information with the background information.

A service apparatus according to an embodiment of the present invention includes: a service communication unit for transmitting and receiving data related to content production with the terminal; a service storage unit for storing data for content production; and a service control unit configured to receive face information and background information for at least one character from the terminal, generate fitting information by combining the face information and the background information, and transmit the generated fitting information to the terminal.

In the service apparatus according to the present invention, the service communication unit, the service storage unit or the service control unit may be implemented as one or more servers operating on a cloud computing basis.

A content production method according to another aspect of the present invention includes: selecting, by the terminal, at least one character according to a user's request; scanning, by the terminal, background information on the character; extracting, by the terminal, face information by recognizing at least one face included in image data; generating, by the terminal, fitting information by combining the face information and the background information; and synthesizing, by the terminal, by applying the fitting information to the selected character.

In addition, in the content production method according to the present invention, in the scanning step, if the character is an animal, the terminal scans background information corresponding to at least one of the animal's whiskers, mane, fur, body color, neck size, and the position or size of its spots.

In addition, in the content production method according to the present invention, in the extracting step, the terminal analyzes the image data to extract face information corresponding to at least one of the skin color of the face, the size of the neck, and the position or size of any spots.

A content production method according to another aspect of the present invention includes: selecting, by the terminal, at least one character according to a user's request; scanning, by the terminal, background information on the character; extracting, by the terminal, face information by recognizing at least one face included in the image data; transmitting, by the terminal, the face information and the background information to a service device; receiving, by the terminal, from the service device the fitting information generated by matching the face information and the background information; and synthesizing by applying the received fitting information to the character.

According to the present invention, facial features can be synthesized naturally because all the features of the character and of the face to be synthesized are rendered as if they were real.

In addition, a user interface is provided so that the user can perform character synthesis with simple operations.

In addition, the synthesis result image may be naturally produced by applying a predetermined image processing technique and a graphic technique to the character and the image data.

In addition, by using information on the character and the image data in a database, the synthesis speed can be increased.

FIG. 1 is a block diagram showing a content production system according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a configuration of a terminal according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating a configuration of a service apparatus according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating an operation of a terminal according to a content production method according to an embodiment of the present invention.
FIGS. 5A to 5D are exemplary diagrams for describing a content production method according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating a data flow between a terminal and a service apparatus according to an embodiment of the present invention.
FIG. 7 is a flowchart illustrating an operation of a terminal interworking with a service apparatus according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating an operation of a service apparatus interoperating with a terminal according to an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description and drawings, detailed descriptions of well-known functions or constructions that might obscure the subject matter of the present invention are omitted. Note that, throughout the drawings, like elements are denoted by like reference numerals wherever possible.

The terms and words used in this specification and the claims below should not be construed as limited to their ordinary or dictionary meanings; on the principle that an inventor may appropriately define terms to describe his or her own invention in the best way, they should be interpreted with meanings and concepts that accord with the technical spirit of the present invention. The embodiments described in this specification and the configurations shown in the drawings are therefore merely the most preferred embodiments of the present invention and do not represent all of its technical spirit, and it should be understood that various equivalents and modifications are possible.

Hereinafter, a terminal according to an embodiment of the present invention is described using, as a representative example, a mobile communication terminal that can connect to a communication network to produce a character. However, the terminal is not limited to mobile communication terminals and can be applied to various terminals, including information communication devices, multimedia terminals, wired terminals, fixed terminals, and Internet Protocol (IP) terminals. The terminal can also be used advantageously as a mobile device with various mobile communication specifications, such as a mobile phone, a portable multimedia player (PMP), a mobile Internet device (MID), a smartphone, a desktop, a tablet PC, or a notebook.

FIG. 1 is a block diagram showing a content production system according to an embodiment of the present invention.

Referring to FIG. 1, a content production system 100 according to an embodiment of the present invention includes a terminal 10, a service device 20, and a communication network 30.

The terminal 10 may be connected to the service apparatus 20 through the communication network 30. Here, the terminal 10 may connect to the service device 20 in various ways according to the protocols supported by the communication network 30. In particular, the terminal 10 selects a character according to a user's request and scans background information on the selected character. At this time, the terminal 10 checks the background information representing the characteristics of the character, including hair, clothes, and accessories such as glasses and hats. For example, when the character is an animal character, the background information may further include its whiskers, mane, fur, neck, and spots.

The terminal 10 extracts face information by recognizing a face included in the image data. At this time, the terminal 10 identifies the face included in the image data and checks the position value and size of the face information corresponding to the pupils, eyes, chin, ears, lips, and eyebrows of the identified face. Here, the terminal 10 may display virtual guide lines corresponding to the confirmed position values. Then, the terminal 10 transmits the face information and the background information to the service device 20.
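
As an illustration of this step, the sketch below detects a face and reports its position value and size with OpenCV's stock Haar cascade. The FaceInfo structure and function name are assumptions for illustration; the finer-grained features named above (pupils, chin, lips) would require an additional landmark detector.

```python
import cv2
from dataclasses import dataclass

@dataclass
class FaceInfo:
    x: int       # position value of the detected face region
    y: int
    width: int   # size of the detected face region
    height: int

def extract_face_info(image_path: str) -> list:
    """Detect faces in image data and return their position values and sizes."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [FaceInfo(int(x), int(y), int(w), int(h)) for (x, y, w, h) in faces]
```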

The terminal 10 receives, from the service apparatus 20, fitting information generated by matching the face information and the background information, and synthesizes by applying the received fitting information to a character. Here, the fitting information is generated by matching the position value and size of the face information with the background information.

Alternatively, the terminal 10 may perform the whole process itself: it selects a character according to a user's request, scans background information on the character, extracts face information by recognizing a face included in the image data, generates fitting information by combining the face information and the background information, and synthesizes by applying the fitting information to the selected character.

The service device 20 transmits and receives data related to character production to and from the terminal 10 through the communication network 30. In particular, the service device 20 receives face information and background information for a character from the terminal 10, and generates fitting information by combining them. Here, the service device 20 may check the face information and the background information and generate the fitting information by matching them; that is, the service device 20 matches the background information to the position value and size of the face information. The fitting information thus includes the time at which the face information was recognized, the position value and size of the face information, and the data size or shape of the face information; more generally, it includes all data generated by combining or changing the face information and the background information for character synthesis. Thereafter, the service device 20 transmits the generated fitting information to the terminal 10.
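
A minimal sketch of how fitting information could be derived by matching the position value and size of the face information to the character's face region taken from the background information; the Region and FittingInfo structures and all field names are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    width: int
    height: int

@dataclass
class FittingInfo:
    scale: float    # resize factor that matches the face size to the character
    offset_x: int   # position value at which the face lands on the character
    offset_y: int

def generate_fitting_info(face: Region, face_slot: Region) -> FittingInfo:
    # Match the recognized face's size to the character's face region and
    # record the position value at which it should be applied.
    return FittingInfo(scale=face_slot.width / face.width,
                       offset_x=face_slot.x,
                       offset_y=face_slot.y)
```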

Through this, facial features can be synthesized naturally, since all the features of the character and of the face to be synthesized are rendered as if real. In addition, a user interface is provided so that the user can perform character synthesis with simple operations. The synthesized result image can also be produced naturally by applying predetermined image processing and graphics techniques to the character and the image data, and keeping information on the character and the image data in a database increases the synthesis speed.

FIG. 2 is a block diagram illustrating a configuration of a terminal according to an embodiment of the present invention.

Referring to FIG. 2, the terminal 10 according to an embodiment of the present invention includes a terminal controller 11, an input unit 12, a display unit 13, a terminal storage unit 14, an audio processor 15, a terminal communication unit 16, and a camera unit 17. Here, the terminal controller 11 includes a face recognition module 11a, a background recognition module 11b, and a character production module 11c, and the terminal storage unit 14 includes an image DB 14a and a character DB 14b.

The terminal controller 11 may be a processing device, such as a central processing unit (CPU), that drives an operating system (OS) and each component. When the terminal 10 is powered on, the terminal controller 11 loads the operating system from the auxiliary storage device into the main storage device, boots the operating system, and performs the necessary signal control. In particular, the terminal controller 11 according to an embodiment of the present invention selects a character according to a user's request and scans background information on the selected character. Here, the background information may be information such as a hairstyle, a costume style, and accessories applied to the character; that is, the terminal controller 11 checks background information representing the characteristics of the character, including hair, clothes, and accessories such as glasses and hats. Meanwhile, when the character selected by the user is an animal, the terminal controller 11 may scan background information corresponding to its whiskers, mane, fur, body color, neck size, and the position or size of its spots.

The terminal controller 11 recognizes a face included in the image data and extracts face information. Here, the terminal controller 11 analyzes the image data and extracts face information corresponding to the skin color of the face, the size of the neck, and the position or size of any spots. At this time, the terminal controller 11 identifies the face included in the image data and checks the position value and size of the face information corresponding to the pupils, eyes, chin, ears, lips, and eyebrows of the identified face. Thereafter, the terminal controller 11 generates fitting information by combining the face information and the background information, and synthesizes by applying the fitting information to the selected character. Here, the terminal controller 11 may check the face information and the background information and match them to generate the fitting information; in this case, the terminal controller 11 matches the background information to the position value and size of the face information. The fitting information includes the time at which the face information was recognized, the position value and size of the face information, and the data size or shape of the face information, and more generally all data generated by combining or changing the face information and the background information for character synthesis.
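
The application step might then look like the sketch below: resize the face region according to the fitting information and paste it at the character's face slot. This is a deliberate simplification; the matching of skin tone and individual features described above is omitted.

```python
import cv2
import numpy as np

def apply_fitting(character_img: np.ndarray, face_img: np.ndarray,
                  fitting) -> np.ndarray:
    """Apply fitting information to a character image (simplified paste)."""
    h, w = face_img.shape[:2]
    new_w, new_h = int(w * fitting.scale), int(h * fitting.scale)
    resized = cv2.resize(face_img, (new_w, new_h))
    out = character_img.copy()
    y, x = fitting.offset_y, fitting.offset_x
    out[y:y + new_h, x:x + new_w] = resized   # paste at the recorded position
    return out
```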

In addition, when the terminal 10 according to the embodiment of the present invention interworks with the service apparatus 20, the terminal controller 11 selects a character according to a user's request and scans background information on the selected character. Here, the background information may be information such as a hairstyle, a costume style, and accessories applied to the character. Meanwhile, when the character selected by the user is an animal, the terminal controller 11 may scan background information corresponding to its whiskers, mane, fur, body color, neck size, and the position or size of its spots.

The terminal controller 11 recognizes a face included in the image data and extracts face information. Here, the terminal controller 11 analyzes the image data and extracts face information corresponding to the skin color of the face, the size of the neck, and the position or size of any spots. Then, the terminal controller 11 transmits the face information and the background information to the service device 20.

The terminal controller 11 receives, from the service device 20, fitting information generated by matching the face information and the background information, and synthesizes by applying the received fitting information to a character.

To perform the functions of the terminal 10 more effectively, the terminal controller 11 includes a face recognition module 11a, a background recognition module 11b, and a character production module 11c. The face recognition module 11a extracts face information by recognizing a face included in image data captured through the camera unit 17 or downloaded; it may analyze the image data and extract face information corresponding to the skin color of the face, the size of the neck, and the position or size of any spots. The background recognition module 11b scans the background information on the character selected at the user's request; it may check background information such as a hairstyle, a costume style, and accessories applied to the character, and when the character is an animal it scans background information corresponding to the animal's whiskers, mane, fur, body color, neck size, and the position or size of its spots. The character production module 11c generates fitting information by combining the face information and the background information, and generates a new character by applying the generated fitting information to the character.

The input unit 12 receives various information, such as numeric and character information, and transmits to the terminal control unit 11 the signals input in connection with setting and controlling the various functions of the terminal 10. The input unit 12 may include at least one of a keypad and a touchpad that generates an input signal according to a user's touch or operation, and may be configured together with the display unit 13 as a single touch panel (or touch screen) that performs input and display functions simultaneously. Besides input devices such as a keyboard, keypad, mouse, or joystick, the input unit 12 may be any type of input device yet to be developed. In particular, the input unit 12 transmits signals input for character production to the terminal control unit 11.

The display unit 13 displays information on the operation states and operation results that arise while the functions of the terminal 10 are performed, and can display the menus of the terminal 10 and the user data input by the user. The display unit 13 may be a liquid crystal display (LCD), a thin film transistor (TFT) LCD, an organic light emitting diode (OLED) display, a light emitting diode (LED) display, an active matrix OLED (AMOLED) display, or a three-dimensional (3D) display, and may be configured in the form of a touch screen, in which case it may perform some or all of the functions of the input unit 12. In particular, the display unit 13 according to an embodiment of the present invention displays the overall progress of character production, including the image data used to collect face information, the character screen used to collect background information, and the character produced by combining the face information and the background information.

The terminal storage unit 14 is a device for storing data, and includes a main memory device and an auxiliary memory device, and stores an application program necessary for the functional operation of the terminal 10. The terminal storage unit 14 may largely include a program area and a data area. Here, when the terminal 10 activates each function in response to a user's request, the terminal 10 executes corresponding application programs under the control of the terminal controller 11 to provide each function. In particular, the program area according to an embodiment of the present invention stores an operating system for booting the terminal 10, a program for extracting face information, a program for scanning background information, a program for producing a character, and the like.

The data area is an area where data generated according to use of the terminal 10 is stored. In particular, the data area according to an embodiment of the present invention stores all data for content creation. Here, the data area includes an image DB 14a and a character DB 14b. For example, the image DB 14a stores image data photographed or downloaded through the camera unit 17. In addition, the character DB 14b stores an image of an actual character for character production.

The audio processor 15 reproduces and outputs audio signals through a speaker (SPK) and transmits audio signals input through a microphone (MIC) to the terminal controller 11. The audio processor 15 may convert an analog signal input through the microphone into digital form and transmit it to the terminal controller 11, and may convert a digital audio signal output from the terminal controller 11 into an analog signal and output it through the speaker. In particular, the audio processor 15 may output effect sounds or warning sounds generated during character production.

The terminal communication unit 16 transmits and receives data to and from the service device 20 through the communication network 30. Here, the terminal communication unit 16 includes RF transmitting means for up-converting and amplifying the frequency of a transmitted signal, and RF receiving means for low-noise amplifying and down-converting a received signal. The terminal communication unit 16 may include at least one of a wireless communication module (not shown) and a wired communication module (not shown). The wireless communication module may include at least one of a wireless network communication module, a wireless local area network (WLAN) communication module supporting WLAN, Wi-Fi, or worldwide interoperability for microwave access (WiMAX), and a wireless personal area network (WPAN) communication module.

The wireless communication module is a component for transmitting and receiving data according to a wireless communication method; when the terminal 10 uses wireless communication, data may be transmitted to or received from the service device 20 using any one of the wireless network communication module, the WLAN communication module, and the WPAN communication module.

The wireless network communication module connects to the communication network 30 through a base station to transmit and receive data. When it receives data from the terminal control unit 11, it may access the communication network 30 through the base station and transmit the data to the service device 20; likewise, it may access the communication network 30 through a base station, receive data from the service apparatus 20, and provide the received data to the terminal controller 11.

The WLAN communication module is for performing communication according to a WLAN, Wi-Fi, or WiMAX method. When the WLAN communication module receives data from the terminal controller 11, the WLAN communication module may access the communication network 30 through an access point (AP) and transmit data to the service device 20. In addition, the WLAN communication module may provide the received data to the terminal controller 11 when the WLAN communication module receives the data from the service apparatus 20 by accessing the communication network 30 through an access point.

The WPAN communication module transmits and receives data according to the wireless personal area network method, and performs shorter-range wireless communication than the wireless network communication module and the WLAN communication module. The WPAN communication module may transmit and receive data directly between terminals, that is, directly with other terminals. It may also connect to the communication network 30 directly or through multi-hop, or through a gateway, to transmit and receive data. Examples of WPAN communication include Bluetooth, infrared communication (IrDA), and ZigBee.

The wired communication module transmits and receives data by wire; it may be connected to the communication network 30 through a wire to transmit data to or receive data from the service device 20. That is, the terminal 10 may connect to the communication network 30 using the wired communication module and transmit and receive data to and from the service device 20 through the communication network 30. In particular, the terminal communication unit 16 according to an embodiment of the present invention transmits the face information and the background information to the service device 20 for character production, and may receive the fitting information from the service device 20 and pass it to the terminal control unit 11.

The camera unit 17 collects image data of the user's image. The camera unit 17 includes a camera sensor (not shown) that captures an image through a lens and converts the captured optical signal into an electrical signal, and a signal processor (not shown) that converts the analog image signal from the camera sensor into digital data. Here, the camera sensor may be a charge coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) image sensor, and the signal processor may be implemented as a digital signal processor (DSP), but they are not limited thereto. The camera unit 17 may be activated when an input signal for using the camera function is received, and transmits the collected image data to the terminal control unit 11. To perform this function effectively, the camera unit 17 may include an image processor (not shown) that generates screen data for displaying the image signal output from the camera sensor; the image processor processes the image signal from the camera sensor in units of frames and outputs frame image data matched to the characteristics and size of the display unit 13. The image processor also includes an image codec, which compresses the frame image data displayed on the display unit 13 in a set manner or restores the compressed frame image data to the original frame image data. The image codec may be a Joint Photographic Experts Group (JPEG) codec or a Moving Picture Experts Group (MPEG) codec.
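
For example, the JPEG compression and restoration role of the image codec could be exercised with OpenCV as follows (the frame source and quality value are illustrative assumptions):

```python
import cv2

frame = cv2.imread("frame.png")  # stands in for one frame from the camera sensor
ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 90])  # compress
restored = cv2.imdecode(jpeg, cv2.IMREAD_COLOR)                         # restore
```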

FIG. 3 is a block diagram illustrating a configuration of a service apparatus according to an embodiment of the present invention.

Referring to FIG. 3, the service apparatus 20 according to an exemplary embodiment of the present invention includes a service control unit 21, a service storage unit 22, and a service communication unit 23.

The service controller 21 receives, from the terminal 10, the face information on the image data and the background information on the character, and generates fitting information by combining them. Here, the service controller 21 generates the fitting information by matching the position value and size of the face information with the background information. Thereafter, the generated fitting information is transmitted to the terminal 10.

The service storage unit 22 stores all data for character production.

The service communication unit 23 functions to transmit and receive data on the character production with the terminal 10.

In addition, the service device 20 configured as described above may be implemented as one or more servers operating on a server-based computing basis or a cloud computing basis. In particular, data for character production may be stored permanently in a cloud computing device on the Internet and provided through a cloud computing function. Here, cloud computing refers to a technology that virtualizes information technology (IT) resources, such as hardware (servers, storage, networks, and so on), software (databases, security, web servers, and so on), services, and data, using Internet technologies, and provides them on demand to digital terminals such as desktops, tablet computers, laptops, netbooks, and smartphones. In the present invention, all data for content production is stored in a cloud computing device on the Internet and can be used anytime, anywhere through the terminal 10.

FIG. 4 is a flowchart illustrating an operation of a terminal according to a content production method according to an embodiment of the present invention, and FIGS. 5A to 5D are exemplary diagrams for describing a content production method according to an embodiment of the present invention.

Referring to FIG. 4, in the content production method according to an embodiment of the present invention, the terminal control unit 11 executes a character production mode in step S11 and selects a character according to a user's request in step S13. Here, the character may be a person, an animal, a virtual character, or the like. When the character is selected, the terminal controller 11 scans the background information on the selected character in step S15. For example, the background information may be information such as a hairstyle, a costume style, and accessories applied to the character; the terminal control unit 11 checks background information representing the characteristics of the character, including hair, clothes, and accessories such as glasses and hats. When the character is an animal, the background information may further include its whiskers, mane, fur, body color, neck size, and the position or size of its spots.

The terminal controller 11 acquires image data in step S17, either by photographing through the camera unit 17 or by downloading. In step S19, the terminal controller 11 recognizes a face included in the acquired image data and extracts face information. Here, the terminal controller 11 analyzes the image data and extracts face information corresponding to the skin color of the face, the size of the neck, and the position or size of any spots. At this time, the terminal control unit 11 identifies the face included in the image data and checks the position value and size of the face information corresponding to the pupils, eyes, chin, ears, lips, and eyebrows of the identified face.

In step S21, the terminal control unit 11 checks the face information and the background information to determine whether synthesis is possible. If synthesis is possible, the terminal controller 11 generates fitting information by combining the face information and the background information in step S23; here, the fitting information is generated by matching the position value and size of the face information with the background information. The terminal controller 11 then synthesizes by applying the fitting information to the selected character in step S25. If synthesis is impossible, the terminal controller 11 returns to scanning the background information on the character and extracting face information from the image data.
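
Pulling the steps of FIG. 4 together, the terminal-only flow might read as in the sketch below. It reuses the extract_face_info, generate_fitting_info, and apply_fitting sketches above; scan_background_info, which returns the character's face slot as a Region, is an assumed helper.

```python
import cv2

def produce_character(image_path, character_img, character):
    face_slot = scan_background_info(character)       # S15 (assumed helper)
    faces = extract_face_info(image_path)             # S17-S19
    if not faces:                                     # S21: synthesis impossible;
        return None                                   # the rescan loop is omitted
    f = faces[0]
    image = cv2.imread(image_path)
    face_img = image[f.y:f.y + f.height, f.x:f.x + f.width]  # crop the face
    fitting = generate_fitting_info(f, face_slot)     # S23
    return apply_fitting(character_img, face_img, fitting)   # S25
```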

Referring to FIGS. 5A and 5B, when the character 501 according to an embodiment of the present invention is a person, the terminal controller 11 scans background information about the character and synthesizes it with face information extracted from image data selected by the user to generate a new character 503. At this time, the terminal control unit 11 checks information corresponding to the skin color of the person's face, the size of the neck, and the position or size of any spots.

Referring to FIGS. 5C and 5D, when the character 505 according to an embodiment of the present invention is an animal, the terminal controller 11 scans background information about the character and synthesizes it with face information extracted from image data selected by the user to generate a new character 507. At this time, the terminal control unit 11 scans background information corresponding to the animal's whiskers, mane, fur, body color, neck size, and the position or size of its spots.

Through this, facial features can be synthesized naturally, since all the features of the character and of the face to be synthesized are rendered as if real. In addition, a user interface is provided so that the user can perform character synthesis with simple operations. The synthesized result image can also be produced naturally by applying predetermined image processing and graphics techniques to the character and the image data, and keeping information on the character and the image data in a database increases the synthesis speed.

FIG. 6 is a diagram illustrating a data flow between a terminal and a service apparatus according to an embodiment of the present invention.

Referring to FIG. 6, in the data flow between the terminal 10 and the service apparatus 20 according to an embodiment of the present invention, the terminal 10 executes a character production mode for producing a character in step S31. The terminal 10 then selects a character according to a user's request and scans background information on the selected character (steps S33 to S35).

The terminal 10 extracts face information by recognizing a face included in the image data in step S37. Here, the terminal 10 may acquire the image data by photographing through the camera unit 17 or by downloading. Thereafter, the terminal 10 transmits the face information and the background information to the service device 20 in step S39.

When the face information of the image data and the background information of the character are received from the terminal 10 in step S41, the service device 20 generates fitting information by combining the face information and the background information. Here, the service device 20 generates the fitting information by matching the position value and size of the face information with the background information. Thereafter, the service device 20 transmits the generated fitting information to the terminal 10 in step S43.

In step S45, the terminal 10 receives the fitting information generated by matching the face information and the background information from the service apparatus 20, and synthesizes by applying the received fitting information to the character.

FIG. 7 is a flowchart illustrating an operation of a terminal interworking with a service apparatus according to an embodiment of the present invention.

Referring to FIG. 7, in the operation of the terminal 10 interworking with the service apparatus 20 according to an embodiment of the present invention, the terminal 10 executes a character production mode in step S51 and selects a character according to a user's request in step S53. Here, the character may be a person, an animal, a virtual character, or the like. When the character is selected, the terminal 10 scans the background information on the selected character in step S55. For example, the background information may be information such as a hairstyle, a costume style, and accessories applied to the character; the terminal 10 checks background information representing the characteristics of the character, including hair, clothes, and accessories. When the character is an animal, the background information may include its whiskers, mane, fur, body color, neck size, and the position or size of its spots.

The terminal 10 acquires image data in step S57, either by shooting through the camera unit 17 or by downloading. In step S59, the terminal 10 extracts face information by recognizing a face included in the acquired image data. Here, the terminal 10 analyzes the image data and extracts face information corresponding to the skin color of the face, the size of the neck, and the position or size of any spots. At this time, the terminal 10 identifies the face included in the image data and checks the position value and size of the face information corresponding to the pupils, eyes, chin, ears, lips, and eyebrows of the identified face.

In step S61, the terminal 10 transmits the face information and the background information to the service device 20, and in step S63 it determines whether fitting information has been received from the service device 20. When fitting information is received, the terminal 10 applies it to the character and synthesizes in step S65. That is, the terminal 10 may check the position value in the fitting information generated by the service device 20 and synthesize the character to correspond to the identified position value.
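
A hypothetical terminal-side exchange for steps S61 to S65 is sketched below; the endpoint URL, payload keys, and JSON transport are all assumptions, since the patent does not fix a wire format.

```python
import requests

payload = {
    "face_info": {"x": 120, "y": 80, "width": 96, "height": 96},         # S61
    "background_info": {"x": 40, "y": 30, "width": 128, "height": 128},
}
resp = requests.post("https://service.example/fitting", json=payload, timeout=10)
fitting_info = resp.json()   # S63-S65, e.g. {"scale": 1.33, "offset_x": 40, ...}
```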

FIG. 8 is a flowchart illustrating an operation of a service apparatus interoperating with a terminal according to an embodiment of the present invention.

Referring to FIG. 8, in the operation of the service apparatus 20 interoperating with the terminal 10 according to an embodiment of the present invention, the service apparatus 20 receives face information and background information for character production from the terminal 10 in step S71. When the face information and the background information are received, the service device 20 generates fitting information by combining them in step S73; here, the service device 20 may generate the fitting information by matching the position value and size of the face information with the background information. Thereafter, the service device 20 transmits the generated fitting information to the terminal 10 in step S75.
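
On the service side, steps S71 to S75 could be served by a handler like the sketch below (Flask is used purely for illustration; the framework, route, and payload format are assumptions):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/fitting")
def fitting():
    data = request.get_json()
    face = data["face_info"]          # position value and size (S71)
    slot = data["background_info"]    # character's face region (S71)
    scale = slot["width"] / face["width"]    # match size to the character (S73)
    return jsonify({"scale": scale,          # fitting information back (S75)
                    "offset_x": slot["x"],
                    "offset_y": slot["y"]})
```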

The content production method according to the present invention may be implemented in software readable by various computer means and recorded on a computer-readable recording medium. Here, the recording medium may include program commands, data files, data structures, and the like, alone or in combination. The program instructions recorded on the recording medium may be specially designed and constructed for the present invention, or may be of the kind well known and available to those skilled in the computer software arts. Examples of the recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as Compact Disk Read Only Memory (CD-ROM) and digital video disks (DVD); magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, random access memory (RAM), and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that may be executed by a computer using an interpreter or the like. Such a hardware device may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

It should be noted that the embodiments of the present invention disclosed in this specification and drawings are merely illustrative specific examples intended to aid understanding, and are not intended to limit the scope of the present invention. It will be apparent to those skilled in the art that modifications other than the embodiments disclosed herein are possible based on the technical idea of the present invention.

In the present invention, various types of user devices, such as mobile communication terminals, PMPs, PDAs, notebook computers, and MP3 players, can synthesize a user's face or another person's face onto a character when producing content. Through this, facial features can be synthesized naturally, since all the features of the character and of the face to be synthesized are rendered as if real. In addition, a user interface is provided so that the user can perform character synthesis with simple operations. The synthesized result image can also be produced naturally by applying predetermined image processing and graphics techniques to the character and the image data, and keeping information on the character and the image data in a database increases the synthesis speed.

10: terminal 11: terminal control unit 11a: face recognition module
11b: background recognition module 11c: character production module 12: input unit
13: Display unit 14: Terminal storage unit 14a: Image DB
14b: character DB 15: audio processor 16: terminal communication unit
17: camera unit 20: service device 21: service control unit
22: service storage unit 23: service communication unit 30: communication network
100: content creation system

Claims (11)

1. A content production system comprising:
a service device configured to receive face information and background information for at least one character from a terminal, generate fitting information by combining the face information and the background information, and transmit the generated fitting information to the terminal; and
a terminal configured to select at least one character according to a user's request, scan background information on the character, extract face information by recognizing at least one face included in image data, transmit the face information and the background information to the service device, receive from the service device the fitting information generated by matching the face information and the background information, and synthesize by applying the received fitting information to the character.
2. A terminal comprising:
a terminal communication unit configured to transmit and receive data related to content production with a service device;
a display unit configured to present on a screen data transmitted and received through the terminal communication unit; and
a terminal control unit configured to select at least one character according to a user's request, scan background information on the character, extract face information by recognizing at least one face included in image data, generate fitting information by combining the face information and the background information, and control synthesis by applying the fitting information to the selected character.
3. The terminal of claim 2, wherein the background information is information corresponding to at least one of a hairstyle, a clothing style, and accessories applied to the character.
4. The terminal of claim 2, wherein the terminal control unit transmits the face information and the background information to the service device, receives from the service device fitting information generated by matching the face information and the background information, and synthesizes by applying the received fitting information to the character.
5. The terminal of claim 2, wherein the fitting information is generated by matching the position value and size of the face information with the background information.
6. A service apparatus comprising:
a service communication unit configured to transmit and receive data related to content production with a terminal;
a service storage unit configured to store data for content production; and
a service control unit configured to receive face information and background information for at least one character from the terminal, generate fitting information by combining the face information and the background information, and transmit the generated fitting information to the terminal.
7. The service apparatus of claim 6, wherein the service communication unit, the service control unit, or the service storage unit is implemented as one or more servers operating on a cloud computing basis.
8. A content production method comprising:
selecting, by a terminal, at least one character according to a user's request;
scanning, by the terminal, background information on the character;
extracting, by the terminal, face information by recognizing at least one face included in image data;
generating, by the terminal, fitting information by combining the face information and the background information; and
synthesizing, by the terminal, by applying the fitting information to the selected character.
9. The method of claim 8, wherein, in the scanning, if the character is an animal, the terminal scans background information corresponding to at least one of the animal's whiskers, mane, fur, body color, neck size, and the position or size of its spots.
10. The method of claim 8, wherein, in the extracting, the terminal analyzes the image data to extract face information corresponding to at least one of the skin color of the face, the size of an ear, and the position or size of a spot.
11. The method of claim 8, further comprising:
selecting, by the terminal, at least one character according to a user's request;
scanning, by the terminal, background information on the character;
extracting, by the terminal, face information by recognizing at least one face included in the image data;
transmitting, by the terminal, the face information and the background information to a service device;
receiving, by the terminal, from the service device, fitting information generated by matching the face information with the background information; and
synthesizing by applying the received fitting information to the character.
KR1020110037038A 2011-04-21 2011-04-21 Method for producing contents, system thereof and terminal thereof KR20120119244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110037038A KR20120119244A (en) 2011-04-21 2011-04-21 Method for producing contents, system thereof and terminal thereof


Publications (1)

Publication Number Publication Date
KR20120119244A true KR20120119244A (en) 2012-10-31

Family

ID=47286534

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110037038A KR20120119244A (en) 2011-04-21 2011-04-21 Method for producing contents, system thereof and terminal thereof

Country Status (1)

Country Link
KR (1) KR20120119244A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787087A (en) * 2016-03-14 2016-07-20 腾讯科技(深圳)有限公司 Matching method and device for partners in costarring video
US10380427B2 (en) 2016-03-14 2019-08-13 Tencent Technology (Shenzhen) Company Limited Partner matching method in costarring video, terminal, and computer readable storage medium
US10628677B2 (en) 2016-03-14 2020-04-21 Tencent Technology (Shenzhen) Company Limited Partner matching method in costarring video, terminal, and computer readable storage medium
WO2018174311A1 (en) * 2017-03-22 2018-09-27 스노우 주식회사 Dynamic content providing method and system for facial recognition camera
US11017567B2 (en) 2017-03-22 2021-05-25 Snow Corporation Dynamic content providing method and system for face recognition camera
CN108520508A (en) * 2018-04-04 2018-09-11 掌阅科技股份有限公司 User image optimization method, computing device and storage medium based on user behavior
KR102198844B1 (en) * 2019-06-26 2021-01-05 서울대학교 산학협력단 the avatar mask up-loading service based on facial recognition

Legal Events

Date Code Title Description
N231 Notification of change of applicant
WITN Withdrawal due to no request for examination