AU2021269326B2 - Device and method for automatically creating cartoon image based on input sentence - Google Patents

Device and method for automatically creating cartoon image based on input sentence

Info

Publication number
AU2021269326B2
Authority
AU
Australia
Prior art keywords
sentence
character
processor
cartoon image
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2021269326A
Other versions
AU2021269326A1 (en)
Inventor
Ho Sop CHOI
Gyu Cheol Kim
Ho Young Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toonsquare Corp
Original Assignee
Toonsquare Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toonsquare Corp filed Critical Toonsquare Corp
Publication of AU2021269326A1 publication Critical patent/AU2021269326A1/en
Application granted granted Critical
Publication of AU2021269326B2 publication Critical patent/AU2021269326B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Machine Translation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a method for automatically creating a cartoon image based on an input sentence, performed by a device, the method including: recognizing a sentence; identifying word(s) included in the sentence; identifying a type of punctuation mark located at a start point and an end point of the sentence; determining the sentence as one of a general expression sentence, a dialogue expression sentence, and an emotional expression sentence based on the type of the identified punctuation mark; automatically creating a cartoon image based on a word included in the general expression sentence; understanding a subject of the dialogue expression sentence or the emotional expression sentence; and inserting the dialogue expression sentence or the emotional expression sentence in a form of a speech bubble on a character corresponding to the understood subject among at least one character in the cartoon image.

Description

DEVICE AND METHOD FOR AUTOMATICALLY CREATING CARTOON IMAGE BASED ON INPUT SENTENCE
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of priority from
Korean Patent Application No. 10-2021-0099674, filed on July
29, 2021 in the Korean Intellectual Property Office, the
disclosure of which is incorporated herein in its entirety by
reference.
BACKGROUND
1. Technical Field
[0002] The present disclosure relates to a device and method
for automatically creating a cartoon image based on an input
sentence.
2. Description of Related Art
[0003] Exports of the domestic content industry increased by
8.1% last year, exceeding $10 billion for the first time in
history. In particular, the global turnover of Korean webtoons
exceeded 1 trillion won for the first time, along with the
brisk entry of the webtoon platforms Naver and Kakao into
overseas markets. As such, many people of various age groups
are seeking out webtoons, and many visit academies to learn to
hand draw webtoons rather than just watching them. However,
it is not easy to hand draw a webtoon for those who are not
art majors or who have limited natural talent in drawing.
[0004] Therefore, in order to address such content creator limitations, conventionally, a character service technology has been provided in which a user directly selects pre-made facial feature icons (for example, hair, eyes, nose, mouth, facial contours, and facial expressions) and beauty style icons
(for example, hair style, accessories, and clothes), and
creates a character by combining the selected icons.
[0005] In such character service technology, in order to create
a cartoon image desired by a user, background setting, character
setting, and brightness setting need to be worked on one by one
in most cases.
[0006] In addition, such character service technology may cause
limitations when creating characters expressing various
motions. For example, when a character expresses a motion of
bending at the waist, a blank may occur depending on the
movement of the waist joints, and thus the character may be
expressed unnaturally.
[0007] Accordingly, there is a need for a method that can easily
create a cartoon image desired by a user without the user
having to manipulate cartoon image elements one by one.
[0008] In addition, when a character expressing various motions
is created, a method capable of creating a character enabling
natural motion by removing a blank generated according to the
movement of joint portions of the character is required.
[0008A] Any discussion of documents, acts, materials,
devices, articles or the like which has been included in the
present specification is not to be taken as an admission that
any or all of these matters form part of the prior art base or
were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each of the appended claims.
3. Related art document
[0009] Patent document
[0010] Korean Patent Application Publication No.
10-2020-0025062, March 10, 2020
SUMMARY
[0011] An aspect of the present disclosure is directed to
automatically creating a cartoon image according to a sentence
input by a user.
[0012] In addition, an aspect of the present disclosure is
directed to creating a joint-bridge in a blank area formed on
joint portions or connection portions of a character.
[0013] In addition, an aspect of the present disclosure is
directed to creating a joint-bridge by rotating a portion of
a character at a preset angle, based on a rotation axis.
[0014] The aspects of the present disclosure are not limited
to those mentioned above, and other aspects not mentioned
herein will be clearly understood by those skilled in the art
from the following description.
[0015] A method for automatically creating a cartoon image
based on at least one input sentence, performed by a device
including a communication unit, a memory, and a processor,
wherein the communication unit is adapted to obtain the at
least one sentence input through a website or application
provided to an external device according to the present
disclosure, may include: recognizing a sentence when the at least one sentence is input; identifying each of at least one word included in the recognized sentence; identifying a type of punctuation mark located at at least one of a start point and an end point of the recognized sentence; determining the sentence as any one of a general expression sentence, a dialogue expression sentence, and an emotional expression sentence based on the type of the identified punctuation mark; automatically creating a cartoon image based on a word included in the general expression sentence when the sentence is the general expression sentence, wherein the memory includes a plurality of processes for automatically creating the cartoon image based on the at least one input sentence, and wherein the cartoon image includes at least one character having at least one theme; understanding a subject of the dialogue expression sentence or the emotional expression sentence when the sentence is the dialogue expression sentence or the emotional expression sentence; and inserting the dialogue expression sentence or the emotional expression sentence in a form of a speech bubble on a character corresponding to the understood subject among at least one character in the cartoon image through the processor.
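For illustration only (not the claimed implementation), the following minimal Python sketch shows how the recited steps could be chained end to end; the helper names, the naive subject guess, and the simplified punctuation rules are assumptions introduced here.

```python
import re

def classify_sentence(sentence: str) -> str:
    """Roughly map a sentence to one of the three types named above."""
    s = sentence.strip()
    if s.startswith('"') and s.endswith('"'):
        return "dialogue"
    if s.startswith("'") and s.endswith("'"):
        return "emotional"
    if s.endswith("!") or s.endswith("?"):
        return "emotional"
    return "general"

def create_cartoon_cut(sentences: list[str]) -> dict:
    """Build a toy 'cut': characters from general sentences, plus speech
    bubbles attached to the most recently seen subject."""
    cut = {"characters": [], "speech_bubbles": []}
    last_subject = None
    for sentence in sentences:
        kind = classify_sentence(sentence)
        if kind == "general":
            match = re.match(r"\s*([A-Z][a-z]+)", sentence)  # naive subject guess
            if match:
                last_subject = match.group(1)
                cut["characters"].append(last_subject)
        else:
            cut["speech_bubbles"].append(
                {"speaker": last_subject, "text": sentence.strip("\"'"), "type": kind})
    return cut

print(create_cartoon_cut(["Sarah is sitting in the living room watching TV.",
                          '"I don\'t want to do anything"']))
```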
[0016] In the recognition of the sentence by the processor,
when the at least one sentence is a plurality of sentences, each
of the words included in each of the plurality of sentences is
identified, a sentence component of each of the identified
words is understood, and a correlation between the plurality
of sentences based on the understood sentence component is
understood, so that the plurality of sentences may be grouped
into at least one paragraph.
[0017] In the identification of the type of punctuation
mark, a type of the punctuation mark located at at least one
of a start point and an end point of at least one sentence
included in the grouped paragraph may be identified.
[0018] Next, in the determination of the sentence, when
the identified punctuation mark is located only at an end point,
the recognized sentence may be determined to be the general
expression sentence; when the type of the identified
punctuation mark is double quotation marks, the recognized
sentence may be determined to be the dialogue expression
sentence; and when the type of the identified punctuation mark
is single quotation marks, the recognized sentence may be
determined to be the emotional expression sentence.
[0019] In addition, the determination of the sentence may
further determine whether the sentence is an emotional
expression sentence based on the type of the identified
punctuation mark.
[0020] Accordingly, in the automatic creation, a face of a
character corresponding to the understood subject with respect
to the emotional expression sentence may be displayed by zooming
in at a preset magnification or by changing a shape of the face.
[0021] In addition, in the determination of the sentence, when
the identified punctuation mark is located at an end point and
the type of the identified punctuation mark is any one of an
exclamation mark and a question mark, the recognized sentence
may be determined to be the emotional expression sentence.
[0022] Alternatively, in the determination of the sentence,
when a preset emoticon or abbreviation is located in the
recognized sentence, the recognized sentence may be determined to be the emotional expression sentence.
[0023] Next, in the automatic creation, when the word
represents any one of a subject, an object, and a complement,
the character having a theme corresponding to the word may be
created; when the word represents a place, a background of the
cartoon image may be created based on the word; and when the
word represents time, brightness of the cartoon image may be
determined based on the word.
[0024] In addition, the automatic creation may determine
a verb related to the created character in the general
expression sentence, and create the character to represent a
motion corresponding to the determined verb.
[0025] In addition, the automatic creation may determine a size
and a location of the created character and a size and a
location of the speech bubble based on an object arrangement
algorithm.
[0026] When user's manipulation information for the cartoon
image is input, the object arrangement algorithm may build a
learning data set by matching the manipulation information
with the cartoon image, and be machine learned based on the
built learning data set.
[0027] In addition, a device includes a communication unit for
obtaining at least one sentence, and a processor according to
the present disclosure. The processor may be configured to:
recognize the at least one sentence; identify each of at least
one word included in the recognized sentence; identify a type
of punctuation mark located at at least one of a start point
and an end point of the recognized sentence; determine the
sentence as any one of a general expression sentence, a dialogue
expression sentence, and an emotional expression sentence based on the type of the identified punctuation mark; automatically create a cartoon image based on a word included in the general expression sentence when the sentence is the general expression sentence, wherein a memory includes a plurality of processes for automatically creating the cartoon image based on the at least one input sentence, and wherein the cartoon image is created to include at least one character having at least one theme; understand a subject of the dialogue expression sentence or the emotional expression sentence when the sentence is the dialogue expression sentence or the emotional expression sentence through the processor; and insert the dialogue expression sentence or the emotional expression sentence in a form of a speech bubble on a character corresponding to the understood subject among at least one character in the cartoon image.
[0028] When recognizing the sentence, when the at least one sentence is a plurality of sentences, the processor may identify each of words included in each of the plurality of sentences, understand a sentence component of each of the identified words, and understand a correlation between the plurality of sentences based on the understood sentence component, so that the plurality of sentences may be grouped into at least one paragraph. When identifying the type of punctuation mark, the processor may identify the type of the punctuation mark located at at least one of a start point and an end point of at least one sentence included in the grouped paragraph.
[0029] Next, when determining the sentence, in the case where
the identified punctuation mark is located only at an end
point, the processor may determine the recognized sentence to be
the general expression sentence; in the case where the type of the identified punctuation mark is double quotation marks, the processor may determine the recognized sentence to be the dialogue expression sentence; and in the case where the type of the identified punctuation mark is single quotation marks, the processor may determine the recognized sentence to be the emotional expression sentence.
[0030] In addition, when determining the sentence, the
processor may further determine whether the sentence is an
emotional expression sentence based on the type of the
identified punctuation mark. When automatically creating the
cartoon image, the processor may display a face of a character
corresponding to the understood subject with respect to the
emotional expression sentence by zooming in at a preset
magnification or by changing a shape of the face.
[0031] In addition, when determining the sentence, in the case
where the identified punctuation mark is located at an end
point and the type of the identified punctuation mark is any
one of an exclamation mark and a question mark, the
processor may determine the recognized sentence to be the
emotional expression sentence. Alternatively, in the case
where a preset emoticon or abbreviation is located in the
recognized sentence, the processor may determine the
recognized sentence to be the emotional expression sentence.
[0032] In addition, when automatically creating the cartoon
image, in the case where the word represents any one of a
subject, an object, and a complement, the character having a
theme corresponding to the word may be created; in the case
where the word represents a place, a background of the cartoon
image may be created based on the word; and in the case where
the word represents time, brightness of the cartoon image may be determined based on the word.
[0033] In addition, when automatically creating the cartoon
image, the processor may determine a verb related to the
created character in the general expression sentence, and
create the character to represent a motion corresponding to
the determined verb.
[0034] In addition, when automatically creating the cartoon
image, the processor may determine a size and a location of
the created character and a size and a location of
the speech bubble based on an object arrangement algorithm.
When user's manipulation information for the cartoon image is
input, the object arrangement algorithm may build a learning
data set by matching the manipulation information with the
cartoon image, and be machine learned based on the built
learning data set.
[0035] In addition, when automatically creating the
cartoon image, the processor may create and display central
joint-bridge data connecting joint portions separated into a
first element and a second element of the character when creating the
character, create and display first-direction joint-bridge
data connecting the first element and the second element of
the character in a first direction or second-direction joint-bridge data connecting the first element and the second element
of the character in a second direction, receive a selection of
a first element motion design or a second element motion design
corresponding to each of the central joint-bridge data, the
first-direction joint-bridge data, or the second-direction
joint-bridge data from a user terminal, and match the character to the selected motion design.
[0036] In addition, the joint-bridge may be disposed to
overlap a blank area between the first element and the second element of the character.
[0037] In addition, when creating and displaying the second-direction joint-bridge data, the processor may create and
display the first-direction joint-bridge data and the second-direction joint-bridge data by rotating the character based on
a rotation axis.
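Purely as an illustrative sketch of the geometric idea (the actual joint-bridge generation is not disclosed at this level of detail), the snippet below rotates one character element around a rotation axis by a preset angle and fills the blank between the two element edges with interpolated points; the coordinates, the edge representation, and the midpoint rule are assumptions.

```python
import math

Point = tuple[float, float]

def rotate(points: list[Point], pivot: Point, angle_deg: float) -> list[Point]:
    """Rotate the points of one character element around a rotation axis (pivot)."""
    a = math.radians(angle_deg)
    px, py = pivot
    return [
        (px + (x - px) * math.cos(a) - (y - py) * math.sin(a),
         py + (x - px) * math.sin(a) + (y - py) * math.cos(a))
        for x, y in points
    ]

def joint_bridge(first_edge: list[Point], second_edge: list[Point]) -> list[Point]:
    """Fill the blank between two element edges with midpoints (a crude 'bridge')."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(first_edge, second_edge)]

# Example: the upper-arm edge stays put; the forearm edge is rotated 30 degrees
# around the elbow, and bridge points are generated over the resulting gap.
elbow = (0.0, 0.0)
upper_arm_edge = [(-1.0, 0.2), (-1.0, -0.2)]
forearm_edge = rotate([(1.0, 0.2), (1.0, -0.2)], elbow, 30.0)
print(joint_bridge(upper_arm_edge, forearm_edge))
```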
[0038] In addition, another method for implementing the
present disclosure, another system, and a computer-readable
recording medium for recording a computer program for executing
the method may be further provided.
[0039] According to the present disclosure, by automatically
creating a cartoon image according to a sentence input by a user,
even a person without drawing skills can easily obtain a
desired cartoon image.
[0040] In addition, according to the present disclosure,
the movement of joint portions or connection portions of a
character is made natural by creating a joint-bridge in a blank
area formed on the joint portions or connection portions of
the character.
[0041] In addition, according to the present disclosure, it is
possible to express a character having a natural joint-bridge
by creating the joint-bridge by rotating a portion of the
character at a preset angle, based on a rotation axis.
[0042] The advantages of the present disclosure are not limited
to those mentioned above, and other advantages not mentioned
herein will be clearly understood by those skilled in the art
from the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0043] FIG. 1 is a diagram illustrating a device for
automatically creating a cartoon image based on an input
sentence according to the present disclosure.
[0044] FIGS. 2A to 2C are exemplary diagrams illustrating an
operation performed by a processor of a device for
automatically creating a cartoon image based on an input
sentence according to the present disclosure.
[0045] FIGS. 3A to 3L are diagrams illustrating a UI that automatically creates and provides a cartoon image based on an
input sentence according to the present disclosure.
[0046] FIG. 4 is a flowchart illustrating a process of
automatically creating a cartoon image based on an input
sentence according to the present disclosure.
[0047] FIG. 5 is a flowchart illustrating a process of creating
a dynamically variable character according to the present
disclosure.
[0048] FIG. 6 is a flowchart illustrating a process of creating
central joint-bridge data according to the present disclosure.
[0049] FIG. 7 is an exemplary diagram illustrating that a user
selects a desired character from among a plurality of
characters of a character creation service page according to
the present disclosure.
[0050] FIG. 8 is an exemplary diagram illustrating that a
character selected by a user according to the present disclosure
is shown on a character creation screen.
[0051] FIG. 9 is an exemplary view illustrating upper body-centered joint-bridge data and lower body-centered joint-bridge
data of a character selected by a user according to the present
disclosure.
[0052] FIG. 10 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data according to the present disclosure.
[0053] FIG. 11 is a flowchart illustrating a process of
creating first-direction joint-bridge data or second-direction
joint-bridge data according to the present disclosure.
[0054] FIG. 12 is an exemplary view illustrating rotating a character at a preset angle, based on a rotation axis
according to the present disclosure.
[0055] FIG. 13 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data created when a character is tilted from the front to the
right and left according to the present disclosure.
[0056] FIG. 14 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data that are created when a character is tilted from the side
to the right and left according to the present disclosure.
[0057] FIG. 15 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data that are created when a character is tilted from the rear
to the right and left according to the present disclosure.
[0058] FIG. 16 is a flowchart illustrating a process of
matching a character after a first element motion design or a
second element motion design according to the present
disclosure is chosen.
[0059] FIG. 17 is an exemplary diagram illustrating a plurality
of upper body motion designs according to the present disclosure.
[0060] FIG. 18 is an exemplary diagram illustrating a plurality
of lower body motion designs according to the present disclosure.
[0061] FIG. 19 is an exemplary diagram illustrating a plurality of upper and lower body motion designs according to the present disclosure.
[0062] FIG. 20 is an exemplary diagram illustrating a
plurality of left arm motion designs according to the present disclosure.
[0063] FIG. 21 is an exemplary diagram illustrating a plurality
of leg motion designs according to the present disclosure.
DETAILED DESCRIPTION
[0064] The advantages and features of the present
disclosure and methods of achieving them will be apparent from
the embodiments that will be described in detail with reference
to the accompanying drawings. It should be noted, however,
that the present disclosure is not limited to the following
embodiments, and may be implemented in various different forms.
Rather the embodiments are provided so that this disclosure
will be thorough and complete and will fully convey the scope
of the present disclosure to those skilled in the technical
field to which the present disclosure pertains, and the present
disclosure will only be defined by the appended claims.
[0065] Terms used in the specification are used to
describe embodiments of the present disclosure and are not
intended to limit the scope of the present disclosure. In the
specification, the terms of a singular form may include plural
forms unless otherwise specified. The expressions "comprise"
and/or "comprising" used herein indicate the presence of stated
elements but do not exclude the presence or addition of one or
more other elements. Like reference
denotations refer to like elements throughout the
specification. As used herein, the term "and/or" includes each and all combinations of one or more of the mentioned elements.
It will be understood that, although the terms "first," "second," etc., may be used herein to describe various elements,
these elements should not be limited by these terms. These
terms are only used to distinguish one element from another
element. Accordingly, a first element mentioned below could be
termed a second element without departing from the technical
ideas of the present disclosure.
[0066] Unless otherwise defined, all terms (including
technical and scientific terms) used herein have the same
meaning as commonly understood by those skilled in the
technical field to which the present disclosure pertains. It
will be further understood that terms, such as those defined
in commonly used dictionaries, should not be interpreted in an
idealized or overly formal sense unless expressly so defined
herein.
[0067] Hereinafter, the embodiments of the present disclosure
will be described in detail with reference to the accompanying
drawings.
[0068] FIG. 1 is a diagram illustrating a device 10 for
automatically creating a cartoon image based on an input
sentence according to the present disclosure.
[0069] Hereinafter, the device 10 for automatically creating
a cartoon image based on an input sentence according to the
present disclosure will be described with reference to FIG. 1.
[0070] When at least one sentence is input, the device 10 may
recognize the sentence and identify each of at least one word
included in the recognized sentence.
[0071] In addition, the device 10 may identify a type of
punctuation mark located at at least one of a start point and an end point of the recognized sentence, and determine the sentence as any one of a general expression sentence, a dialogue expression sentence, and an emotional expression sentence based on the type of the identified punctuation mark.
[0072] The device 10 may automatically create a cartoon image
based on a word included in the general expression sentence
when the sentence is the general expression sentence.
[0073] The device 10 may understand a subject of the dialogue
expression sentence or the emotional expression sentence when
the sentence is the dialogue expression sentence or the
emotional expression sentence.
[0074] The device 10 may insert the dialogue expression
sentence or the emotional expression sentence in a form of a
speech bubble on the character corresponding to the understood
subject among at least one character in the cartoon image.
[0075] The device 10 automatically creates a cartoon image
according to a sentence input by a user, so that even a person
without drawing skills can easily obtain a desired cartoon
image.
[0076] In other words, the device 10 may analyze a character,
a place, a situation, an atmosphere, and an emotion through the
words in the input sentence, and may convert them into various
objects (person, animal, background, pose, facial expression)
of the cartoon image (corresponding cut).
[0077] The device 10 may include all of a variety of devices
capable of providing a result to a user by performing arithmetic
processing.
[0078] In other words, the device 10 may be in a form
including at least one of a computer, a server, a mobile terminal, and a smart phone.
[0079] The device 10 may be in the form of a computer.
More specifically, the computer may include all of a variety
of devices capable of providing a result to a user by
performing arithmetic processing.
[0080] For example, the computer may be a smart phone, a tablet
PC, a cellular phone, a mobile terminal of a personal
communication service phone (PCS phone), a
synchronous/asynchronous international mobile
telecommunication-2000 (IMT-2000), a palm personal computer
(palm PC), and a personal digital assistant (PDA) or the like,
as well as a desktop PC or a laptop computer. In addition,
when a head mounted display (HMD) device includes a computing
function, the HMD device may be a computer.
[0081] In addition, the computer may correspond to a server
that receives a request from a client and performs information
processing.
[0082] The device 10 may include a communication unit 110,
a memory 120, and a processor 130. Here, the device 10 may
include fewer or more components than those illustrated in FIG.
1.
[0083] The communication unit 110 may include one or more
modules that enable wireless communication between the device
10 and an external device (not shown), between the device 10
and an external server (not shown), or between the device 10
and a communication network (not shown).
[0084] The external device (not shown) may be a user
terminal. The user terminal may be any one of digital devices
including a display unit, an input unit, and a communication
function, such as a cellular phone, a smart phone, a PDA, a
portable multimedia player (PMP), a tablet PC, a personal computer (for example, a desktop computer or a notebook computer), a workstation, or a web pad.
[0085] In addition, when the external server (not shown)
receives, from the external device (not shown), a request to
download an application of the service that automatically
creates a cartoon image based on an input sentence, user
authentication may be performed. When user authentication is
completed, the external server (not shown) may transmit the
requested application of the service to the external device
(not shown).
[0086] In addition, a communication network (not shown)
may transmit/receive various information between the device
10, the external device (not shown), and an external server
(not shown). For the communication network, various types of
communication networks may be used. For example, the
communication network may use wireless communication methods,
such as wireless LAN (WLAN), Wi-Fi, Wibro, Wimax and High
Speed Downlink Packet Access (HSDPA) methods, or wired
communication methods, such as Ethernet, xDSL (ADSL and VDSL),
Hybrid Fiber Coax (HFC), Fiber To The Curb (FTTC) and Fiber To
The Home (FTTH) methods.
[0087] The communication network (not shown) is not limited
to the communication methods presented above, and may include
all types of communication methods widely known or to be
developed in the future in addition to the above communication
methods.
[0088] In addition, the communication unit 110 may include
one or more modules for connecting the device 10 to one or more networks.
[0089] The communication unit 110 may obtain at least one
sentence. In more detail, the communication unit 110 may obtain the at least one sentence input through a website or application provided to an external device (not shown), for example, a user terminal (not shown).
[0090] The website or application may provide a service
for automatically creating a cartoon image based on an input
sentence.
[0091] The memory 120 may store data supporting various
functions of the device 10. The memory 120 may store a
plurality of application programs (or applications) driven in
the device 10, data for operation of the device 10, and
commands. At least some of these applications may exist for
basic functions of the device 10. The application program may
be stored in the memory 120, installed on the device 10, and
driven by the processor 130 to perform an operation (or
function) of the device 10.
[0092] In addition, the memory 120 may include a plurality
of processes for automatically creating a cartoon image based
on an input sentence according to the present disclosure. The
plurality of processes will be described later when an
operation of the processor 130 is described.
[0093] In addition to the operation related to the application
program, the processor 130 may generally control the overall
operation of the device 10. The processor 130 may provide or
process appropriate information or functions to a user by
processing signals, data, and information input or
output through the above-described components or by driving an application program stored in the memory 120.
[0094] In addition, the processor 130 may control at least some
of the components discussed with reference to FIG. 1 in order
to drive an application program stored in the memory 120.
Furthermore, in order to drive the application program, the
processor 130 may operate at least two or more of the
components included in the device 10 in combination with each
other.
[0095] In addition, the processor 130 may provide a website
or application for automatically creating a cartoon image based
on an input sentence.
[0096] Accordingly, an external device (not shown), for
example, a user terminal (not shown) may use a website provided
by the device 10 through the Internet or download the
application provided by the device 10 from the external server
(not shown), for example, a download server, for use.
[0097] When at least one sentence is input, the processor
130 may recognize the sentence.
[0098] When at least one sentence is input through a user
interface (UI) of the website or the application based on a
first process among a plurality of processes, the processor
130 may recognize the sentence.
[0099] When the at least one sentence is a plurality of
sentences, the processor 130 may identify each of the words
included in each of the plurality of sentences, and understand
a sentence component of each of the identified words.
[00100] Here, the sentence component may be an element required
to form a single sentence. In other words, the sentence
component may be an element that plays a certain role in
composing the sentence.
[00101] For example, the sentence component may include a
main component, a subcomponent, and an independent component.
The main component may include a subject, a predicate, an
object, and a complement. The subcomponent may include an adjective and an adverb. The independent component may include an independent word.
[00102] In addition, the processor 130 may group the plurality
of sentences into at least one paragraph by determining the
correlation between the plurality of sentences based on the
understood sentence components.
[00103] Accordingly, the processor 130 may automatically group
the paragraph based on the sentence components even when a
plurality of sentences, such as article contents or book
contents, are input through the UI, thereby easily converting
contents that are difficult to comprehend as plain sentences
into cartoon images to be provided to a user.
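As a rough illustration only, the sketch below groups consecutive sentences into paragraphs whenever a crude subject guess changes; the regular-expression heuristic stands in for the sentence-component analysis described above and is an assumption, not the disclosed method.

```python
import re
from typing import Optional

def guess_subject(sentence: str) -> Optional[str]:
    """Crude stand-in for sentence-component analysis: take the first
    capitalized word of the sentence as its subject."""
    match = re.match(r"\s*[\"']?([A-Z][a-z]+)", sentence)
    return match.group(1) if match else None

def group_into_paragraphs(sentences: list[str]) -> list[list[str]]:
    """Start a new paragraph whenever the guessed subject changes."""
    paragraphs: list[list[str]] = []
    current_subject = None
    for sentence in sentences:
        subject = guess_subject(sentence) or current_subject
        if not paragraphs or subject != current_subject:
            paragraphs.append([])
            current_subject = subject
        paragraphs[-1].append(sentence)
    return paragraphs

print(group_into_paragraphs([
    "Chulsoo is sitting on the sofa watching TV.",
    "Chulsoo looks bored.",
    "Sarah walks into the living room.",
]))
```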
[00104] The processor 130 may identify each of at least oneword
included in the recognized sentence.
[00105] The processor 130 may identify each of the at leastone
word included in the recognized sentence based on the second
process among the plurality of processes.
[00106] As an example, when the sentence 'John takes a fast walk
along the trail' is recognized, each of at least one word included
in the sentence, namely 'John', 'trail', 'fast', and 'takes a
walk', may be identified.
[00107] In other words, the at least one word may be
classified into a person, a place, an element, an effect, and
a time, and the processor 130 may create a cartoon image based
on the classified word.
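For illustration, a minimal sketch of this word classification step is shown below; the keyword sets are hypothetical placeholders for whatever lexicon or NLP model an actual implementation would use.

```python
# Hypothetical keyword sets; a real system would rely on NLP rather than fixed lists.
PERSON_WORDS = {"John", "Sarah", "Chulsoo"}
PLACE_WORDS = {"trail", "living room", "Han River Park"}
TIME_WORDS = {"1:00 PM", "10:00 PM", "morning", "night"}
EFFECT_WORDS = {"fast", "takes a walk"}

def classify_words(words: list[str]) -> dict[str, str]:
    """Assign each identified word to person, place, time, effect, or element."""
    categories = {}
    for word in words:
        if word in PERSON_WORDS:
            categories[word] = "person"
        elif word in PLACE_WORDS:
            categories[word] = "place"
        elif word in TIME_WORDS:
            categories[word] = "time"
        elif word in EFFECT_WORDS:
            categories[word] = "effect"
        else:
            categories[word] = "element"
    return categories

# Words identified from 'John takes a fast walk along the trail'.
print(classify_words(["John", "trail", "fast", "takes a walk"]))
```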
[00108] The processor 130 may identify a type of punctuation
mark located at at least one of a start point and an end point
of the recognized sentence.
[00109] The processor 130 may identify a type of punctuation
mark located at at least one of a start point and an end point of the recognized sentence based on a third process among a plurality of processes.
[00110] For example, the processor 130 may identify double
quotation marks or single quotation marks located at a start
point and an end point of the recognized sentence, and identify
at least one of a period, an exclamation point, a question
mark, and an ellipsis, located at the end point of the
recognized sentence.
[00111] In addition, when the at least one sentence is a
plurality of sentences, the processor 130 may identify a type
of punctuation mark located at at least one of a start point
and an end point of at least one sentence included in the
grouped paragraph.
[00112] The processor 130 may determine the type of the
sentence based on the type of the identified punctuation mark.
[00113] The processor 130 may determine the sentence as any one
of a general expression sentence, a dialogue expression
sentence, and an emotional expression sentence based on the
type of the identified punctuation mark based on the fourth
process among the plurality of processes.
[00114] Specifically, when the identified punctuation mark
is located only at an end point, the processor 130 may
determine the recognized sentence as the general expression
sentence.
[00115] For example, when the period, which is the
identified punctuation mark, of the first sentence (for example,
Chulsoo is sitting on the sofa watching TV.) is located only
at an end point, the processor 130 may determine the
first sentence as the general expression sentence.
[00116] In addition, when the type of the identified punctuation mark is double quotation marks, the processor 130 may determine the recognized sentence as the dialogue expression sentence.
[00117] For example, when the type of the identified
punctuation mark of the second sentence (for example, "Isn't
the weather too hot today?") is a double quotation mark, the
processor 130 may determine the second sentence as the dialogue
expression sentence.
[00118] In addition, when the type of the identified
punctuation mark is a single quotation mark, the processor 130
may determine the recognized sentence as an emotional
expression sentence.
[00119] For example, when the type of the identified
punctuation mark of the third sentence (for example, 'I'm
bored') is a single quotation mark, the processor 130 may
determine the third sentence as an emotional expression
sentence.
[00120] In addition, when the identified punctuation mark
is located at an end point and the type of the identified
punctuation mark is any one of an exclamation mark and a
question mark, the processor 130 may determine the recognized
sentence as an emotional expression sentence.
[00121] For example, since the identified punctuation mark
of the fourth sentence (for example, hungry!!!) is located at
the end point, and the type of the identified punctuation mark
is an exclamation mark, the processor 130 may determine the
fourth sentence as an emotional expression sentence.
[00122] Alternatively, when a preset emoticon or abbreviation
is located in the recognized sentence, the processor 130 may
determine the recognized sentence as an emotional expression sentence.
[00123] For example, the processor 130 may determine the fifth
sentence as an emotional expression sentence because the
first emoticon (for example, TT) is located in the fifth
sentence (for example, I am tired TT).
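The punctuation rules of paragraphs [00114] to [00123] can be summarized, for illustration only, by the following sketch; the set of preset emoticons and abbreviations is an assumed example and not part of the disclosure.

```python
EMOTION_MARKERS = {"TT", "lol", "omg"}  # assumed examples of preset emoticons/abbreviations

def determine_sentence_type(sentence: str) -> str:
    s = sentence.strip()
    # Double quotation marks at both ends -> dialogue expression sentence.
    if s.startswith('"') and s.endswith('"'):
        return "dialogue expression sentence"
    # Single quotation marks at both ends -> emotional expression sentence.
    if s.startswith("'") and s.endswith("'"):
        return "emotional expression sentence"
    # Exclamation or question mark at the end point -> emotional expression sentence.
    if s.rstrip("!?") != s:
        return "emotional expression sentence"
    # Preset emoticon or abbreviation anywhere in the sentence -> emotional.
    if any(marker in s for marker in EMOTION_MARKERS):
        return "emotional expression sentence"
    # Punctuation only at the end point (e.g. a period) -> general expression sentence.
    return "general expression sentence"

for example in ["Chulsoo is sitting on the sofa watching TV.",
                '"Isn\'t the weather too hot today?"',
                "'I'm bored'",
                "hungry!!!",
                "I am tired TT"]:
    print(determine_sentence_type(example))
```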
[00124] In addition, when the at least one sentence is a
plurality of sentences, the processor 130 may use an artificial
intelligence model to extract, for at least one sentence
included last in the grouped paragraph, a specific sentence in
which the subject among the sentence components is changed, or
a specific sentence in which at least one of a predicate, an
adjective, and an adverb among the sentence components includes
an opposite word.
[00125] The processor 130 may create the last cartoon image
from the extracted specific sentence.
[00126] Accordingly, the processor 130 may create the last
cartoon image among the at least one cartoon image from the
specific sentence, thereby providing a reversal and adding
curiosity about the subsequent cartoon.
[00127] The artificial intelligence model may include a
Recurrent Neural Network (RNN).
[00128] Such an RNN is a deep learning technique that is
effective for learning the sequence through a structure in
which a specific part is repeated, and allows the state value
of the previous state to enter the input of the next
computation and affect the result (this is because when
recognizing words, sentences, and images, it is necessary to
refer to the preceding words, letters, and frames to recognize
them).
[00129] In addition, the RNN may be mainly used to recognize sequential information such as speech and letters.
[00130] However, the joint prediction model according to
the present disclosure is not limited to CNN and RNN, and may
be formed of neural networks having various structures.
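As a toy illustration of the recurrence described in paragraph [00128] (not the model actually used), the sketch below shows how the previous state value enters the next computation; the weights and dimensions are placeholders.

```python
import math

def rnn_step(prev_state: float, x: float,
             w_state: float = 0.5, w_input: float = 1.0) -> float:
    """One recurrent step: the previous state feeds into the next computation."""
    return math.tanh(w_state * prev_state + w_input * x)

def run_rnn(inputs: list[float]) -> list[float]:
    state = 0.0
    states = []
    for x in inputs:            # earlier inputs influence every later state
        state = rnn_step(state, x)
        states.append(state)
    return states

print(run_rnn([0.2, 0.7, -0.1]))  # each value depends on all preceding inputs
```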
[00131] When the sentence is the general expression
sentence, the processor 130 may automatically create a cartoon
image based on the general expression sentence.
[00132] When the sentence is the general expression sentence
based on a fifth process among a plurality of processes, the
processor 130 may automatically create a cartoon image based on
a word included in the general expression sentence.
[00133] The cartoon image may include at least one character
having at least one theme.
[00134] Specifically, when the word represents any one of
a subject, an object, and a complement, the processor 130 may
create the character having a theme corresponding to the word.
[00135] In addition, when the word represents a place, the
processor 130 may create a background of the cartoon image
based on the word.
[00136] Specifically, when the word represents a place (or
location), the processor 130 may display an image representing
the place as the background of the cartoon image.
[00137] In more detail, the processor 130 may search for
images related to the place text keyword on the web through a
preset web browser or a search function, and select the image
that most closely matches the place among the searched images.
[00138] In addition, the processor 130 may display, as the
background, an image whose location information, included in
the meta information of the searched images, matches or is
closest to the location of the place matching the word.
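For illustration only, the following sketch scores hypothetical search results by keyword match and by distance between an image's metadata location and the place location, in the spirit of paragraphs [00137] and [00138]; the candidate list, tags, coordinates, and scoring rule are assumptions (a real system would query a web search service).

```python
import math

# Hypothetical search results: (image name, tags, (latitude, longitude)).
CANDIDATES = [
    ("han_river_park_day.png", {"Han River Park", "park", "river"}, (37.518, 126.995)),
    ("city_street.png", {"street", "city"}, (37.498, 127.027)),
]

def pick_background(place_word: str, place_location: tuple[float, float]) -> str:
    """Prefer images tagged with the place word; break ties by metadata location."""
    def score(candidate):
        name, tags, (lat, lon) = candidate
        keyword_match = 1.0 if place_word in tags else 0.0
        distance = math.dist((lat, lon), place_location)
        return (keyword_match, -distance)   # closer location wins among matches
    return max(CANDIDATES, key=score)[0]

print(pick_background("Han River Park", (37.519, 126.994)))
```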
[00139] In addition, when two or more words represent
respective places, the processor 130 may first determine
whether the relationship between the two places represents a
path along which movement from one place to the other is
required.
[00140] As an example, the processor 130 may determine whether
the relationship between the two places represents the path when
Gangnam Station Exit 1 and the Kukkiwon Intersection in
the example sentence (~ from Gangnam Station Exit 1 to the
Kukkiwon Intersection ~) represent the respective places.
[00141] Then, as a result of the determination, when the
relationship between the two places represents the path, the
processor 130 may search for a moving path to the two places,
create a path image representing the searched moving path, and
display it as a background.
[00142] In addition, the processor 130 may search for first
and second images representing each of the two places, first
display the image of the place closest to the user's current
location as a background, and then display the image of the
other place in a slideshow after a preset time.
[00143] Here, the current location of the user may be determined
based on a GPS signal of a user terminal (not shown) used by
the user or a current location directly input by the user.
[00144] Alternatively, after dividing the background of the
cartoon image into first and second areas, the processor 130
may sequentially display the first image and the second image
in each of the divided areas.
[00145] Alternatively, the processor 130 may preferentially
display an image of a place with or without a visit history among two places as a background based on location information collected for a preset period whenever the user terminal (not shown) moves.
[00146] In addition, when the word represents time, the
processor 130 may determine the brightness of the cartoon image
based on the word.
[00147] Specifically, when the word represents a specific time,
the processor 130 may display the image with the brightness
corresponding to the specific time at the location
corresponding to the background image.
[00148] For example, when the location corresponding to the
background image is the Han River Park, the processor 130 makes
the brightness the brightest (or darkest) when the specific
time is 1:00 PM, and makes the brightness the darkest (or
brightest) when the specific time is 10:00 PM.
[00149] In addition, when two or more words each represent
time, the processor 130 may display the image with a first
brightness corresponding to the first time of the first word
closest to the current time.
[00150] Then, after the first time has elapsed, the processor
130 may change the image from the first brightness to the
second brightness corresponding to the second time of the
second word.
[00151] In addition, when two or more words represent a
specific time and a specific place, since this is most closely
related to a schedule, the processor 130 may display the
cartoon image representing the specific time and specific
place as the background of the corresponding date of
the calendar application.
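As an illustrative sketch of the time-to-brightness behavior described in paragraphs [00146] to [00150] (the actual mapping is not disclosed), the snippet below assumes a simple hour-based brightness curve and picks the time word closest to the current time first.

```python
from datetime import datetime

def brightness_for_hour(hour: int) -> float:
    """Assumed mapping: brightest around 13:00, darkest late at night."""
    return max(0.1, 1.0 - abs(hour - 13) / 12.0)

def background_brightness(time_words: list[int], now: datetime) -> float:
    """With several time words, use the one closest to the current time first."""
    first = min(time_words, key=lambda h: abs(h - now.hour))
    return brightness_for_hour(first)

print(brightness_for_hour(13))   # 1.0, brightest (e.g. 1:00 PM at Han River Park)
print(brightness_for_hour(22))   # 0.25, darker (e.g. 10:00 PM)
print(background_brightness([13, 22], datetime(2021, 7, 29, 12, 0)))
```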
[00152] In addition, the processor 130 may determine a verb related to the created character in the general expression sentence, and create the character to represent a motion corresponding to the determined verb.
[00153] In addition, the processor 130 may determine the
size and location of the created character and the size and
location of the speech bubble based on the object arrangement
algorithm.
[00154] In addition, the processor 130 may determine a size,
location, arrangement, and overlapping degree of an object so
as to conform to the intention of the writer of the input
sentence by using the object arrangement algorithm.
[00155] When the user's manipulation information for the
cartoon image is input, the object arrangement algorithm builds
a learning data set by matching the manipulation information
with the cartoon image, and is machine learned based on the
built learning data set.
[00156] Accordingly, the processor 130 may apply the object
arrangement algorithm to each user.
[00157] In addition, the processor 130 may build a learning data
set of the object arrangement algorithm according to the user's
needs by matching the manipulation information with the cartoon
image, so that, due to the accumulation of the data set, a cartoon
image in a direction desired by the user may be directly
created in the future.
[00158] The manipulation information may include at least
one of first information, second information, third
information, and fourth information.
[00159] The first information is information about the size and
location change of the character, left-right inversion, and
up-and-down inversion; the second information is information about the change in the expression of the character and the change in the motion of the character; the third information is information about the addition or deletion of the character or the object and the change of the background; and the fourth information may be information about the change of the perspective of the character or the object.
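For illustration, the sketch below shows one way a learning data set could be built by matching each piece of manipulation information (the first to fourth information above) with the corresponding cartoon image; the record fields and identifiers are assumptions, and the machine-learning step itself is omitted.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingExample:
    cartoon_image_id: str
    manipulation: dict            # e.g. size/location change, expression change, ...

@dataclass
class ObjectArrangementDataset:
    examples: list[TrainingExample] = field(default_factory=list)

    def record(self, cartoon_image_id: str, manipulation: dict) -> None:
        """Match the user's manipulation information with the cartoon image."""
        self.examples.append(TrainingExample(cartoon_image_id, manipulation))

dataset = ObjectArrangementDataset()
dataset.record("cut_001", {"type": "first_information",
                           "character_scale": 1.2, "flip_horizontal": True})
dataset.record("cut_001", {"type": "second_information",
                           "expression": "smile"})
print(len(dataset.examples), "examples collected for (re)training")
```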
[00160] The processor 130 may receive the dialogue of the
speech bubble and correction information of the object from
the user with respect to the cartoon image arranged through
the object arrangement algorithm.
[00161] The correction information is again utilized as
learning data for advancing the object arrangement algorithm.
Even when the same text is written next time, the expression of
the object or the atmosphere of the cut may be slightly
different.
[00162] In addition, the processor 130 may continuously advance
the object arrangement algorithm by collecting previously
published professional webtoons and learning the composition
of a cut composed by an actual professional writer.
[00163] In addition, the processor 130 may understand a subject for an
emotional expression sentence and display the face of the
character corresponding to the understood subject by zooming in
at a preset magnification or by changing the shape of the face.
[00164] For example, the processor 130 may change the shape of
the face to a slight or wide smile depending on the degree
of happy emotion, or change the degree of a frowning face
according to the degree of bad emotion.
[00165] The processor 130 may understand a subject of the
dialogue expression sentence or the emotional expression
sentence.
[00166] When the sentence is the dialogue expression sentence
or the emotional expression sentence based on a sixth process
among a plurality of processes, the processor 130 may understand
the subject of the dialogue expression sentence or the emotional
expression sentence.
[00167] The processor 130 may insert the dialogue expression
sentence or the emotional expression sentence in the form of
a speech bubble on the character corresponding to the understood
subject.
[00168] The processor 130 may insert the dialogue
expression sentence or the emotional expression sentence in the
form of a speech bubble on the character corresponding to the
understood subject among at least one character in the cartoon
image based on a seventh process among a plurality of processes.
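As a minimal, assumption-laden sketch of this speech bubble insertion step (the character and layout structures below are invented for illustration), the snippet attaches the dialogue or emotional expression sentence to the character whose name matches the understood subject.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    x: int
    y: int
    speech_bubbles: list[str] = field(default_factory=list)

def insert_speech_bubble(characters: list[Character], subject: str, sentence: str) -> None:
    """Attach the sentence to the character whose name matches the understood subject."""
    for character in characters:
        if character.name == subject:
            character.speech_bubbles.append(sentence.strip('"\''))
            return
    # If no character matches, fall back to the first character in the cut.
    if characters:
        characters[0].speech_bubbles.append(sentence.strip('"\''))

cut = [Character("Sarah", x=120, y=80)]
insert_speech_bubble(cut, "Sarah", '"I don\'t want to do anything"')
print(cut[0].speech_bubbles)
```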
[00169] Additionally, when the word represents a place (or
location), the processor 130 may display an image representing
the place as the background of the cartoon image.
[00170] In more detail, the processor 130 may search for
images related to the place text keyword on the web through a
preset web browser or a search function, and select the image
that most closely matches the place among the searched images.
[00171] In addition, the processor 130 may display, as the
background, an image whose location information, included in
the meta information of the searched images, matches or is
closest to the location of the place matching the word.
[00172] In addition, when two or more words represent
respective places, the processor 130 may first determine
whether the relationship between the two places represents a
path along which movement from one place to the other is
required.
[00173] As an example, the processor 130 may determine whether
the relationship between the two places represents the path when
Gangnam Station Exit 1 and the Kukkiwon Intersection in
the example sentence (~ from Gangnam Station Exit 1 to the
Kukkiwon Intersection ~) represent the respective places.
[00174] Then, as a result of the determination, when the
relationship between the two places represents the path, the
processor 130 may search for a moving path to the two places,
create a path image representing the searched moving path, and
display it as a background.
[00175] In more detail, the processor 130 may search for
first and second images representing each of the two places,
first display the image of the place closest to the user's current
location as a background, and then display the image of the
other place in a slideshow after a preset time.
[00176] Alternatively, after dividing the background of the
cartoon image into first and second areas, the processor 130
may sequentially display the first image and the second image
in each of the divided areas.
[00177] When the word represents a specific time, the processor
130 may display the image with the brightness corresponding to
the specific time at the location corresponding to the
background image.
[00178] For example, when the location corresponding to the
background image is the Han River Park, the processor 130 makes
the brightness the brightest (or darkest) when the specific
time is 1:00 PM, and makes the brightness the darkest (or
brightest) when the specific time is 10:00 PM.
[00179] In addition, when two or more words each represent time, the processor 130 may display the image with a first brightness corresponding to the first time of the first word closest to the current time.
[00180] Then, after the first time has elapsed, the
processor 130 may change the image from the first brightness
to the second brightness corresponding to the second time of the second word.
[00181] In addition, when two or more words represent a
specific time and a specific place, since this is most closely
related to a schedule, the processor 130 may display the
cartoon image representing the specific time and specific
place as the background of the corresponding date of
the calendar application.
[00182] FIGS. 2A to 2C are exemplary diagrams illustrating an
operation performed by the processor 130 of the device for
automatically creating a cartoon image based on an input
sentence according to the present disclosure.
[00183] Referring to FIG. 2A, the processor 130 may provide a
user with a UI provided by the website or application through a
user terminal (not shown).
[00184] The UI may include a first area 201 in which a plurality
of icons having a plurality of functions are located, a second
area 202 in which the cartoon image is displayed, and a third
area 203 in which a page preview of the cartoon image and a
preview icon are located to sequentially view the cartoon image.
[00185] When a user inputs the AI button displayed on the UI,
the processor 130 may switch the screen of FIG. 2A to the screen
of FIG. 2B.
[00186] Referring to FIG. 2B, when two sentences are input, the
processor 130 may recognize the two sentences (for example, first
sentence: Sarah is sitting in the living room watching TV.
Second sentence: "I don't want to do anything"), and identify
each of at least one word included in the two
sentences.
[00187] In other words, the at least one word may be classified
into a person, a place, an element, an effect, and a time, and the processor 130 may create a cartoon image based on the classified word.
[00188] In addition, the processor 130 may determine the two
sentences as one of a general expression sentence, a dialogue
expression sentence, and an emotional expression sentence
based on the type of the identified punctuation mark.
[00189] Specifically, the processor 130 may determine the first
sentence as the general expression sentence because a period
of the punctuation mark of the first sentence is located at the
end point.
[00190] In addition, when the type of the punctuation mark of
the second sentence is double quotation marks, the processor
130 may determine the second sentence as the dialogue expression
sentence.
[00191] Referring to FIG. 2C, when the first sentence is the
general expression sentence, the processor 130 may
automatically create a cartoon image based on a word included
in the general expression sentence.
[00192] The cartoon image may include at least one character
having at least one theme.
[00193] In addition, when the second sentence is the dialogue
expression sentence, the processor 130 may understand the
subject of the second sentence, which is the dialogue expression
sentence.
[00194] The subject of the second sentence may be understood
as 'Sarah', who is the subject of the first sentence.
[00195] Accordingly, the processor 130 may insert the second sentence, which is the dialogue expression sentence, in the form of a speech bubble on the character of 'Sarah' corresponding to the understood subject among at least one character in the cartoon image.
[00196] FIGS. 3A to 3L are diagrams illustrating a UI that
automatically creates and provides a cartoon image based on an
input sentence according to the present disclosure.
[00197] Referring to FIG. 3A, the UI may include a first area
201 in which a plurality of icons having a plurality of
functions are located, a second area 202 in which the cartoon
image is displayed, and a third area 203 in which a page
preview of the cartoon image and a preview icon are located to
sequentially view the cartoon image.
[00198] The first area 201 may include an integrated search icon, a template icon, a character icon, a text icon, a speech bubble
icon, an element icon, an effect icon, a background icon, a
photo icon, a my photo icon, a drawing icon, and an item icon.
[00199] Referring to FIG. 3B, when the integrated search icon
is input by the user and the word 'person' is input by the
user, the processor 130 may display sample information for a
plurality of icons for the word 'person.'
[00200] Referring to FIG. 3C, when the template icon is input
by the user, the processor 130 may display information on at
least one or more pre-stored sample templates in the first
area 201 and receive a specific sample template from the user.
In this case, the specific sample template may be displayed on
the first area 201 and the second area 202.
[00201] Specifically, in the first area 201, at least one sample cartoon image included in the specific sample template may be displayed in a preview format, and in the second area 202, a sample cartoon image selected from among the specific sample templates may be displayed.
[00202] The pre-stored sample template may be in the form of a template in which content, font, background, and color are all editable.
[00203] In addition, when receiving a search word for the
sample template information through a detailed search from the
user in the first area 201, the processor 130 may extract at
least one piece of sample template information matching the
search word into the memory 120 and display the extracted
information on the first area 201.
[00204] Referring to FIG. 3D, when the character icon is input
by the user, the processor 130 may display at least one or more pre-stored character information on the first area 201, and when
receiving a specific character input from the user, the
specific character may be displayed on the second area 202.
[00205] When receiving a search word for the character through
a detailed search from the user in the first area 201, the processor 130 may extract at least one character matching the
search word into the memory 120 and display the extracted
character on the first area 201.
[00206] Referring to FIG. 3E, when the text icon is input by
the user, the processor 130 may display at least one or more
pre-stored text shape information on the first area 201. When
a specific text shape input by the user is received, the
specific text shape may be displayed on the second area 202.
[00207] When receiving a search word for the text shape
information through a detailed search from the user in the first
area 201, the processor 130 may extract at least one piece of
text shape information matching the search word into the memory
120 and display the extracted information on the first area 201.
[00208] Referring to FIG. 3F, when the speech bubble icon is
input, the processor 130 may display at least one piece of pre-stored speech bubble information on the first area 201. When
a specific speech bubble from the user is received, the
specific speech bubble may be displayed on the second area 202.
[00209] The processor 130 may receive and display a specific
character, an emoticon, a sentence, and an image in the
specific speech bubble from the user.
[00210] In addition, when receiving a search word for the
speech bubble information through a detailed search from the
user in the first area 201, the processor 130 may extract at
least one piece of speech bubble information matching the
search word into the memory 120 and display the extracted
information on the first area 201.
[00211] Referring to FIG. 3G, when the element icon is input,
the processor 130 may display at least one piece of pre-stored
element information on the first area 201. When a specific
element is received from the user, the specific element may be
displayed on the second area 202.
[00212] The processor 130 may change and display a size, color,
and location of the specific element displayed on the second
area 202 according to the user's input.
[00213] In addition, when receiving a search word for the element information through a detailed search from the user in the first area 201, the processor 130 may extract at least one piece of element information matching the search word into the memory 120 and display the extracted information on the first area 201.
[00214] Referring to FIG. 3H, when the effect icon is input, the processor 130 may display at least one piece of pre-stored effect information on the first area 201. When a specific effect is received from the user, the specific effect may be displayed on the second area 202.
[00215] The processor 130 may change and display a size, color,
and location of the specific effect displayed on the second
area 202 according to the user's input.
[00216] In addition, when receiving a search word for the
effect information through a detailed search from the user in
the first area 201, the processor 130 may extract at least one
piece of effect information matching the search word into the
memory 120 and display the extracted information on the first
area 201.
[00217] Referring to FIG. 3I, when the background icon is
input, the processor 130 may display at least one piece of
pre-stored background information on the first area 201. When
a specific background is received from the user, the specific
background may be displayed on the second area 202.
[00218] When receiving a search word for the background
information through a detailed search from the user in the
first area 201, the processor 130 may extract at least one
piece of background information matching the search word into
the memory 120 and display the extracted information on the
first area 201.
[00219] Referring to FIG. 3J, when the photo icon is input, the processor 130 may display at least one piece of pre-stored photo information on the first area 201. When a specific photo is received from the user, the specific photo may be displayed on the second area 202.
[00220] When receiving a search word for the photo information through a detailed search from the user in the first area 201, the processor 130 may extract at least one piece of photo information matching the search word into the memory 120 and display the extracted information on the first area 201.
[00221] Referring to FIG. 3K, when the my photo icon is input,
the processor 130 may display at least one piece of pre-stored
my photo information on the first area 201. When a specific my
photo is received from the user, the specific my photo may be
displayed on the second area 202.
[00222] Referring to FIG. 3L, when the drawing icon is input,
the processor 130 may display at least one piece of pre-stored
drawing information on the first area 201. When a specific
drawing tool is received from the user, details of the drawing
tool (for example, pen thickness, pen color, pen type, and
eraser size) may be displayed on a new screen.
[00223] When the processor 130 receives a detailed setting of the drawing tool from the user on the new screen, the processor 130 may display the input content input by the user based on
the drawing tool for which the setting has been completed on
the second area 202 in real time.
[00224] As described above, the processor 130 may correct the cartoon image created based on the sentence input by the user, in a desired direction according to the user's input, through the UI illustrated in FIGS. 3A to 3L.
[00225] In addition, the processor 130 may create and provide
a higher quality cartoon image by automatically collectively
correcting the cartoon image through the UI without the user's
input.
[00226] Such correction may be preset through the UI before
creating the cartoon image, or may be performed during the creation of the cartoon image.
[00227] FIG. 4 is a flowchart illustrating a process of
automatically creating a cartoon image based on an input
sentence according to the present disclosure. Hereinafter, the
operation of the processor 130 may be performed by the device
10.
[00228] When at least one sentence is input, the processor
130 may recognize the sentence (S401).
[00229] Specifically, the processor 130 may recognize the
sentence when at least one sentence is input through a UI of
a website or application that provides a service for
automatically creating a cartoon image based on an input
sentence.
[00230] When the at least one sentence is a plurality of
sentences, the processor 130 may identify each of the words
included in each of the plurality of sentences, and understand
a sentence component of each of the identified words.
[00231] In addition, the processor 130 may group the plurality
of sentences into at least one paragraph by determining the
correlation between the plurality of sentences
based on the understood sentence components.
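By way of a non-limiting illustration only, the following Python sketch reduces the correlation between sentences to a shared subject in order to group them into paragraphs. A deployed implementation would use a far richer correlation over all of the understood sentence components; the function names and the placeholder subject parser are assumptions made solely for this example.

    def group_into_paragraphs(sentences, subject_of):
        # Start a new paragraph whenever the subject differs from the
        # subject of the previous sentence.
        paragraphs, current = [], []
        previous_subject = None
        for sentence in sentences:
            subject = subject_of(sentence)
            if current and subject != previous_subject:
                paragraphs.append(current)
                current = []
            current.append(sentence)
            previous_subject = subject
        if current:
            paragraphs.append(current)
        return paragraphs

    sents = ["Sarah is watching TV.", "Sarah yawns.", "Tom opens the door."]
    naive_subject = lambda s: s.split()[0]  # placeholder sentence-component parser
    print(group_into_paragraphs(sents, naive_subject))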
[00232] Accordingly, the processor 130 may automatically group the paragraph based on the sentence components even when a plurality of sentences, such as article contents or book contents, are input through the UI, thereby easily converting contents that are difficult to comprehend as plain sentences into cartoon images to be provided to a user.
[00233] The processor 130 may identify each of at least one word
included in the recognized sentence (S402).
[00234] In other words, the at least one word may be classified into a person, a place, an element, an effect, and a time, and the processor 130 may create a cartoon image based on the classified word.
[00235] The processor 130 may identify a type of punctuation
mark located at at least one of a start point and an end point of
the recognized sentence (S403).
[00236] For example, the processor 130 may identify double
quotation marks or single quotation marks located at a start
point and an end point of the recognized sentence, and identify
at least one of a period, an exclamation point, a question
mark, and an ellipsis, located at the end point of the
recognized sentence.
[00237] In addition, when the at least one sentence is a
plurality of sentences, the processor 130 may identify a type
of punctuation mark located at at least one of a start point and
an end point of at least one sentence included in the grouped
paragraph.
[00238] The processor 130 may determine the sentence as any one
of a general expression sentence, a dialogue expression
sentence, and an emotional expression sentence based on the
type of the identified punctuation mark (S404).
[00239] Specifically, when the identified punctuation mark is
located only at an end point, the processor 130 may determine
the recognized sentence as the general expression sentence.
[00240] In addition, when the type of the identified
punctuation mark is double quotation marks, the processor 130
may determine the recognized sentence as the dialogue
expression sentence. When the type of the identified
punctuation mark is single quotation marks, the recognized sentence may be determined to be an emotional expression sentence.
[00241] In addition, when the identified punctuation mark is
located at an end point and the type of the identified
punctuation mark is any one of an exclamation mark and a
question mark, the processor 130 may determine the recognized
sentence as an emotional expression sentence.
[00242] In addition, when a preset emoticon or abbreviation
is located in the recognized sentence, the processor 130 may
determine the recognized sentence as an emotional expression
sentence.
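By way of a non-limiting illustration only, the following Python sketch applies the punctuation rules of operation S404 stated above to classify a sentence as a general, dialogue, or emotional expression sentence. The list of preset emoticons and abbreviations is an assumption made solely for this example.

    PRESET_EMOTICONS = {":)", ":(", "T_T", "lol"}  # illustrative preset list

    def classify_sentence(sentence: str) -> str:
        s = sentence.strip()
        if s.startswith('"') and s.endswith('"'):
            return "dialogue"    # double quotation marks
        if s.startswith("'") and s.endswith("'"):
            return "emotional"   # single quotation marks
        if s.endswith(("!", "?")):
            return "emotional"   # exclamation mark or question mark at the end
        if any(mark in s for mark in PRESET_EMOTICONS):
            return "emotional"   # preset emoticon or abbreviation
        return "general"         # punctuation mark located only at the end point

    print(classify_sentence("Sarah is sitting in the living room watching TV."))
    print(classify_sentence('"I don\'t want to do anything"'))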
[00243] In addition, when the at least one sentence is a plurality of sentences, the processor 130 may extract, based on an artificial intelligence model, a specific sentence in which the subject is changed among the sentence components in the at least one sentence, or a specific sentence in which at least one of a predicate, an adjective, and an adverb among the sentence components includes an opposite word, for at least one sentence included last in the grouped paragraph.
[00244] The processor 130 may create the extracted specific
sentence as the last cartoon image.
[00245] Accordingly, the processor 130 may create the last cartoon image among at least one or more cartoon images based on the specific sentence, thereby providing a twist and adding curiosity about the subsequent cartoon.
[00246] When the sentence is the general expression sentence,
the processor 130 may automatically create a cartoon image based
on a word included in the general expression sentence (S405).
[00247] The cartoon image may include at least one character
having at least one theme.
[00248] Specifically, when the word represents any one of
a subject, an object, and a complement, the processor 130 may
create the character having a theme corresponding to the word.
[00249] In addition, when the word represents a place, the
processor 130 may create a background of the cartoon image
based on the word.
[00250] In addition, when the word represents time, the
processor 130 may determine the brightness of the cartoon image
based on the word.
[00251] In addition, the processor 130 may determine a verb
related to the created character in a general expression
sentence, and create the character to represent a motion
corresponding to the determined verb.
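By way of a non-limiting illustration only, the following Python sketch shows how classified words might drive the visual properties described above (character theme, background, brightness, and motion corresponding to a verb). The Scene structure and the lookup tables are assumptions made solely for this example.

    from dataclasses import dataclass, field

    @dataclass
    class Scene:
        characters: list = field(default_factory=list)
        background: str = "default"
        brightness: float = 1.0

    BRIGHTNESS_BY_TIME = {"morning": 1.0, "evening": 0.6, "night": 0.3}
    MOTION_BY_VERB = {"sit": "sitting", "run": "running", "watch": "watching"}

    def apply_word(scene: Scene, word: str, role: str) -> None:
        if role in {"subject", "object", "complement"}:
            scene.characters.append({"name": word, "theme": "default"})
        elif role == "place":
            scene.background = word
        elif role == "time":
            scene.brightness = BRIGHTNESS_BY_TIME.get(word, 1.0)
        elif role == "verb" and scene.characters:
            scene.characters[-1]["motion"] = MOTION_BY_VERB.get(word, "standing")

    scene = Scene()
    for w, r in [("Sarah", "subject"), ("sit", "verb"),
                 ("living room", "place"), ("evening", "time")]:
        apply_word(scene, w, r)
    print(scene)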
[00252] In addition, the processor 130 may determine the size
and location of the created character and the size and location
of the speech bubble based on the object arrangement algorithm.
[00253] When the user's manipulation information for the
cartoon image is input, the object arrangement algorithm may
build a learning data set by matching the manipulation
information with the cartoon image, and be machine learned
based on the built learning data set.
[00254] Accordingly, the processor 130 may apply the object
arrangement algorithm to each user.
[00255] In addition, the processor 130 may build a learning data set of the object arrangement algorithm according to the user's needs by matching the manipulation information with the cartoon image, so that, due to the accumulation of the data set, a cartoon image in a direction desired by the user may be directly created in the future.
[00256] The manipulation information may include at least
one of first information, second information, third
information, and fourth information.
[00257] The first information is information about the size and location change of the character, left-right inversion, and up-and-down inversion, the second information is information about the change in the expression of the character and the change in the motion of the character, the third information is information about the addition or deletion of the character or the object and the change of the background, and the fourth information is information about the change of the perspective of the character or the object.
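By way of a non-limiting illustration only, the following Python sketch shows how the object arrangement algorithm might accumulate a per-user learning data set by matching the manipulation information (first to fourth information) with the cartoon image. The record layout and the train stub are assumptions made solely for this example.

    from collections import defaultdict

    class ObjectArrangementAlgorithm:
        def __init__(self):
            self.dataset = defaultdict(list)  # user_id -> list of (image, info)

        def record_manipulation(self, user_id, cartoon_image, manipulation):
            # manipulation may carry any of the first to fourth information,
            # e.g. {'resize': (320, 240)}, {'expression': 'smile'},
            # {'delete': 'object_3'}, or {'perspective': 'close-up'}.
            self.dataset[user_id].append((cartoon_image, manipulation))

        def train(self, user_id):
            # Placeholder for machine learning on the accumulated pairs; a
            # real system would fit a model predicting preferred sizes and
            # locations of characters and speech bubbles.
            return len(self.dataset[user_id])

    algo = ObjectArrangementAlgorithm()
    algo.record_manipulation("user_1", "scene_001.png", {"resize": (320, 240)})
    print(algo.train("user_1"), "example(s) collected for user_1")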
[00258] In addition, the processor 130 may understand a
subject for an emotional expression sentence and display the
face of the character corresponding to the understood subject by
zooming in at a preset magnification or by changing the shape
of the face.
[00259] For example, the processor 130 may change the shape of the face to show a slight or wide smile depending on the degree of happy emotion, or change the degree of a frowning face according to the degree of bad emotion.
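By way of a non-limiting illustration only, the following Python sketch maps an emotion and its degree to the face treatment described above (zoom-in at a preset magnification together with a change of face shape). The thresholds, shape names, and preset magnification are assumptions made solely for this example.

    PRESET_MAGNIFICATION = 1.5  # assumed preset zoom factor

    def face_treatment(emotion: str, degree: float) -> dict:
        # degree is assumed to be normalized to the range [0.0, 1.0]
        if emotion == "happy":
            shape = "wide_smile" if degree > 0.5 else "slight_smile"
        elif emotion == "bad":
            shape = "deep_frown" if degree > 0.5 else "slight_frown"
        else:
            shape = "neutral"
        return {"zoom": PRESET_MAGNIFICATION, "face_shape": shape}

    print(face_treatment("happy", 0.8))  # {'zoom': 1.5, 'face_shape': 'wide_smile'}
    print(face_treatment("bad", 0.3))    # {'zoom': 1.5, 'face_shape': 'slight_frown'}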
[00260] When the sentence is the dialogue expression sentence
or the emotional expression sentence, the processor
130 may understand the subject of the dialogue expression
sentence or the emotional expression sentence.
[00261] The processor 130 may insert the dialogue expression
sentence or the emotional expression sentence in the form of
a speech bubble on the character corresponding to the understood
subject among at least one character in the cartoon image
(S407).
[00262] FIG. 4 illustrates that operations S401 to S407 are
sequentially executed, but this is merely illustrative of the
technical idea of this embodiment. It is possible for those of
ordinary skill in the technical field to which this embodiment
belongs to apply various modifications and variations to
executing by changing the order described in FIG. 4 or
executing one or more of operations S401 to S407 in parallel
within a range that does not deviate from the essential
characteristics of this embodiment. FIG. 4 does not limit a
time series sequence.
[00263] Hereinafter, a process in which the device 10 according
to the present disclosure creates a dynamically variable
character with respect to a character in the created cartoon
image will be described.
[00264] The device 10 may create a joint-bridge in a blank area
generated at a joint or a connection portion of the character.
[00265] Thereafter, the device 10 may display the created
character by receiving the motion design of the character from
the user through a website or an application.
[00266] The website or the application may provide a
dynamically variable character creation service included in a
service for automatically creating a cartoon image based on an
input sentence.
[00267] Accordingly, the device 10 may create a joint-bridge for each joint portion of the character to connect the joint portion of the character, thereby allowing the joint portion of the character to move naturally.
[00268] In addition, the device 10 may create a joint-bridge by rotating a portion of a character at a preset angle based on a rotation axis, thereby capable of expressing a character having a natural joint-bridge.
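By way of a non-limiting illustration only, the following Python sketch shows the geometric step of rotating a character portion about a rotation axis by a preset angle, which underlies the direction-specific joint-bridge data described later. The polygon representation of the body portion and the example angle of 10 degrees are assumptions made solely for this example.

    import math

    def rotate_points(points, axis, angle_deg):
        # Rotate 2D points about `axis` (x, y) by `angle_deg` degrees.
        a = math.radians(angle_deg)
        ax, ay = axis
        rotated = []
        for x, y in points:
            dx, dy = x - ax, y - ay
            rotated.append((ax + dx * math.cos(a) - dy * math.sin(a),
                            ay + dx * math.sin(a) + dy * math.cos(a)))
        return rotated

    upper_body = [(0, 0), (0, 40), (20, 40), (20, 0)]    # crude torso polygon
    axis = (10, 0)                                       # waist-level rotation axis
    tilted_right = rotate_points(upper_body, axis, -10)  # first direction
    tilted_left = rotate_points(upper_body, axis, 10)    # second direction
    print(tilted_right[1], tilted_left[1])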
[00269] In addition, the device 10 may support access to a
website to use the dynamically variable character creation
service, or may support a self-produced application to support
the dynamically variable character creation service.
[00270] Hereinafter, a detailed operation through the
processor 130 of the device 10 will be described.
[00271] The processor 130 may separate the joint portion
corresponding to the character selected from the user terminal
(not shown) into a first element and a second element, and
create and display central joint-bridge data connecting the
first element and the second element in a straight line. The
joint-bridge may be disposed to overlap a blank area between
the first element and the second element of the character.
[00272] The processor 130 may create and display first
direction joint-bridge data connecting the first element and
the second element of the character in a first direction or
second-direction joint-bridge data connecting the first
element and the second element of the character in a second
direction.
[00273] In addition, the processor 130 may create and display
the first-direction joint-bridge data and the second-direction
joint-bridge data by rotating the character based on a rotation
axis.
[00274] The processor 130 may receive a selection of a first element motion design or a second element motion design corresponding to each of the central joint-bridge data, the first direction joint-bridge data, or the second-direction joint-bridge data from a user terminal (not shown), and match the selected motion design to the character.
[00275] The memory 120 may store a plurality of characters,
a plurality of first element motion designs, and a plurality
of second element motion designs.
[00276] The operation of the device 10 according to the present disclosure described above shares the same contents as the process of creating a dynamically variable character to be described with reference to FIGS. 5 to 18, differing only in the category of the present disclosure. The details will be described later with reference to FIGS. 5 to 18.
[00277] The user terminal (not shown) may receive the motion
design of the character from a user, display the character
created by the device 10 and provide the character to the user.
[00278] The user may use the corresponding service by
installing and running the application in the user terminal
(not shown). The corresponding service may be a dynamically
variable character creation service included in a service for
automatically creating a cartoon image based on an input
sentence.
[00279] Alternatively, the user may access the website through
the user terminal (not shown) and use the corresponding service.
[00280] In addition, the user terminal (not shown) used by
the user in an embodiment of the present disclosure is
typically applicable to a computer, and anything that includes
a display unit, an input unit, and a communication function,
such as smartphones, tablet PCs, and laptops, may be applicable.
[00281] For example, the user terminal (not shown) may be any one of digital devices including a display unit, an input unit, and a communication function, such as a cellular phone, a smart phone, a PDA, a PMP, a tablet PC, a personal computer (for example, a desktop computer or a notebook computer), a workstation, or a web pad.
[00282] FIG. 5 is a flowchart illustrating a process of creating a dynamically variable character according to the present disclosure. Hereinafter, the operation of the processor 130 may be performed by the device 10.
[00283] The processor 130 may create and display central joint-bridge data connecting joint portions separated into a first
element and a second element of a character selected from the
user terminal (not shown) (S510).
[00284] Here, the first element may be an upper body of the
character's body, upper arms of the arms, and a thigh of the
legs, and the second element may be a lower body of the
character's body, lower arms of the arms, and a calf of the
legs.
[00285] In addition, the first element and the second element
may include elements such as one end and the other end of
clothing covering the skin and joint portion, one end and the
other end of a button, and one end and the other end of a
zipper, as well as the skin and joints corresponding to the
first element and the second element.
[00286] Accordingly, the processor 130 may create the central
joint-bridge data for the joint portions or the connection
portions separated into the first element and the second
element according to the movement of the character, thereby
enabling natural movement of the character.
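By way of a non-limiting illustration only, the following Python sketch constructs central joint-bridge data as a straight-line quadrilateral connecting the facing edges of the first element and the second element so that the blank area between them is covered. Representing the elements by their facing edges, and the example coordinates, are assumptions made solely for this example.

    def central_joint_bridge(upper_bottom_edge, lower_top_edge):
        # Connect the bottom edge of the first element to the top edge of the
        # second element in a straight line, producing a quadrilateral that
        # overlaps the blank area between them.
        (u_left, u_right) = upper_bottom_edge
        (l_left, l_right) = lower_top_edge
        return [u_left, u_right, l_right, l_left]

    # The upper body ends at y=40; after a tilt, the lower body starts at y=36.
    bridge = central_joint_bridge(((0, 40), (20, 40)), ((2, 36), (22, 36)))
    print(bridge)  # a polygon covering the blank between the two elements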
[00287] Details of creating the central joint-bridge data
in operation S510 will be described with reference to FIGS. 6
to 9.
[00288] FIG. 6 is a flowchart illustrating a process of creating central joint-bridge data according to the present disclosure.
[00289] FIG. 7 is an exemplary diagram illustrating that a user
selects a desired character from among a plurality of
characters of a character creation service page according to
the present disclosure.
[00290] FIG. 8 is an exemplary diagram illustrating that a
character selected by a user according to the present disclosure
is shown on a character creation screen.
[00291] FIG. 9 is an exemplary view illustrating upper body-centered joint-bridge data and lower body-centered joint-bridge data of a character selected by a user according to the present disclosure.
[00292] Referring to FIG. 6, in creating the central joint-bridge data in operation S510, the processor 130 may receive
a selection of one character from a user terminal (not shown)
among a plurality of pre-stored characters (S512).
[00293] For example, referring to FIG. 7, when a character
creation service application is driven in a user terminal (not
shown), the processor 130 may activate a character creation
service page as illustrated in FIG. 7 on the UI.
[00294] In the character creation service page, a plurality
of characters may be provided in a character selection field
71 in the first area 201.
[00295] The user may select a first character 700 that is
a desired character from among a plurality of characters
through a user terminal (not shown).
[00296] Referring to FIG. 8, when the user selects the first
character 700 through a user terminal (not shown), the
processor 130 may display the first character 700 on an artboard 42, which is a character creation screen of the second area 202.
[00297] The processor 130 may create and display upper body-centered joint-bridge data corresponding to the upper body and lower body-centered joint-bridge data corresponding to the lower body for a body that is separated into the upper body, which is the first element, and the lower body, which is the second element, of the character (S514).
[00298] For example, referring to FIG. 9, the second character
900 selected by the user through a user terminal (not shown)
may be displayed separately from an upper body that is a first
element and a lower body that is a second element.
[00299] Accordingly, when the second character 900 tilts the
upper body to the left as illustrated in FIG. 9, a blank
90 may be generated as the upper body is tilted.
[00300] When a blank is generated for a portion in which the
joint of the second character 900 moves, the second character
900 may be displayed unnaturally. Therefore, in order to display the second character 900 naturally, a joint-bridge connecting
the blank 90 may be required.
[00301] Referring to FIG. 9, the processor 130 may create and
display the lower body-centered joint-bridge data 910 or the
upper body-centered joint-bridge data 920 to connect the upper
body, which is the first element, and the lower body, which is
the second element, of the second character 900 selected
through a user terminal (not shown).
[00302] The upper body-centered joint-bridge data 920 may express the waist portion in a top 901 of the second character 900 as a joint-bridge for the waist portion among the joint portions of the second character 900, so that the second character 900 may fill the blank 90 that may occur when the body is tilted in the left direction.
[00303] In addition, the lower body-centered joint-bridge data 910 may express bottoms 902 that rise up to the waist portion in the top 901 of the second character 900 as a joint-bridge for the waist portion among the joint portions of the second character 900, so that the second character 900 may fill the blank 90 that may occur when the body is tilted in the left direction.
[00304] The processor 130 may receive a selection of one of the upper body-centered joint-bridge data and the lower body-centered joint-bridge data from the user terminal (not shown)
to create and display the central joint-bridge data (S516).
[00305] The processor 130 may create and display first
direction joint-bridge data connecting the first element and
the second element of the character in a first direction or
second-direction joint-bridge data connecting the first
element and the second element of the character in a second
direction (S520).
[00306] Details of creating and displaying the first-direction joint-bridge data or the second-direction joint-bridge data in operation S520 will be described with reference to FIGS. 10
to 15.
[00307] FIG. 10 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data according to the present disclosure.
[00308] FIG. 11 is a flowchart illustrating a process of
creating first-direction joint-bridge data or second-direction
joint-bridge data according to the present disclosure.
[00309] FIG. 12 is an exemplary view illustrating rotating a character at a preset angle, based on a rotation axis according to the present disclosure.
[00310] FIG. 13 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data created when a character is tilted from the front to the
right and left according to the present disclosure.
[00311] FIG. 14 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data that are created when a character is tilted from the side
to the right and left according to the present disclosure.
[00312] FIG. 15 is an exemplary diagram illustrating first
direction joint-bridge data and second-direction joint-bridge
data that are created when a character is tilted from the rear
to the right and left according to the present disclosure.
[00313] For example, referring to FIG. 10, when the first
character 700 tilts its upper body 10 degrees to the right,
which is the first direction, the first-direction joint-bridge
data 710 may be created and displayed.
[00314] In addition, referring to FIG. 10, when the first
character 700 tilts its upper body 10 degrees to the left,
which is the second direction, the second-direction joint
bridge data 720 may be created and displayed.
[00315] The processor 130 may display the rotation axis on the
character creation screen and rotate the character at a preset
angle, based on the rotation axis (S522).
[00316] For example, referring to FIG. 12, the artboard 72,
which is a character creation screen, is divided into three
equal parts and displayed, and the first screen 73 may display
the side of the lower body including the lower body-centered
joint-bridge data 910.
[00317] In addition, the second screen 74 may display the front
of the lower body including the lower body-centered joint-bridge
data 910, and the third screen 75 may display the rear of the lower
body including the lower body-centered joint-bridge data 910.
[00318] The processor 130 displays the rotation axis 1200 on
each of the first screen 73, the second screen 74, and the
third screen 75 where the lower body of the character is
located, and the character 900 may be rotated by -10 and 10
degrees.
[00319] Since the processor 130 rotates the lower body of the
character at a preset angle, based on the rotation axis 1200,
the lower body-centered joint-bridge data 910 for the waist
portion of the character 900 may be naturally adjusted and
located.
[00320] The processor 130 may create the first-direction joint-bridge data and the second-direction joint-bridge data
corresponding to each of the front, side, and rear surfaces of
the character rotated according to the preset angle with
respect to the rotation axis 1200 (S524).
[00321] The processor 130 may create the first-direction joint-bridge data and the second-direction joint-bridge data to
correspond to the upper body or the lower body based on the
central joint-bridge data created by being selected from the
user terminal (not shown) in operation S516.
[00322] As an example, referring to FIG. 13, when the upper body is tilted by 10 degrees to the right in a first direction from
the front of the character 900 with respect to the rotation
axis 1200, the processor 130 may create the first-direction
joint-bridge data 1301.
[00323] In addition, when the upper body is tilted by 10
degrees to the left in a second direction from the front of
the character 900 with respect to the rotation axis 1200, the
processor 130 may create and display the second-direction
joint-bridge data 1302.
[00324] Accordingly, when the character 900 is tilted from the front to the right and left, the first-direction joint-bridge
data 1301 and the second-direction joint-bridge data 1302 may
be matched to appear naturally.
[00325] The first-direction joint-bridge data 1301 and the
second-direction joint-bridge data 1302 may be a joint-bridge
for a top worn by the character 900 as a joint-bridge for an
upper body of the body of the character 900.
[00326] In addition, referring to FIG. 14, when the upper body is tilted by 10 degrees to the right in a first direction from the side of the character 900 with respect to the rotation axis 1200, the processor 130 may create the first-direction joint-bridge data 1401.
[00327] In addition, when the upper body is tilted by 10 degrees to the left in a second direction from the side of the character 900 with respect to the rotation axis 1200, the processor 130 may create and display the second-direction joint-bridge data 1402.
[00328] Accordingly, when the character 900 is tilted from the side to the right and left, the first-direction joint-bridge
data 1401 and the second-direction joint-bridge data 1402 may
be matched to appear naturally.
[00329] In addition, referring to FIG. 15, when the upper
body is tilted by 10 degrees to the right in a first direction
from the rear of the character 900 with respect to the rotation axis 1200, the processor 130 may create the first-direction joint-bridge data 1501.
[00330] In addition, when the upper body is tilted by 10
degrees to the left in a second direction from the rear of the
character 900 with respect to the rotation axis 1200, the
processor 130 may create and display the second-direction
joint-bridge data 1502.
[00331] Accordingly, when the character 900 is tilted from the rear to the right and left, the first-direction joint-bridge
data 1501 and the second-direction joint-bridge data 1502 may
be matched to appear naturally.
[00332] The processor 130 may receive a selection of a first
element motion design or a second element motion design
corresponding to each of the central joint-bridge data, the
first-direction joint-bridge data, or the second-direction
joint-bridge data from a user terminal (not shown) and match
the same to the character (S530).
[00333] The first element motion design may be one of designs
corresponding to a plurality of motions of an upper body of
the character's body, upper arms of the arms, and a thigh of
the legs.
[00334] In addition, the second element motion design may be
one of designs corresponding to a plurality of motions of a lower body of the character's body, lower arms of the arms, and
a calf of the legs.
[00335] Details of the selection of the first element motion
design or the second element motion design in operation S530 and
matching with the character will be described with reference
to FIGS. 16 to 18.
[00336] FIG. 16 is a flowchart illustrating a process of matching a character after a first element motion design or a second element motion design according to the present disclosure is chosen.
[00337] FIG. 17 is an exemplary diagram illustrating a
plurality of upper body motion designs according to the present
disclosure.
[00338] FIG. 18 is an exemplary diagram illustrating a
plurality of lower body motion designs according to the present
disclosure.
[00339] The processor 130 may receive a selection of an upper
body motion design from among the first element motion designs
corresponding to each of the upper body-centered joint-bridge
data, the first direction joint-bridge data, or the second-direction joint-bridge data from a user terminal (not shown) to match the character with the upper body motion design (S532).
[00340] As an example, referring to FIG. 17, a user may select a first upper body motion design 1701 corresponding to each of the upper body-centered joint-bridge data, the first direction joint-bridge data, or the second-direction joint-bridge data among a plurality of upper body motion designs 17 from the user terminal (not shown).
[00341] The processor 130 may receive a selection of a lower body motion design from among the second element motion designs corresponding to each of the lower body-centered joint-bridge data, the first direction joint-bridge data, or the second-direction joint-bridge data from a user terminal (not shown) to match the character with the lower body motion design (S534).
[00342] As an example, referring to FIG. 18, a user may select a first lower body motion design 1801 corresponding to each of the lower body-centered joint-bridge data, the first direction joint-bridge data, or the second-direction joint-bridge data among a plurality of lower body motion designs 18 from the user terminal (not shown).
[00343] FIG. 19 is an exemplary diagram illustrating a
plurality of upper and lower body motion designs according to
the present disclosure. FIG. 20 is an exemplary diagram
illustrating a plurality of left arm motion designs according
to the present disclosure. FIG. 21 is an exemplary diagram
illustrating a plurality of leg motion designs according to
the present disclosure.
[00344] Referring to FIG. 19, when a user inputs the 'front'
button through the user terminal (not shown) among the 'front,'
'side,' 'rear,' and 'integrated' buttons provided in the upper
center of the character creation service page, the processor
130 may display each design of the left arm, right arm, and
leg from the front of the character.
[00345] In the character creation service page, a character
created according to a user's input may be displayed in the
center.
[00346] For example, when a user selects the first left arm
design 1901 from among the plurality of left arm designs
through the user terminal (not shown), the processor 130 may
change the left arm from the overall design of the character
displayed in the center to the first left arm design 1901
selected by the user and display the same.
[00347] In addition, when a user selects the first right arm
design 1902 from among the plurality of right arm designs
through the user terminal (not shown), the processor 130 may
change the right arm from the overall design of the character
displayed in the center to the first right arm design 1902 selected by the user and display the same.
[00348] In addition, when a user selects the first leg design
1903 from among the plurality of leg designs through the user
terminal (not shown), the processor 130 may change the leg
from the overall design of the character displayed in the center
to the first leg design 1903 selected by the user and display
the same.
[00349] Referring to FIG. 20, when a user inputs the 'front'
button through the user terminal (not shown) among the 'front,'
'side,' 'rear,' and 'integrated' buttons provided in the upper
left of the character creation service page, the processor
130 may display the entire design of the character from the
front.
[00350] In addition, when a user inputs the 'left arm' button
through the user terminal (not shown) among the 'left arm,'
'right arm,' and 'leg' buttons provided in the upper right of
the character creation service page, the processor 130 may
display a plurality of left arm designs corresponding to the
left arm of the character from the front.
[00351] When a user selects the second left arm design 2001 from
among the plurality of left arm designs through the user
terminal (not shown), the processor 130 may change the left
arm from the overall design of the character displayed on the
left screen to the second left arm design 2001 selected by the
user and display the same.
[00352] Referring to FIG. 21, when a user inputs the 'front'
button through the user terminal (not shown) among the 'front',
'side', 'rear', and 'integrated' buttons provided in the upper
left of the character creation service page, the processor
130 may display the entire design of the character from the front.
[00353] In addition, when a user inputs the 'leg' button
through the user terminal (not shown) among the 'left arm,'
'right arm,' and 'leg' buttons provided in the upper right of
the character creation service page, the processor 130 may
display a plurality of leg designs corresponding to the leg of
the character from the front.
[00354] When a user selects the second leg design 2101 from
among the plurality of leg designs through the user terminal
(not shown), the processor 130 may change the leg from the
overall design of the character displayed on the left screen
to the second leg design 2101 selected by the user and display
the same.
[00355] Hereinabove, FIGS. 5, 6, 11, and 16 illustrate that each of a plurality of operations is sequentially executed, but
this is merely illustrative of the technical idea of this
embodiment. It is possible for those of ordinary skill in the
technical field to which this embodiment belongs to apply
various modifications and variations to executing by changing
the order described in FIGS. 5, 6, 11, and 16 or executing one
or more of operations among a plurality of operations in
parallel within a range that does not deviate from the
essential characteristics of this embodiment. FIGS. 5, 6, 11,
and 16 do not limit a time series sequence.
[00356] The above-mentioned method according to the present
disclosure may be implemented with a program (or an application)
to be executed in combination with a computer, which is
hardware, and may be stored in a medium. Here, the computer
may be the device 10 described above.
[00357] For the computer to read the program and execute the methods implemented with the program, the above-mentioned program may include a code encoded into a computer language such as C, C++, Java, or a machine language readable through a device interface of the computer by a processor (CPU) of the computer. Such a code may include a functional code associated with a function and the like defining functions necessary for executing the methods and may include a control code associated with an execution procedure necessary for the processor of the computer to execute the functions according to a predetermined procedure.
[00358] Further, such a code may further include a code
associated with memory reference about whether additional
information or media necessary for the processor of the
computer to execute the functions is referred at any location
(address number) of an internal or external memory of the
computer.
[00359] Further, if it is necessary for the processor of the
computer to communicate with any computer or server located in a
remote place to execute the functions, the code may further
include a communication related code about how communication is
performed with any computer or server located in a remote place
using a communication module of the computer and whether to
transmit and receive any information or media upon
communication.
[00360] The operations of a method or algorithm described
in connection with the embodiments of the present disclosure
may be embodied directly in hardware, in a software module
executed by hardware, or in a combination thereof. The software
module may reside on a Random Access Memory (RAM), a Read Only
Memory (ROM), an Erasable Programmable ROM (EPROM), an
Electrically Erasable Programmable ROM (EEPROM), a Flash
memory, a hard disk, a removable disk, a CD-ROM, or a computer
readable recording medium in any form well known in the
technical field to which the present disclosure pertains.
[00361] Although the embodiments of the present disclosure
have been described with reference to the attached drawings,
those skilled in the technical field to which the present
disclosure pertains will understand that the present
disclosure may be practiced in other detailed forms without
departing from the technical spirit or essential features of
the present disclosure. Therefore, it should be understood
that the above-described embodiments are exemplary in all
aspects rather than being restrictive.
[00362] Description of Reference Numerals
10: DEVICE
110: COMMUNICATION UNIT
120: MEMORY
130: PROCESSOR

Claims (14)

1. A method for automatically creating a cartoon image based
on at least one input sentence, performed by a device including a
communication unit, a memory, and a processor, wherein the
communication unit is adapted to obtain the at least one sentence
input through a website or application provided to an external
device, the method comprising:
recognizing a sentence when the at least one sentence is
input;
identifying each of at least one word included in the
recognized sentence;
identifying a type of punctuation mark located at at least
one of a start point and an end point of the recognized sentence;
determining the sentence as any one of a general expression
sentence, a dialogue expression sentence, and an emotional
expression sentence based on the type of the identified punctuation
mark;
automatically creating a cartoon image based on a word
included in the general expression sentence when the sentence is the
general expression sentence, wherein the memory includes a
plurality of processes for automatically creating the cartoon image
based on the at least one input sentence, and wherein the cartoon
image includes at least one character having at least one theme;
understanding a subject of the dialogue expression sentence
or the emotional expression sentence when the sentence is the dialogue expression sentence or the emotional expression sentence; and
inserting the dialogue expression sentence or the emotional expression sentence in a form of a speech bubble on a character corresponding to the understood subject among at least one character in the cartoon image through the processor; wherein in the recognition of the sentence by the processor, when the at least one sentence is a plurality of sentences, each of the words included in each of the plurality of sentences is identified, a sentence component of each of the identified words is understood, and a correlation between the plurality of sentences based on the understood sentence component is understood, so that the plurality of sentences are grouped into at least one paragraph, wherein in the identification of the type of punctuation mark, a type of punctuation mark located at at least one of a start point and an end point of at least one sentence included in the grouped paragraph is identified by the processor, wherein the processor is configured to: when the identified punctuation mark is located only at an end point, determine the recognized sentence to be the general expression sentence; when the type of the identified punctuation mark is double quotation marks, determine the recognized sentence to be the dialogue expression sentence, and insert the recognized dialogue expression sentence into the speech bubble, wherein the recognized dialogue expression sentence is a portion of the at least one sentence and located between one quotation mark of the double quotation marks and the other quotation mark of the double quotation marks; and when the type of the identified punctuation mark is single quotation marks, determine the recognized sentence to be the emotional expression sentence, create a face of a character corresponding to the understood subject with respect to the emotional expression sentence, and display the face by zooming in at a preset magnification or by changing a shape of the face.
2. The method of claim 1, wherein in the determination of the
sentence,
when the identified punctuation mark is located at an end
point and the type of the identified punctuation mark is any one
of an exclamation mark and a question mark, the recognized sentence
is determined to be the emotional expression sentence, and
when a preset emoticon or abbreviation is located in the
recognized sentence, the recognized sentence is determined to be the
emotional expression sentence.
3. The method of claim 1 or claim 2, wherein in the automatic
creation,
when the word represents any one of a subject, an object, and a
complement, the character having a theme corresponding to the word
is created;
when the word represents a place, a background of the cartoon
image is created based on the word; and
when the word represents time, brightness of the cartoon image
is determined based on the word.
4. The method of any one of claims 1 to 3, wherein the automatic
creation determines a verb related to the created character in the general expression sentence, and creates the character to represent a motion corresponding to the determined verb.
5. The method of any one of claims 1 to 4, wherein:
the automatic creation determines a size and a location of the
created character and a size and a location of the speech bubble
based on an object arrangement algorithm; and
when user's manipulation information for the cartoon image is
input, the object arrangement algorithm builds a learning data set
by matching the manipulation information with the cartoon image,
and is machine learned based on the built learning data set.
6. A device for automatically creating a cartoon image
based on at least one input sentence, the device comprising:
a communication unit for obtaining the at least one sentence;
and
a processor, wherein the processor is configured to:
recognize the at least one sentence;
identify each of at least one word included in the
recognized sentence;
identify a type of punctuation mark located at at least one
of a start point and an end point of the recognized sentence;
determine the sentence as any one of a general expression
sentence, a dialogue expression sentence, and an emotional
expression sentence based on the type of the identified
punctuation mark;
automatically create a cartoon image based on a word
included in the general expression sentence when the sentence is the general expression sentence, wherein a memory includes a plurality of processes for automatically creating the cartoon image based on the at least one input sentence, and wherein the cartoon image is created to include at least one character having at least one theme; understand a subject of the dialogue expression sentence or the emotional expression sentence when the sentence is the dialogue expression sentence or the emotional expression sentence through the processor; and insert the dialogue expression sentence or the emotional expression sentence in a form of a speech bubble on a character corresponding to the understood subject among at least one character in the cartoon image; and wherein when recognizing the sentence, when the at least one sentence is a plurality of sentences, the processor is configured to: identify each of the words included in each of the plurality of sentences; understand a sentence component of each of the identified words; and understand a correlation between the plurality of sentences based on the understood sentence component, so that the plurality of sentences are grouped into at least one paragraph, wherein in the identification of the type of punctuation mark, the processor is configured to identify a type of punctuation mark located at at least one of a start point and an end point of at least one sentence included in the grouped paragraph, and wherein the processor is configured to: when the identified punctuation mark is located only at an end point, determine the recognized sentence to be the general expression sentence; when the type of the identified punctuation mark is double quotation marks, determine the recognized sentence to be the dialogue expression sentence, and insert the recognized dialogue expression sentence into the speech bubble, wherein the recognized dialogue expression sentence is a portion of the at least one sentence and located between one quotation mark of the double quotation marks and the other quotation mark of the double quotation marks; and when the type of the identified punctuation mark is single quotation marks, determine the recognized sentence to be the emotional expression sentence, create a face of a character corresponding to the understood subject with respect to the emotional expression sentence, and display the face by zooming in at a preset magnification or by changing a shape of the face.
7. The device of claim 6, wherein when determining the
sentence, the processor is configured to:
determine the recognized sentence as the emotional expression
sentence when the identified punctuation mark is located at an end
point and the type of the identified punctuation mark is any one
of an exclamation mark and a question mark; or
determine the recognized sentence as the emotional expression sentence when a preset emoticon or abbreviation is located in the recognized sentence.
8. The device of claim 6 or claim 7, wherein when
automatically creating the cartoon image, the processor is
configured to:
create the character having a theme corresponding to the word when the word represents any one of a subject, an object, and a
complement;
create a background of the cartoon image based on the word
when the word represents a place; and
determine brightness of the cartoon image based on the word
when the word represents time.
9. The device of any one of claims 6 to 8, wherein when
automatically creating the cartoon image, the processor is
configured to:
determine a verb related to the created character in the
general expression sentence; and
create the character to represent a motion corresponding to the
determined verb.
10. The device of any one of claims 6 to 9, wherein when
automatically creating the cartoon image, the processor is
configured to:
determine a size and a location of the created character and
a size and a location of the speech bubble based on an object
arrangement algorithm; and wherein when user's manipulation information for the cartoon image is input, the object arrangement algorithm builds a learning data set by matching the manipulation information with the cartoon image, and is machine learned based on the built learning data set.
11. The device of any one of claims 6 to 10, wherein when
automatically creating the cartoon image, the processor is
configured to:
create and display central joint-bridge data connecting joint
portions separated into a first element and a second element of the
character when the character is created;
create and display first-direction joint-bridge data
connecting the first element and the second element of the character
in a first direction or second-direction joint-bridge data connecting the first element and the second element of the character
in a second direction; and
receive a selection of a first element motion design or a
second element motion design corresponding to each of the central
joint-bridge data, the first direction joint-bridge data, or the
second-direction joint-bridge data from a user terminal, and match
the character.
12. The device of claim 11, wherein the joint-bridge is
disposed to overlap a blank area between the first element and the
second element of the character.
13. The device of claim 11 or claim 12, wherein in the
creating and displaying of the second-direction joint-bridge data, the processor creates and displays the first-direction joint-bridge data and the second-direction joint-bridge data by rotating the character based on a rotation axis.
14. A program stored in a computer-readable recording medium
in combination with a hardware device to execute the method for
automatically creating a cartoon image based on an input sentence
of any one of claims 1 to 5.
AU2021269326A 2021-07-29 2021-11-16 Device and method for automatically creating cartoon image based on input sentence Active AU2021269326B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KRKR10-2021-0099674 2021-07-29
KR1020210099674A KR20230018027A (en) 2021-07-29 2021-07-29 Apparatus and Method for Automatically Generating Cartoon Images based on Inputted Sentences

Publications (2)

Publication Number Publication Date
AU2021269326A1 AU2021269326A1 (en) 2023-02-16
AU2021269326B2 true AU2021269326B2 (en) 2023-11-23

Family

ID=85025206

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021269326A Active AU2021269326B2 (en) 2021-07-29 2021-11-16 Device and method for automatically creating cartoon image based on input sentence

Country Status (3)

Country Link
KR (1) KR20230018027A (en)
AU (1) AU2021269326B2 (en)
CA (1) CA3138790C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102608266B1 (en) * 2023-04-04 2023-11-30 주식회사 크림 Method and apparatus for generating image
KR102597074B1 (en) * 2023-05-19 2023-11-01 주식회사 툰스퀘어 (Toonsquare) Image generating device for generating 3d images corresponding to user input sentences and operation method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6069622A (en) * 1996-03-08 2000-05-30 Microsoft Corporation Method and system for generating comic panels
JP2008305171A (en) * 2007-06-07 2008-12-18 Sony Corp Automatic comic creator, automatic comic creating method, and automatic comic creating program
US20160027198A1 (en) * 2014-07-28 2016-01-28 PocketGems, Inc. Animated audiovisual experiences driven by scripts

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102558953B1 (en) 2018-08-29 2023-07-24 주식회사 케이티 Apparatus, method and user device for prividing customized character

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6069622A (en) * 1996-03-08 2000-05-30 Microsoft Corporation Method and system for generating comic panels
JP2008305171A (en) * 2007-06-07 2008-12-18 Sony Corp Automatic comic creator, automatic comic creating method, and automatic comic creating program
US20160027198A1 (en) * 2014-07-28 2016-01-28 PocketGems, Inc. Animated audiovisual experiences driven by scripts

Also Published As

Publication number Publication date
AU2021269326A1 (en) 2023-02-16
CA3138790A1 (en) 2023-01-29
KR20230018027A (en) 2023-02-07
CA3138790C (en) 2024-04-16

Similar Documents

Publication Publication Date Title
KR102577514B1 (en) Method, apparatus for text generation, device and storage medium
JP7073241B2 (en) Improved font recognition by dynamically weighting multiple deep learning neural networks
CN111191078B (en) Video information processing method and device based on video information processing model
CN110750959B (en) Text information processing method, model training method and related device
AU2021269326B2 (en) Device and method for automatically creating cartoon image based on input sentence
WO2019100319A1 (en) Providing a response in a session
CN109886180A (en) For overlapping the user interface of handwritten text input
KR101754093B1 (en) Personal records management system that automatically classify records
CN105580384A (en) Actionable content displayed on a touch screen
CN111985243B (en) Emotion model training method, emotion analysis device and storage medium
CN111444725B (en) Statement generation method, device, storage medium and electronic device
CN110610180A (en) Method, device and equipment for generating recognition set of wrongly-recognized words and storage medium
WO2021237227A1 (en) Method and system for multi-language text recognition model with autonomous language classification
Havelka et al. Age of acquisition in naming Japanese words
CN110634172A (en) Generating slides for presentation
CN111968624A (en) Data construction method and device, electronic equipment and storage medium
EP3961474A2 (en) Device and method for automatically creating cartoon image based on input sentence
JP2003196593A (en) Character recognizer, method and program for recognizing character
CN109710751A (en) Intelligent recommendation method, apparatus, equipment and the storage medium of legal document
Hu et al. Recognition of Kuzushi-Ji with deep learning method a case study of Kiritsubo chapter in the Tale of Genji
Gamage et al. Sinhala Sign Language Translation through Immersive 3D Avatars and Adaptive Learning
KR102660366B1 (en) Sign language assembly device and operation method thereof
Dapitilla Perin et al. EskayApp: An Eskaya-Latin Script OCR Transliteration e-Learning Android Application using Supervised Machine Learning
US20240070390A1 (en) Generating suggestions using extended reality
Riaz et al. Echoes in Silence: A Technological Leap for Pakistan Sign Language Translation and Recognition

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)