WO2017035971A1 - Method and device for generating emoticon - Google Patents

Method and device for generating emoticon

Info

Publication number
WO2017035971A1
Authority
WO
WIPO (PCT)
Prior art keywords
hand
graphic
drawn
user
filling
Prior art date
Application number
PCT/CN2015/096392
Other languages
French (fr)
Chinese (zh)
Inventor
郝冀宣 (Hao Jixuan)
Original Assignee
百度在线网络技术(北京)有限公司 (Baidu Online Network Technology (Beijing) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology (Beijing) Co., Ltd. (百度在线网络技术(北京)有限公司)
Publication of WO2017035971A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • The present invention relates to the field of input technologies, and in particular to a method and apparatus for generating emoticons.
  • Emoticons are an important way for users to express emotion during text input and can make the input process more interesting and vivid.
  • A text emoticon (颜文字) is one kind of expression; because it conveys the user's intent vividly in an input scene, it is widely used.
  • A method for generating an emoticon, comprising the following steps:
  • A generating apparatus for generating an emoticon, comprising the following devices:
  • an identification device, configured to identify graphic features of the hand-drawn graphic;
  • a first determining device, configured to determine one or more filling elements corresponding to the graphic features;
  • a screen-output device, configured to fill the hand-drawn graphic with the one or more filling elements according to the graphic features and to generate the corresponding emoticon for output to the screen (that is, for committing to the input field).
  • Compared with the prior art, the present invention has the following advantages:
  • the user can also independently select the filling elements that make up the emoticon, and the corresponding emoticon is generated on screen from those elements, further improving the user's input experience;
  • pre-processing is applied to the hand-drawn graphic before graphic-feature recognition, which improves recognition accuracy, so that the generated emoticon matches the user's hand-drawn input more closely and further improves the user experience;
  • FIG. 1 is a schematic diagram of a generating apparatus for generating emoticons according to one aspect of the present invention;
  • FIG. 2 shows an emoticon generated according to one embodiment of the present invention;
  • FIG. 3 shows an emoticon generated according to another embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a generating apparatus for generating emoticons according to a preferred embodiment of the present invention;
  • FIG. 5 is a schematic diagram of a generating apparatus for generating emoticons according to still another preferred embodiment of the present invention;
  • FIG. 6 is a flow chart of a method for generating emoticons according to another aspect of the present invention;
  • FIG. 7 is a flow chart of a method for generating emoticons according to a preferred embodiment of the present invention;
  • FIG. 8 is a flow chart of a method for generating emoticons according to another preferred embodiment of the present invention.
  • A "computer device", also referred to herein as a "computer", is an intelligent electronic device that performs predetermined processing, such as numerical and/or logical calculation, by running a predetermined program or instruction. It may include a processor and a memory, with the processor executing program instructions pre-stored in the memory to carry out the predetermined processing; alternatively, the predetermined processing may be carried out by hardware such as an ASIC, FPGA or DSP, or by a combination of the two.
  • The computer device includes user equipment and network equipment.
  • User equipment includes, but is not limited to, personal computers, notebook computers, tablets, smart phones, PDAs and the like.
  • Network equipment includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud of many computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • The computer device may operate on its own to implement the present invention, or it may access a network and implement the present invention by interacting with other computer devices in the network.
  • The network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN and the like.
  • The user equipment, network equipment and network described above are merely examples; other existing or future computer devices or networks, where applicable to the present invention, also fall within the scope of protection of the present invention and are incorporated herein by reference.
  • FIG. 1 shows a schematic diagram of a generating apparatus for generating emoticons according to one aspect of the present invention.
  • The generating device 1 includes an obtaining device 101, an identification device 102, a first determining device 103 and a screen-output device 104.
  • The generating device 1 may, for example, be located in a computer device, where the computer device includes user equipment and network equipment.
  • When the generating device 1 is located in network equipment, it communicates with the user equipment over the network: it obtains the hand-drawn graphic input by the user on the user equipment, identifies the graphic features of the hand-drawn graphic, determines one or more filling elements corresponding to those features, fills the hand-drawn graphic with the one or more filling elements according to the graphic features, and returns the generated emoticon to the user equipment, which commits it to the screen.
  • The obtaining device 101 obtains the hand-drawn graphic input by the user. Specifically, the user inputs a hand-drawn graphic by interacting with the user equipment. For example, when the user equipment is a mobile terminal such as a touch-screen phone, the user inputs a hand-drawn graphic in the hand-drawn input area of the phone by sliding a finger on the screen, drawing with a stylus, and so on; when the user equipment is a desktop or notebook computer, the user inputs a hand-drawn graphic in the hand-drawn input area of the device by moving the mouse or sliding a finger on the touch pad.
  • Preferably, the hand-drawn input area is displayed when triggered, for example when the user taps a corresponding button in the input application.
  • The obtaining device 101 obtains the hand-drawn graphic input by the user through interaction with the user equipment, such as one or more calls to an application program interface (API) provided by the input application, for example by reading the motion track of the stylus from a sensor; alternatively, the obtaining device 101 obtains the hand-drawn graphic by mapping the content drawn by the user.
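The acquisition step above can be pictured as collecting the pointer or stylus positions into an ordered list of points. The sketch below is a minimal illustration of that idea; the class and callback names (HandDrawnCapture, on_press, on_move, on_release) are illustrative assumptions, not part of the patent or of any specific input-method API.

```python
# Minimal sketch: collecting a hand-drawn stroke as a list of (x, y) points.
# The callback names are illustrative; a real input application would hook
# into its own touch or mouse event API.

class HandDrawnCapture:
    def __init__(self):
        self.strokes = []          # list of strokes, each a list of (x, y) points
        self._current = None

    def on_press(self, x, y):
        self._current = [(x, y)]   # start a new stroke

    def on_move(self, x, y):
        if self._current is not None:
            self._current.append((x, y))

    def on_release(self, x, y):
        if self._current is not None:
            self._current.append((x, y))
            self.strokes.append(self._current)
            self._current = None

# Example: simulate a user doodling a rough heart shape in one stroke.
capture = HandDrawnCapture()
capture.on_press(50, 20)
for x, y in [(30, 5), (10, 20), (50, 70), (90, 20), (70, 5)]:
    capture.on_move(x, y)
capture.on_release(50, 20)
print(len(capture.strokes), "stroke(s) captured")
```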
  • The identification device 102 identifies the graphic features of the hand-drawn graphic. Specifically, the identification device 102 performs feature extraction and selection on the hand-drawn graphic obtained by the obtaining device 101 to identify the graphic features that best reflect the essence of the graphic, for example the places where the contour curvature is greatest or the contour direction changes abruptly. For example, when the hand-drawn graphic obtained by the obtaining device 101 is a heart shape, the identification device 102 identifies, through feature extraction and selection, the graphic feature of the heart: two rounded acute angles oriented in the same direction.
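One plausible way to find the "places where the contour direction changes abruptly" described above is to measure the turning angle between successive segments of the captured stroke and report the points where that angle is large. The following is a hedged sketch of that idea in plain Python; the threshold value and the sample heart stroke are arbitrary, and the patent does not prescribe this particular algorithm.

```python
import math

def turning_angles(points):
    """Angle (degrees) between successive segments at each interior point."""
    angles = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a = math.atan2(y1 - y0, x1 - x0)
        b = math.atan2(y2 - y1, x2 - x1)
        d = math.degrees(abs(b - a))
        angles.append(min(d, 360 - d))        # handle wrap-around
    return angles

def find_corners(points, threshold=60.0):
    """Indices of points where the stroke direction changes sharply."""
    return [i + 1 for i, a in enumerate(turning_angles(points)) if a >= threshold]

# A coarse heart-shaped stroke; the printed indices mark the sharp turns.
heart = [(50, 20), (30, 5), (10, 20), (50, 70), (90, 20), (70, 5), (50, 20)]
print(find_corners(heart))
```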
  • The first determining device 103 determines one or more filling elements corresponding to the graphic features. Specifically, the ways in which the first determining device 103 determines the one or more filling elements corresponding to the graphic features identified by the identification device 102 include, but are not limited to, the following.
  • The elements stored in element library 1 include, but are not limited to, at least one of: symbols, expressions or custom pictures.
  • Symbols include, but are not limited to, Chinese punctuation marks, English punctuation marks, mathematical symbols, serial numbers, Greek letters, phonetic symbols, tab characters and the like.
  • Expressions include, but are not limited to, emoji, GIF expressions and the like.
  • Custom pictures include, but are not limited to, custom pictures uploaded by the user or by other users.
  • Element library 1 may be located on the user equipment or on a third-party device connected to the user equipment through a network. Further, element library 1 includes, but is not limited to, at least one of the following sub-libraries: a symbol library, an expression library or a custom picture library.
  • Preferably, the generating device 1 further includes an updating device (not shown) that obtains custom pictures uploaded by uploaders and builds or updates the element library.
  • Specifically, when the user or other users on the network upload custom pictures, the updating device obtains the uploaded pictures and stores them in the element library, thereby building or updating the element library.
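The element library and its update step could be represented with a very simple data structure. The sketch below assumes a dictionary with one sub-library per element type; the keys, the sample elements and the upload path are illustrative only.

```python
# Sketch of an element library with sub-libraries and an update step for
# uploaded custom pictures. Structure and names are illustrative assumptions.

element_library = {
    "symbols": [",", ".", "!", "-", "(", ")", "*"],
    "emoji":   ["\U0001F600", "\U0001F49B"],
    "custom":  [],             # identifiers of uploaded custom pictures
}

def add_custom_picture(library, picture_id):
    """Store an uploaded custom picture in the custom sub-library."""
    if picture_id not in library["custom"]:
        library["custom"].append(picture_id)

add_custom_picture(element_library, "uploads/user42/star.png")  # hypothetical path
print(element_library["custom"])
```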
  • Preferably, element library 2 can also be built or updated in the manner described above.
  • Element library 2 may be identical or partially identical to element library 1; element library 2 may also be a user-defined element library.
  • The screen-output device 104 fills the hand-drawn graphic with the one or more filling elements according to the graphic features and generates the corresponding emoticon for output to the screen. Specifically, according to the graphic features identified by the identification device 102, the screen-output device 104 fills the hand-drawn graphic with the one or more filling elements that the first determining device 103 determined through element-library matching, user selection or a combination of the two, and generates the corresponding emoticon on screen; that is, the hand-drawn graphic is rendered by the one or more filling elements.
  • The screen-output device 104 displays the generated emoticon directly on the screen of the user equipment, for example through one or more calls to dynamic page technologies such as JSP, ASP or PHP; alternatively, when the user performs a screen-output operation such as tapping or long-pressing the commit button, the screen-output device 104 displays the generated emoticon on the screen of the user equipment, for example in the user's input box.
  • For example, the identification device 102 identifies the main feature of the heart shape: two rounded acute angles oriented in the same direction.
  • The first determining device 103 determines one or more filling elements corresponding to the heart shape according to the graphic features, for example ",.!-".
  • The screen-output device 104 fills the heart shape with these elements and outputs it to the screen, generating the emoticon shown in FIG. 2; alternatively, the first determining device 103 determines the corresponding filling elements according to the user's selection, and the screen-output device 104 fills the heart shape with them and outputs it to the screen, generating the emoticon shown in FIG. 3.
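As an illustration of the filling step, the hand-drawn outline can be rasterised onto a small character grid and a filling element placed in every cell the outline passes through, yielding a text emoticon shaped like the doodle. The function below is one possible sketch of such a renderer, not the patent's prescribed implementation; the grid size and sample stroke are arbitrary.

```python
def render_emoticon(points, fill="*", cols=24, rows=12):
    """Rasterise a stroke onto a character grid, marking outline cells with `fill`."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    grid = [[" "] * cols for _ in range(rows)]

    def to_cell(x, y):
        cx = int((x - min(xs)) / (max(xs) - min(xs) + 1e-9) * (cols - 1))
        cy = int((y - min(ys)) / (max(ys) - min(ys) + 1e-9) * (rows - 1))
        return cx, cy

    # Walk each segment of the stroke and mark every cell it crosses.
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        steps = 40
        for s in range(steps + 1):
            t = s / steps
            cx, cy = to_cell(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
            grid[cy][cx] = fill
    return "\n".join("".join(row) for row in grid)

heart = [(50, 20), (30, 5), (10, 20), (50, 70), (90, 20), (70, 5), (50, 20)]
print(render_emoticon(heart, fill="*"))
```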
  • Here, the generating device 1 obtains the hand-drawn graphic input by the user, identifies its graphic features, determines one or more filling elements corresponding to those features, fills the hand-drawn graphic with the one or more filling elements according to the graphic features, and generates the corresponding emoticon for output to the screen. The user can thus create emoticons personally: a simple doodle is enough, and an emoticon with the same shape as the doodle is generated automatically, meeting the need for personalisation.
  • The user can also independently select the filling elements that make up the emoticon, and the corresponding emoticon is generated on screen from those elements, further improving the user's input experience.
  • Preferably, the generating device 1 further includes a pre-processing device 405.
  • This preferred embodiment is described in detail below.
  • The obtaining device 401 obtains the hand-drawn graphic input by the user; the pre-processing device 405 pre-processes the hand-drawn graphic to obtain a pre-processed hand-drawn graphic; the identification device 402 identifies the graphic features of the pre-processed hand-drawn graphic; the first determining device 403 determines one or more filling elements corresponding to the graphic features; and the screen-output device 404 fills the hand-drawn graphic with the one or more filling elements according to the graphic features and generates the corresponding emoticon for output to the screen.
  • The obtaining device 401, identification device 402, first determining device 403 and screen-output device 404 are the same as or substantially the same as the corresponding devices in FIG. 1 and are therefore not described again here.
  • The pre-processing device 405 pre-processes the hand-drawn graphic to obtain a pre-processed hand-drawn graphic. Specifically, the pre-processing device 405 applies pre-processing such as graphic enhancement, denoising and sharpening to the hand-drawn graphic obtained by the obtaining device 401, thereby obtaining the pre-processed hand-drawn graphic.
  • The pre-processing device 405 may further include a graphic enhancement unit (not shown), a graphic smoothing unit (not shown) and a graphic edge-processing unit (not shown).
  • The graphic enhancement unit enhances the hand-drawn graphic through graphic transformation;
  • the graphic smoothing unit smooths or filters the hand-drawn graphic by removing noise, which improves image quality and facilitates extraction of object features;
  • the edge-processing unit sharpens the edges of the hand-drawn graphic through graphic sharpening.
  • For example, the graphic smoothing unit can remove redundant points, helping to remove noise and repair the graphic;
  • the edge-processing unit helps to accurately identify the corners of the heart shape and restores the user's hand-drawn graphic with high fidelity.
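For stroke input, the denoising and smoothing mentioned above can be approximated by dropping near-duplicate points and applying a moving-average filter to the remaining points. The following is a minimal sketch under that assumption; the distance threshold and window size are illustrative.

```python
def remove_duplicates(points, min_dist=2.0):
    """Drop points too close to the previously kept point (redundant/noisy samples)."""
    kept = [points[0]]
    for x, y in points[1:]:
        px, py = kept[-1]
        if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 >= min_dist:
            kept.append((x, y))
    return kept

def smooth(points, window=3):
    """Simple moving-average smoothing of the stroke."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

noisy = [(50, 20), (50.4, 20.2), (30, 5), (10, 20), (50, 70), (90, 20), (70, 5), (50, 20)]
print(smooth(remove_duplicates(noisy)))
```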
  • After obtaining the hand-drawn graphic input by the user, the generating device 1 first pre-processes it and then performs graphic-feature recognition on the pre-processed graphic. This improves the accuracy of feature recognition, so that the generated emoticon matches the user's hand-drawn input more closely and further improves the user experience.
  • FIG. 5 is a schematic diagram of a generating apparatus for generating emoticons according to still another preferred embodiment of the present invention.
  • Here, the generating device 1 further includes a second determining device 506.
  • The obtaining device 501 obtains the hand-drawn graphic input by the user;
  • the identification device 502 identifies the graphic features of the hand-drawn graphic;
  • the first determining device 503 determines one or more filling elements corresponding to the graphic features;
  • the second determining device 506 determines the element density with which the hand-drawn graphic is filled with the filling elements; and the screen-output device 504 fills the hand-drawn graphic with the one or more filling elements according to the graphic features, in combination with the element density, and generates the corresponding emoticon for output to the screen.
  • The obtaining device 501, identification device 502, first determining device 503 and screen-output device 504 are the same as or substantially the same as the corresponding devices in FIG. 1 and are therefore not described again here; they are incorporated by reference.
  • The second determining device 506 determines the element density with which the hand-drawn graphic is filled with the filling elements.
  • The ways in which the second determining device 506 determines the element density include, but are not limited to, the following.
  • The element density may be determined automatically according to the graphic features and then adjusted by the user, so that the two together determine the final density.
  • For sharply curved parts of the graphic, the second determining device 506 automatically increases the element density of the filling elements; for gentle parts of the graphic, the second determining device 506 uses a uniform filling density.
  • Some users may not like the preset filling density and may set the density according to their own preference; when the element density determined by one method alone does not satisfy the user's needs, the two methods may be combined to determine the element density together.
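A simple way to realise the density rule sketched above (more elements where the outline turns sharply, with a user-adjustable scale on top of the preset) is a small heuristic function such as the one below. The formula and constants are illustrative assumptions; the patent leaves the exact rule open.

```python
def element_density(turn_angle, user_scale=1.0, base=1, extra=3):
    """
    More fill elements where the outline turns sharply, fewer on gentle parts.
    `user_scale` lets the user raise or lower the preset density.
    Illustrative heuristic only.
    """
    sharpness = min(turn_angle, 180.0) / 180.0      # 0 = straight, 1 = full reversal
    return max(1, round((base + extra * sharpness) * user_scale))

for angle in (10, 60, 120, 170):
    print(angle, "->", element_density(angle), "element(s) per cell")
```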
  • Here, the generating device 1 also determines the element density with which the user's hand-drawn graphic is filled with the filling elements and fills the graphic in combination with that density, further improving the match between the generated emoticon and the user's hand-drawn input and thus the user experience.
  • Preferably, the generating device 1 further includes a storage device (not shown) that stores the emoticon so that the user can select and use it directly the next time text is entered.
  • For example, the user saves the created emoticon by tapping or long-pressing the save button.
  • The storage device stores the emoticon, for example in the local character library of the user equipment or in a network character library connected to the user equipment through the network, so that it can be called directly the next time the user wants the same emoticon; alternatively, the storage device stores the emoticon in the element library as a custom picture; or the storage device uploads the emoticon to the network character library for use by other users.
  • Here, the generating device 1 also stores, transfers or uploads the user-generated emoticon, making it convenient for the user or other network users to use and further improving the user experience.
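Storing a generated emoticon in a local library for later reuse could be as simple as appending it to a small JSON file keyed by a user-chosen name. The sketch below assumes such a file-based local character library; the file name and function name are illustrative.

```python
import json
import pathlib

LOCAL_LIBRARY = pathlib.Path("local_emoticon_library.json")   # illustrative path

def save_emoticon(text, name):
    """Append a generated emoticon to a local library file for later reuse."""
    library = {}
    if LOCAL_LIBRARY.exists():
        library = json.loads(LOCAL_LIBRARY.read_text(encoding="utf-8"))
    library[name] = text
    LOCAL_LIBRARY.write_text(json.dumps(library, ensure_ascii=False, indent=2),
                             encoding="utf-8")

save_emoticon("(,.!-)", "my_heart")
```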
  • FIG. 6 shows a flow chart of a method for generating emoticons according to another aspect of the present invention.
  • The generating device 1 may, for example, be located in a computer device, where the computer device includes user equipment and network equipment.
  • When the generating device 1 is located in network equipment, it communicates with the user equipment over the network: it obtains the hand-drawn graphic input by the user on the user equipment, identifies the graphic features of the hand-drawn graphic, determines one or more filling elements corresponding to those features, fills the hand-drawn graphic with the one or more filling elements according to the graphic features, and returns the generated emoticon to the user equipment, which commits it to the screen.
  • In step 601, the generating device 1 obtains the hand-drawn graphic input by the user.
  • Specifically, the user inputs a hand-drawn graphic by interacting with the user equipment. For example, when the user equipment is a mobile terminal such as a touch-screen phone, the user inputs a hand-drawn graphic in the hand-drawn input area of the phone by sliding a finger or drawing with a stylus; when the user equipment is a desktop or notebook computer, the user inputs a hand-drawn graphic in the hand-drawn input area of the device by moving the mouse or sliding a finger on the touch pad.
  • Preferably, the hand-drawn input area is displayed when triggered, for example when the user taps a corresponding button in the input application.
  • In step 601, the generating device 1 obtains the hand-drawn graphic input by the user through interaction with the user equipment, such as one or more calls to an application program interface (API) provided by the input application, for example by reading the motion track of the stylus from a sensor; alternatively, in step 601 the generating device 1 obtains the hand-drawn graphic by mapping the content drawn by the user.
  • In step 602, the generating device 1 identifies the graphic features of the hand-drawn graphic. Specifically, in step 602 the generating device 1 performs feature extraction and selection on the hand-drawn graphic obtained in step 601 to identify the graphic features that best reflect its essence, for example the places where the contour curvature is greatest or where the contour direction changes abruptly. For example, when the hand-drawn graphic obtained in step 601 is a heart shape, the generating device 1 identifies, in step 602, the graphic feature of the heart: two rounded acute angles oriented in the same direction.
  • In step 603, the generating device 1 determines one or more filling elements corresponding to the graphic features. Specifically, the ways in which the generating device 1 determines, in step 603, the one or more filling elements corresponding to the graphic features identified in step 602 include, but are not limited to, the following.
  • The elements stored in element library 1 include, but are not limited to, at least one of: symbols, expressions or custom pictures.
  • Symbols include, but are not limited to, Chinese punctuation marks, English punctuation marks, mathematical symbols, serial numbers, Greek letters, phonetic symbols, tab characters and the like.
  • Expressions include, but are not limited to, emoji, GIF expressions and the like.
  • Custom pictures include, but are not limited to, custom pictures uploaded by the user or by other users.
  • Element library 1 may be located on the user equipment or on a third-party device connected to the user equipment through a network. Further, element library 1 includes, but is not limited to, at least one of the following sub-libraries: a symbol library, an expression library or a custom picture library.
  • Preferably, step 603 further includes a sub-step 6031 (not shown).
  • In sub-step 6031, the generating device 1 obtains custom pictures uploaded by uploaders and builds or updates the element library.
  • Specifically, when the user or other users on the network upload custom pictures, the generating device 1 obtains the uploaded pictures and stores them in the element library, thereby building or updating the element library.
  • Preferably, element library 2 can also be built or updated in the manner described above.
  • Element library 2 may be identical or partially identical to element library 1; element library 2 may also be a user-defined element library.
  • In step 604, the generating device 1 fills the hand-drawn graphic with the one or more filling elements according to the graphic features and generates the corresponding emoticon for output to the screen. Specifically, in step 604 and according to the graphic features identified in step 602, the generating device 1 fills the hand-drawn graphic with the one or more filling elements determined through element-library matching, user selection or a combination of the two, generating the corresponding emoticon on screen; that is, the hand-drawn graphic is rendered by the one or more filling elements.
  • In step 604, the generating device 1 displays the generated emoticon directly on the screen of the user equipment, for example through one or more calls to dynamic page technologies such as JSP, ASP or PHP; alternatively, in step 604, when the user performs a screen-output operation such as tapping or long-pressing the commit button, the generating device 1 displays the generated emoticon on the screen of the user equipment, for example in the user's input box.
  • For example, the main feature of the heart shape identified in step 602 is: two rounded acute angles oriented in the same direction.
  • In step 603, the generating device 1 determines one or more filling elements corresponding to the heart shape according to the graphic features, for example ",.!-".
  • In step 604, the generating device 1 fills the heart shape with these elements and outputs it to the screen, generating the emoticon shown in FIG. 2; alternatively, in step 603 the generating device 1 determines the corresponding filling elements according to the user's selection, fills the heart shape with them and outputs it to the screen, generating the emoticon shown in FIG. 3.
  • Here, through steps 601-604, the generating device 1 obtains the hand-drawn graphic input by the user, identifies its graphic features, determines one or more filling elements corresponding to those features, fills the hand-drawn graphic with the one or more filling elements according to the graphic features, and generates the corresponding emoticon for output to the screen. The user can create emoticons personally: a simple doodle is enough, and an emoticon with the same shape as the doodle is generated automatically, meeting the need for personalisation.
  • The user can also independently select the filling elements that make up the emoticon, and the corresponding emoticon is generated on screen from those elements, further improving the user's input experience.
  • FIG. 7 illustrates a flow chart of a method for generating emoticons according to a preferred embodiment of the present invention.
  • In step 701, the generating device 1 obtains the hand-drawn graphic input by the user; in step 705, the generating device 1 pre-processes the hand-drawn graphic to obtain a pre-processed hand-drawn graphic; in step 702, the generating device 1 identifies the graphic features of the pre-processed hand-drawn graphic; in step 703, the generating device 1 determines one or more filling elements corresponding to the graphic features; in step 704, the generating device 1 fills the hand-drawn graphic with the one or more filling elements according to the graphic features and generates the corresponding emoticon for output to the screen.
  • Steps 701 and 702-704 are the same as or substantially the same as the corresponding steps shown in FIG. 6 and are therefore not described again here; they are incorporated by reference.
  • In step 705, the generating device 1 pre-processes the hand-drawn graphic to obtain a pre-processed hand-drawn graphic. Specifically, in step 705 the generating device 1 applies pre-processing such as graphic enhancement, denoising and sharpening to the hand-drawn graphic obtained in step 701, thereby obtaining the pre-processed hand-drawn graphic.
  • Preferably, step 705 may also include a sub-step 7051 (not shown), a sub-step 7052 (not shown) and a sub-step 7053 (not shown).
  • In sub-step 7051, the generating device 1 enhances the hand-drawn graphic through graphic transformation;
  • in sub-step 7052, the generating device 1 smooths or filters the hand-drawn graphic by removing noise, which helps to improve image quality;
  • in sub-step 7053, the generating device 1 sharpens the edges of the hand-drawn graphic through graphic sharpening.
  • For example, when the hand-drawn graphic input by the user is a heart shape, the graphic smoothing unit can remove redundant points, helping to remove noise and repair the graphic, and the graphic edge-processing unit helps to accurately identify the corners of the heart and restores the user's hand-drawn graphic with high fidelity.
  • Here, after step 701 the pre-processing of step 705 is performed first, and graphic-feature recognition is then performed on the pre-processed hand-drawn graphic, which improves the accuracy of feature recognition, so that the generated emoticon matches the user's hand-drawn input more closely and further improves the user experience.
  • FIG. 8 illustrates a flow chart of a method for generating emoticons according to yet another preferred embodiment of the present invention.
  • In step 801, the generating device 1 obtains the hand-drawn graphic input by the user; in step 802, the generating device 1 identifies the graphic features of the hand-drawn graphic; in step 803, the generating device 1 determines one or more filling elements corresponding to the graphic features; in step 806, the generating device 1 determines the element density with which the hand-drawn graphic is filled with the filling elements; in step 804, the generating device 1 fills the hand-drawn graphic with the one or more filling elements according to the graphic features, in combination with the element density, and generates the corresponding emoticon for output to the screen.
  • Steps 801-803 are the same as or substantially the same as the corresponding steps in FIG. 6 and are therefore not described again here; they are incorporated by reference.
  • In step 806, the generating device 1 determines the element density with which the hand-drawn graphic is filled with the filling elements.
  • The ways in which the generating device 1 determines the element density include, but are not limited to, the following.
  • The element density may be determined automatically according to the graphic features and then adjusted by the user, so that the two together determine the final density.
  • For sharply curved parts of the graphic, the generating device 1 automatically increases the element density of the filling elements; for gentle parts of the graphic, the generating device 1 uses a uniform filling density. Some users may not like the preset filling density and may set the density according to their own preference; when the element density determined by one method alone does not satisfy the user's needs, the two methods may be combined to determine the element density together.
  • Here, the method for generating emoticons also determines the element density with which the user's hand-drawn graphic is filled with the filling elements and fills the graphic in combination with that density, further improving the match between the generated emoticon and the user's hand-drawn input and thus the user experience.
  • Preferably, the method for generating emoticons further includes a step 807 (not shown), in which the generating device 1 stores the emoticon so that the user can select and use it directly the next time text is entered.
  • For example, the user saves the created emoticon by tapping or long-pressing the save button.
  • In step 807, the generating device 1 stores the emoticon, for example in the local character library of the user equipment or in a network character library connected to the user equipment through the network, so that it can be called directly the next time the user wants the same emoticon; alternatively, in step 807 the generating device 1 stores the emoticon in the element library as a custom picture; or, in step 807, the generating device 1 uploads the emoticon to the network character library for use by other users.
  • Step 807 enables the generating device 1 to store, transfer or upload the user-generated emoticon, making it convenient for the user or other network users to use and further improving the user experience.
  • It should be noted that the present invention can be implemented in software and/or a combination of software and hardware.
  • For example, the devices of the present invention can be implemented using an application-specific integrated circuit (ASIC) or any other similar hardware device.
  • The software program of the present invention may be executed by a processor to implement the steps or functions described above.
  • The software program of the present invention (including related data structures) can be stored in a computer-readable recording medium, such as RAM, a magnetic or optical drive, a floppy disk and the like.
  • In addition, some of the steps or functions of the present invention may be implemented in hardware, for example as a circuit that cooperates with a processor to perform the various steps or functions.
  • A method for generating an emoticon, comprising the following steps:
  • step c comprises:
  • step c comprises:
  • step d includes:
  • step b comprises:
  • step d includes:
  • A generating apparatus for generating an emoticon, wherein the generating apparatus comprises:
  • an identification device, configured to identify graphic features of the hand-drawn graphic;
  • a first determining device, configured to determine one or more filling elements corresponding to the graphic features;
  • a screen-output device, configured to fill the hand-drawn graphic with the one or more filling elements according to the graphic features and to generate the corresponding emoticon for output to the screen.
  • an updating device, configured to obtain custom pictures uploaded by uploaders and to build or update the element library.
  • the screen-output device is configured to:
  • a pre-processing device, configured to pre-process the hand-drawn graphic to obtain a pre-processed hand-drawn graphic;
  • the identification device is configured to:
  • a second determining device, configured to determine the element density with which the hand-drawn graphic is filled with the filling elements;
  • the screen-output device is configured to:
  • a storage device, configured to store the emoticon so that the user can select and use it directly the next time text is entered.
  • A computer-readable storage medium storing computer code, wherein when the computer code is executed, the method of any one of claims 1 to 8 is performed.
  • A computer device comprising a memory and a processor, the memory storing computer code, the processor being configured to execute the computer code so as to perform the method of any one of claims 1 to 8.

Abstract

A method and device for generating an emoticon. The method includes: obtaining a hand-drawn graphic input by a user (601); identifying graphic features of the hand-drawn graphic (602); determining one or more filling elements corresponding to the graphic features (603); and, according to the graphic features, filling the hand-drawn graphic with the one or more filling elements and generating the corresponding emoticon on screen (604). Compared with the prior art, the method and device allow a user to create emoticons personally: a simple doodle is enough, and an emoticon with the same shape as the doodle is generated automatically, satisfying the need for personalisation.

Description

Method and device for generating emoticons

Cross-Reference to Related Applications

This application claims priority to Chinese Patent Application No. 201510548856.7, filed on August 31, 2015 and entitled "Method and device for generating emoticons", the contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the field of input technologies, and in particular to a method and apparatus for generating emoticons.

Background

Emoticons are an important way for users to express emotion during text input and can make the input process more interesting and vivid. A text emoticon (颜文字) is one kind of expression; because it conveys the user's intent vividly in an input scene, it is widely used.

In existing input-method designs, a variety of emoticons are built in for the user, but these emoticons are preset, and it is difficult for a user to find one that suits his or her own needs. In addition, everyone uses the same emoticons, which lack personality; the ways of expressing emotion are limited, uniform and short on novelty. Moreover, existing emoticons take a single form, composed only of simple symbols or lines, so their constituent elements are monotonous.

Therefore, how to generate emoticons based on user needs and improve the input experience has become one of the problems that those skilled in the art urgently need to solve.
Summary of the Invention

It is an object of the present invention to provide a method and apparatus for generating emoticons.

According to one aspect of the present invention, a method for generating emoticons is provided, the method comprising the following steps:

a. obtaining a hand-drawn graphic input by a user;

b. identifying graphic features of the hand-drawn graphic;

c. determining one or more filling elements corresponding to the graphic features;

d. filling the hand-drawn graphic with the one or more filling elements according to the graphic features, and generating the corresponding emoticon for output to the screen.
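Read together, steps a to d describe a short pipeline. The sketch below strings the steps together in a deliberately simplified form to show the data flow only: the feature detection, library matching and filling here are stand-ins for the fuller sketches given elsewhere in this text, and every name in it is illustrative rather than taken from the patent.

```python
def generate_emoticon(points, element_library, user_choice=None):
    """points: the hand-drawn stroke captured in step a, as (x, y) tuples."""
    # Step b: identify a single coarse graphic feature - is the outline closed or open?
    feature = "closed outline" if points[0] == points[-1] else "open stroke"

    # Step c: determine the fill element by library matching, unless the user
    # selected one directly.
    element = user_choice or element_library.get(feature, "*")

    # Step d: fill the graphic with the element and return the emoticon text.
    # The outline is collapsed to one row of characters as a stand-in for a
    # real two-dimensional rendering.
    return element * max(3, len(points) - 1)

library = {"closed outline": "\u2764", "open stroke": "-"}   # illustrative mapping
heart = [(50, 20), (30, 5), (10, 20), (50, 70), (90, 20), (70, 5), (50, 20)]
print(generate_emoticon(heart, library))   # prints the heart element six times
```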
According to another aspect of the present invention, a generating apparatus for generating emoticons is also provided, the apparatus comprising:

an obtaining device, configured to obtain a hand-drawn graphic input by a user;

an identification device, configured to identify graphic features of the hand-drawn graphic;

a first determining device, configured to determine one or more filling elements corresponding to the graphic features;

a screen-output device, configured to fill the hand-drawn graphic with the one or more filling elements according to the graphic features and to generate the corresponding emoticon for output to the screen.

Compared with the prior art, the present invention has the following advantages:

1) the user can create emoticons personally: a simple doodle is enough, and an emoticon with the same shape as the doodle is generated automatically, meeting the need for personalisation;

2) the user can also independently select the filling elements that make up the emoticon, and the corresponding emoticon is generated on screen from those elements, further improving the user's input experience;

3) after the hand-drawn graphic input by the user is obtained, it is first pre-processed and graphic-feature recognition is then performed on the pre-processed graphic, which improves recognition accuracy, so that the generated emoticon matches the user's hand-drawn input more closely and further improves the user experience;

4) the element density with which the user's hand-drawn graphic is filled with the filling elements is determined, and the graphic is filled in combination with that density, further improving the match between the generated emoticon and the user's hand-drawn input and improving the user experience;

5) the user-generated emoticon is stored, transferred or uploaded, making it convenient for the user or other network users to use and further improving the user experience.
Brief Description of the Drawings

Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:

FIG. 1 is a schematic diagram of a generating apparatus for generating emoticons according to one aspect of the present invention;

FIG. 2 shows an emoticon generated according to one embodiment of the present invention;

FIG. 3 shows an emoticon generated according to another embodiment of the present invention;

FIG. 4 is a schematic diagram of a generating apparatus for generating emoticons according to a preferred embodiment of the present invention;

FIG. 5 is a schematic diagram of a generating apparatus for generating emoticons according to still another preferred embodiment of the present invention;

FIG. 6 is a flow chart of a method for generating emoticons according to another aspect of the present invention;

FIG. 7 is a flow chart of a method for generating emoticons according to a preferred embodiment of the present invention;

FIG. 8 is a flow chart of a method for generating emoticons according to another preferred embodiment of the present invention.

The same or similar reference numerals in the drawings denote the same or similar components.

Detailed Description
Before the exemplary embodiments are discussed in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flow charts. Although a flow chart describes the operations as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations can be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a sub-program and so on.

A "computer device", also referred to herein as a "computer", is an intelligent electronic device that performs predetermined processing, such as numerical and/or logical calculation, by running a predetermined program or instruction. It may include a processor and a memory, with the processor executing program instructions pre-stored in the memory to carry out the predetermined processing; alternatively, the predetermined processing may be carried out by hardware such as an ASIC, FPGA or DSP, or by a combination of the two.

The computer device includes user equipment and network equipment. User equipment includes, but is not limited to, personal computers, notebook computers, tablets, smart phones, PDAs and the like; network equipment includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud of many computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The computer device may operate on its own to implement the present invention, or it may access a network and implement the present invention by interacting with other computer devices in the network. The network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN and the like.

It should be noted that the user equipment, network equipment and network described above are merely examples; other existing or future computer devices or networks, where applicable to the present invention, also fall within the scope of protection of the present invention and are incorporated herein by reference.

The methods discussed below (some of which are illustrated by flow charts) may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments that perform the necessary tasks may be stored in a machine-readable or computer-readable medium, such as a storage medium. One or more processors may perform the necessary tasks.

The specific structural and functional details disclosed herein are merely representative and are for the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms and should not be construed as being limited only to the embodiments set forth herein.

It should be understood that although the terms "first", "second" and so on may be used herein to describe various units, these units should not be limited by these terms. These terms are used only to distinguish one unit from another. For example, a first unit could be termed a second unit, and similarly a second unit could be termed a first unit, without departing from the scope of the exemplary embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

It should be understood that when a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intermediate units may be present. In contrast, when a unit is referred to as being "directly connected" or "directly coupled" to another unit, there are no intermediate units. Other words used to describe the relationship between units should be interpreted in a similar manner (for example "between" versus "directly between", "adjacent to" versus "directly adjacent to", and so on).

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" are intended to include the plural as well. It should also be understood that the terms "comprising" and/or "including", as used herein, specify the presence of the stated features, integers, steps, operations, units and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.

It should also be noted that, in some alternative implementations, the functions/acts noted may occur in an order different from that indicated in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending on the functions/acts involved.

The present invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of a generating apparatus for generating emoticons according to one aspect of the present invention. The generating device 1 includes an obtaining device 101, an identification device 102, a first determining device 103 and a screen-output device 104.

Here, the generating device 1 may, for example, be located in a computer device, where the computer device includes user equipment and network equipment. When the generating device 1 is located in network equipment, it communicates with the user equipment over the network: it obtains the hand-drawn graphic input by the user on the user equipment, identifies the graphic features of the hand-drawn graphic, determines one or more filling elements corresponding to those features, fills the hand-drawn graphic with the one or more filling elements according to the graphic features, and returns the generated emoticon to the user equipment, which commits it to the screen.

The following description takes the case where the generating device 1 is located in the user equipment as an example.

The obtaining device 101 obtains the hand-drawn graphic input by the user. Specifically, the user inputs a hand-drawn graphic by interacting with the user equipment. For example, when the user equipment is a mobile terminal such as a touch-screen phone, the user inputs a hand-drawn graphic in the hand-drawn input area of the phone by sliding a finger on the screen, drawing with a stylus, and so on; when the user equipment is a desktop or notebook computer, the user inputs a hand-drawn graphic in the hand-drawn input area of the device by moving the mouse or sliding a finger on the touch pad. Preferably, the hand-drawn input area is displayed when triggered, for example when the user taps a corresponding button in the input application. The obtaining device 101 obtains the hand-drawn graphic input by the user through interaction with the user equipment, such as one or more calls to an application program interface (API) provided by the input application, for example by reading the motion track of the stylus from a sensor; alternatively, the obtaining device 101 obtains the hand-drawn graphic by mapping the content drawn by the user.

Those skilled in the art will understand that the above ways of obtaining a hand-drawn graphic are merely examples; other existing or future ways of obtaining hand-drawn graphics, where applicable to the present invention, also fall within the scope of protection of the present invention and are incorporated herein by reference.

The identification device 102 identifies the graphic features of the hand-drawn graphic. Specifically, the identification device 102 performs feature extraction and selection on the hand-drawn graphic obtained by the obtaining device 101 to identify the graphic features that best reflect the essence of the graphic, for example the places where the contour curvature is greatest or the contour direction changes abruptly. For example, when the hand-drawn graphic obtained by the obtaining device 101 is a heart shape, the identification device 102 identifies, through feature extraction and selection, the graphic feature of the heart: two rounded acute angles oriented in the same direction.

Those skilled in the art will understand that the above way of identifying graphic features is merely an example; other existing or future ways of identifying graphics, where applicable to the present invention, also fall within the scope of protection of the present invention and are incorporated herein by reference.

The first determining device 103 determines one or more filling elements corresponding to the graphic features. Specifically, the ways in which the first determining device 103 determines the one or more filling elements corresponding to the graphic features identified by the identification device 102 include, but are not limited to, the following.
1) According to the identified graphic features, a matching query is performed in element library 1 to obtain one or more filling elements corresponding to the graphic features. For example, for the graphic feature "arc", the matched filling element is the symbol "(" or ")"; for the graphic feature "straight line", the matched filling element is the symbol "!".
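A matching query of this kind can be pictured as a lookup table from recognised graphic features to candidate filling elements. The sketch below uses only the "arc" and "straight line" examples given above plus an invented "sharp corner" entry and a fallback; the table contents are illustrative assumptions, not the patent's actual mapping.

```python
# Illustrative feature-to-element mapping for the matching query.
FEATURE_TO_ELEMENTS = {
    "arc":           ["(", ")"],
    "straight line": ["!"],
    "sharp corner":  ["v", "^"],   # invented entry for illustration
}

def match_elements(features):
    """Return the filling elements matched for the recognised graphic features."""
    matched = []
    for f in features:
        matched.extend(FEATURE_TO_ELEMENTS.get(f, []))
    return matched or ["."]        # fall back to a neutral element

print(match_elements(["arc", "straight line"]))   # ['(', ')', '!']
```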
在此,元素库1中存储的元素包括但不限于以下至少任一项:符号、表情或者自定义图片。符号包括但不限于中文标点符、英文标 点符、数学符号、序号、希俄符号、注音符号、制表符号等。表情包括但不限于emoji表情、gif表情等。自定义图片包括但不限于该用户所上传的、其他用户所上传的自定义图片。Here, the elements stored in the element library 1 include, but are not limited to, at least one of the following: a symbol, an expression, or a custom picture. Symbols include but are not limited to Chinese punctuation marks, English standards Dot, mathematical symbol, serial number, Greek symbol, phonetic symbol, tab symbol, etc. Expressions include but are not limited to emoji expressions, gif expressions, and the like. Custom images include, but are not limited to, custom images uploaded by other users and uploaded by other users.
在此,该元素库1可以位于该用户设备中,也可以位于与该用户设备通过网络相连接的第三方设备中。进一步地,元素库1包括但不限于以下至少任一个子库:符号库、表情库或自定义图片库。Here, the element library 1 may be located in the user equipment or in a third party device connected to the user equipment through a network. Further, the element library 1 includes, but is not limited to, at least one of the following sub-libraries: a symbol library, an expression library, or a custom picture library.
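As a rough illustration of the matching query described above, element library 1 could be modeled as a simple mapping from feature labels to candidate fill elements. The labels, symbols, and fallback behavior below are illustrative assumptions only; a real library could also hold expressions or custom pictures and could live on a remote server.

```python
# Illustrative element library 1: feature label -> candidate fill elements.
ELEMENT_LIBRARY_1 = {
    "arc":         ["(", ")"],
    "line":        ["!"],
    "sharp_angle": [",", ".", "!", "-"],
}

def match_fill_elements(features, library=ELEMENT_LIBRARY_1):
    """Return the fill elements matched for a list of feature labels."""
    matched = []
    for feature in features:
        matched.extend(library.get(feature, []))
    return matched or ["*"]  # fall back to a neutral element if nothing matches

# Usage example.
print(match_fill_elements(["arc", "line"]))  # ['(', ')', '!']
```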
Preferably, the generating device 1 further includes an updating device (not shown), which acquires custom pictures uploaded by uploaders and builds or updates the element library.

Specifically, the user or other users on the network side upload custom pictures; the updating device acquires the custom pictures uploaded by these uploaders and stores them in the element library, thereby building or updating the element library.

2) Acquiring one or more fill elements selected by the user as corresponding to the hand-drawn graphic; for example, the user autonomously selects the one or more fill elements to be used from element library 2.

Preferably, element library 2 may also be built or updated in the manner described above.

3) Combining 1) and 2) above. For example, after a matching query is performed in element library 1 according to the identified graphic features to obtain one or more fill elements corresponding to the graphic features, the user further selects some fill elements from element library 2 as appropriate, and the fill elements to be used are determined by combining the two; alternatively, after a matching query is performed in element library 1 according to the identified graphic features to obtain one or more fill elements corresponding to the graphic features, the user selects the desired fill elements from among the one or more fill elements.

Here, element library 2 may be identical or partially identical to element library 1; element library 2 may also be a user-defined element library.

Those skilled in the art should understand that the above ways of determining one or more fill elements corresponding to the graphic features are only examples; other existing or future methods of determining one or more fill elements corresponding to the graphic features, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference. Those skilled in the art should also understand that the above fill elements are only examples; other existing or future fill elements, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.
The upper screen device 104 fills the hand-drawn graphic with the one or more fill elements according to the graphic features and generates the corresponding emoticon to be put on the screen. Specifically, according to the graphic features identified by the identification device 102, the upper screen device 104 fills the hand-drawn graphic with the one or more fill elements determined by the first determining device 103 through element-library matching, through the user's selection, or through a combination of both, and generates the corresponding emoticon to be put on the screen, i.e., the hand-drawn graphic is displayed with the one or more fill elements. The upper screen device 104 displays the generated emoticon directly on the screen of the user equipment, for example by one or more calls to a dynamic page technology such as JSP, ASP, or PHP; alternatively, when the user performs an on-screen operation by clicking or long-pressing an on-screen button or the like, the upper screen device 104 displays the generated emoticon on the screen of the user equipment, for example in the user's input box.

Those skilled in the art should understand that the above ways of putting the emoticon on the screen are only examples; other existing or future ways of putting it on the screen, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.
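Purely as a sketch of the filling step described above, the stroke could be rasterized onto a character grid and the chosen fill elements placed at the covered cells. The grid size and the cycling of elements are assumptions; a fuller implementation would also interpolate between consecutive samples and take the element density into account (see the FIG. 5 embodiment below).

```python
import itertools

def render_emoticon(stroke, fill_elements, cols=24, rows=12):
    """Rasterize a polyline onto a character grid, cycling the fill elements.

    Returns the emoticon as a multi-line string ready to be put on the screen.
    Sparse strokes would need interpolation between samples; omitted here.
    """
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    span_x = (max_x - min_x) or 1.0
    span_y = (max_y - min_y) or 1.0

    grid = [[" "] * cols for _ in range(rows)]
    elements = itertools.cycle(fill_elements)  # assumes at least one element
    for x, y in stroke:
        c = int((x - min_x) / span_x * (cols - 1))
        r = int((y - min_y) / span_y * (rows - 1))
        if grid[r][c] == " ":
            grid[r][c] = next(elements)
    return "\n".join("".join(row) for row in grid)
```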
The operation of the above apparatus is described in detail below with reference to an embodiment.

For example, assume that the hand-drawn graphic acquired by the acquiring device 101 is a heart shape, and the identification device 102 identifies the main features of the heart shape: two curved acute angles oriented in the same direction. The first determining device 103 determines, according to the graphic features, one or more fill elements corresponding to the heart shape, for example ",.!-". The upper screen device 104 fills the heart-shaped pattern with these elements and puts it on the screen, generating the emoticon shown in FIG. 2; alternatively, the first determining device 103 determines the corresponding fill elements according to the user's selection, and the upper screen device 104 fills the heart-shaped pattern with these elements and puts it on the screen, generating the emoticon shown in FIG. 3.

Here, the generating device 1 acquires the hand-drawn graphic input by the user, identifies the graphic features of the hand-drawn graphic, determines one or more fill elements corresponding to the graphic features, fills the hand-drawn graphic with the one or more fill elements according to the graphic features, and generates the corresponding emoticon to be put on the screen. The user can thus create emoticons himself: a simple doodle is enough to automatically generate an emoticon with the same shape as the doodle, meeting the need for personalization.

Further, the user can also autonomously select the fill elements that make up the emoticon, and the corresponding emoticon is generated and put on the screen based on those fill elements, further improving the user's input experience.
FIG. 4 is a schematic diagram of a generating device for generating an emoticon according to a preferred embodiment of the present invention. The generating device 1 further includes a preprocessing device 405. This preferred embodiment is described in detail below. Specifically, the acquiring device 401 acquires a hand-drawn graphic input by a user; the preprocessing device 405 preprocesses the hand-drawn graphic to obtain a preprocessed hand-drawn graphic; the identification device 402 identifies graphic features of the preprocessed hand-drawn graphic; the first determining device 403 determines one or more fill elements corresponding to the graphic features; and the upper screen device 404 fills the hand-drawn graphic with the one or more fill elements according to the graphic features and generates the corresponding emoticon to be put on the screen. The acquiring device 401, the identification device 402, the first determining device 403, and the upper screen device 404 are identical or substantially identical to the corresponding devices in FIG. 1, so they are not described again here and are incorporated herein by reference.

The preprocessing device 405 preprocesses the hand-drawn graphic to obtain a preprocessed hand-drawn graphic. Specifically, the preprocessing device 405 performs preprocessing such as graphic enhancement, denoising, and graphic sharpening on the hand-drawn graphic acquired by the acquiring device 401, thereby obtaining the preprocessed hand-drawn graphic.

Preferably, the preprocessing device 405 may further include a graphic enhancement unit (not shown), a graphic smoothing unit (not shown), and a graphic edge processing unit (not shown). The graphic enhancement unit enhances the hand-drawn graphic through graphic transformation; the graphic smoothing unit smooths or filters the hand-drawn graphic by removing noise points, which helps improve image quality and facilitates the extraction of object features; the graphic edge processing unit sharpens the edges of the hand-drawn graphic through graphic sharpening. For example, when the hand-drawn graphic input by the user is a heart shape, if the user accidentally adds an extra dot next to the drawn heart, the graphic smoothing unit can remove the extra dot, helping to remove noise and repair the graphic; the graphic edge processing unit helps accurately identify the corner of the heart, restoring the user's hand-drawn graphic with high fidelity.

Here, after acquiring the hand-drawn graphic input by the user, the generating device 1 first preprocesses it and then performs graphic feature recognition on the preprocessed hand-drawn graphic, which improves the accuracy of graphic feature recognition, makes the generated emoticon match the hand-drawn graphic input by the user more closely, and further improves the user experience.
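As an illustration of the smoothing behavior described above (removing an accidental stray dot and filtering noise), the following hedged sketch shows one possible approach on a sampled stroke; the distance threshold and window size are assumptions, not values from the disclosure.

```python
def remove_stray_points(stroke, max_jump=50.0):
    """Drop points far from both neighbours (e.g. an accidental extra dot)."""
    if len(stroke) < 3:
        return list(stroke)

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    cleaned = [stroke[0]]
    for prev, cur, nxt in zip(stroke, stroke[1:], stroke[2:]):
        if dist(prev, cur) < max_jump or dist(cur, nxt) < max_jump:
            cleaned.append(cur)
    cleaned.append(stroke[-1])
    return cleaned

def smooth(stroke, window=3):
    """Simple moving-average smoothing of the stroke."""
    half = window // 2
    out = []
    for i in range(len(stroke)):
        nbrs = stroke[max(0, i - half): i + half + 1]
        out.append((sum(p[0] for p in nbrs) / len(nbrs),
                    sum(p[1] for p in nbrs) / len(nbrs)))
    return out
```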
FIG. 5 is a schematic diagram of a generating device for generating an emoticon according to yet another preferred embodiment of the present invention. The generating device 1 further includes a second determining device 506. This preferred embodiment is described in detail below. Specifically, the acquiring device 501 acquires a hand-drawn graphic input by a user; the identification device 502 identifies graphic features of the hand-drawn graphic; the first determining device 503 determines one or more fill elements corresponding to the graphic features; the second determining device 506 determines the element density with which the hand-drawn graphic is filled with the fill elements; and the upper screen device 504 fills the hand-drawn graphic with the one or more fill elements according to the graphic features and in combination with the element density, generating the corresponding emoticon to be put on the screen. The acquiring device 501, the identification device 502, the first determining device 503, and the upper screen device 504 are identical or substantially identical to the corresponding devices in FIG. 1, so they are not described again here and are incorporated herein by reference.

The second determining device 506 determines the element density with which the hand-drawn graphic is filled with the fill elements. Specifically, the ways in which the second determining device 506 determines the element density include, but are not limited to, the following:

1) automatically determining the element density according to the graphic features;

2) acquiring the element density determined by the user according to personal preference;

3) after the element density is automatically determined according to the graphic features, the user makes adjustments on that basis, and the element density is determined jointly.

For example, at places where the contour curvature of the graphic is large or changes abruptly, the second determining device 506 automatically increases the element density of the fill elements in order to represent the corner or the change in the graphic smoothly; at gentle parts of the graphic, the second determining device 506 uses a uniform fill density. Some users may dislike the preset fill density and can set the fill density according to their own perception; when the element density determined by either method alone does not meet the user's needs, the two methods can be combined to determine a single element density.

Those skilled in the art should understand that the above ways of determining the element density are only examples; other existing or future ways of determining the element density, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.
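One conceivable realization of curvature-dependent element density, consistent with the example above, is to sample fill positions more densely where the turning angle is high and more sparsely on gentle parts. The step sizes and threshold below are assumptions, and `angles` is expected in the form produced by the earlier feature-identification sketch.

```python
def adaptive_density(stroke, angles, base_step=3, dense_step=1, threshold=1.0):
    """Choose which stroke points receive a fill element.

    `angles[i]` is the turning angle (radians) at interior point i + 1.
    Points near high-curvature regions are sampled every `dense_step`
    samples, flat regions every `base_step` samples.
    """
    keep = {0, len(stroke) - 1}
    i = 0
    while i < len(stroke):
        keep.add(i)
        angle = angles[i - 1] if 0 < i <= len(angles) else 0.0
        i += dense_step if angle > threshold else base_step
    return [stroke[i] for i in sorted(keep)]
```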
Here, the generating device 1 further determines the element density with which the hand-drawn graphic input by the user is filled with the fill elements, and fills the hand-drawn graphic with elements in combination with this element density, which further improves the match between the generated emoticon and the hand-drawn graphic input by the user and improves the user experience. Preferably, the generating device 1 further includes a storage device (not shown), which stores the emoticon so that the user can directly select and use it at the next input. Specifically, the user saves the emoticon he has created, for example by clicking or long-pressing a save button, and the storage device stores the emoticon, for example in a local emoticon library on the user equipment or in a network emoticon library connected to the user equipment through a network, so that it can be called directly the next time the user needs the same emoticon; alternatively, the storage device transfers the emoticon into the element library as a custom picture; alternatively, the storage device uploads the emoticon to a network emoticon library for use by other users.

Those skilled in the art should understand that the above ways of storing the emoticon are only examples; other existing or future ways of storing emoticons, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.

Here, the generating device 1 also stores, transfers, or uploads the emoticon generated by the user, which facilitates use by the user or by other network users and further improves the user experience.
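Purely as an illustration of the local storage described above, a local emoticon library could be kept as a small JSON file; the file path and schema below are assumptions introduced for this sketch.

```python
import json
from pathlib import Path

LIBRARY_PATH = Path("emoticon_library.json")  # assumed local library location

def save_emoticon(name, emoticon, path=LIBRARY_PATH):
    """Append a generated emoticon to the local library file."""
    library = json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}
    library[name] = emoticon
    path.write_text(json.dumps(library, ensure_ascii=False, indent=2),
                    encoding="utf-8")

def load_emoticon(name, path=LIBRARY_PATH):
    """Retrieve a stored emoticon for direct reuse on the next input."""
    if not path.exists():
        return None
    return json.loads(path.read_text(encoding="utf-8")).get(name)
```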
FIG. 6 is a flowchart of a method for generating an emoticon according to another aspect of the present invention.

Here, the generating device 1 may be located, for example, in a computer device, where computer devices include user equipment and network equipment. When the generating device 1 is located in network equipment, it communicates with the user equipment through a network, acquires the hand-drawn graphic input by the user on the user equipment, identifies the graphic features of the hand-drawn graphic, determines one or more fill elements corresponding to the graphic features, fills the hand-drawn graphic with the one or more fill elements according to the graphic features, generates the corresponding emoticon, and returns it to the user equipment, which puts the emoticon on the screen.

The following detailed description takes the case where the generating device 1 is located in the user equipment as an example.
In step 601, the generating device 1 acquires a hand-drawn graphic input by a user. Specifically, the user inputs the hand-drawn graphic by interacting with the user equipment. For example, when the user equipment is a mobile terminal such as a touch-screen mobile phone, the user inputs a hand-drawn graphic in the hand-drawn input area of the phone by sliding a finger across the screen, drawing with a stylus, or the like; as another example, when the user equipment is a desktop computer, a notebook computer, or a similar device, the user inputs a hand-drawn graphic in the hand-drawn input area of the device by moving the mouse, sliding a finger on the touch pad, or the like. Preferably, the hand-drawn input area is displayed, for example, when the user clicks a corresponding button in the input application. In step 601, the generating device 1 acquires the hand-drawn graphic input by the user through interaction with the user equipment, such as by one or more calls to an application programming interface (API) provided by the input application; for example, it obtains the motion track of the stylus through a sensor to acquire the hand-drawn graphic input by the user. Alternatively, in step 601, the generating device 1 obtains the hand-drawn graphic by, for example, mapping the content drawn by the user.

Those skilled in the art should understand that the above step of acquiring a hand-drawn graphic is only an example; other existing or future ways of acquiring a hand-drawn graphic, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.

In step 602, the generating device 1 identifies graphic features of the hand-drawn graphic. Specifically, in step 602, the generating device 1 performs feature extraction and selection on the hand-drawn graphic acquired in step 601 to identify the graphic features that best reflect the essence of the hand-drawn graphic, for example, the places where the contour curvature of the graphic is greatest or where the contour direction changes abruptly. For example, when the hand-drawn graphic acquired by the generating device 1 in step 601 is a heart shape, in step 602 the generating device 1 identifies, through feature extraction and selection, the graphic features of the heart shape: two curved acute angles oriented in the same direction.

Those skilled in the art should understand that the above way of identifying graphic features is only an example; other existing or future ways of identifying graphic features, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.
In step 603, the generating device 1 determines one or more fill elements corresponding to the graphic features. Specifically, in step 603, the ways in which the generating device 1 determines one or more fill elements corresponding to the graphic features identified in step 602 include, but are not limited to, the following:

1) Performing a matching query in element library 1 according to the identified graphic features to obtain one or more fill elements corresponding to the graphic features. For example, for the graphic feature "arc", the matching yields the corresponding fill element, the symbol "(" or ")"; as another example, for the graphic feature "straight line", the matching yields the corresponding fill element, the symbol "!".

Here, the elements stored in element library 1 include, but are not limited to, at least any one of the following: symbols, expressions, or custom pictures. Symbols include, but are not limited to, Chinese punctuation marks, English punctuation marks, mathematical symbols, serial numbers, Greek letters, phonetic symbols, tabulation symbols, and the like. Expressions include, but are not limited to, emoji, GIF expressions, and the like. Custom pictures include, but are not limited to, custom pictures uploaded by the user or by other users.

Here, element library 1 may be located in the user equipment, or in a third-party device connected to the user equipment through a network. Further, element library 1 includes, but is not limited to, at least any one of the following sub-libraries: a symbol library, an expression library, or a custom picture library.

Preferably, step 603 further includes a sub-step 6031 (not shown), in which the generating device 1 acquires custom pictures uploaded by uploaders and builds or updates the element library.

Specifically, the user or other users on the network side upload custom pictures; the generating device 1 acquires the custom pictures uploaded by these uploaders and stores them in the element library, thereby building or updating the element library.

2) Acquiring one or more fill elements selected by the user as corresponding to the hand-drawn graphic; for example, the user autonomously selects the one or more fill elements to be used from element library 2.

Preferably, element library 2 may also be built or updated in the manner described above.

3) Combining 1) and 2) above. For example, after a matching query is performed in element library 1 according to the identified graphic features to obtain one or more fill elements corresponding to the graphic features, the user further selects some fill elements from element library 2 as appropriate, and the fill elements to be used are determined by combining the two; alternatively, after a matching query is performed in element library 1 according to the identified graphic features to obtain one or more fill elements corresponding to the graphic features, the user selects the desired fill elements from among the one or more fill elements.

Here, element library 2 may be identical or partially identical to element library 1; element library 2 may also be a user-defined element library.

Those skilled in the art should understand that the above ways of determining one or more fill elements corresponding to the graphic features are only examples; other existing or future methods of determining one or more fill elements corresponding to the graphic features, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference. Those skilled in the art should also understand that the above fill elements are only examples; other existing or future fill elements, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.
In step 604, the generating device 1 fills the hand-drawn graphic with the one or more fill elements according to the graphic features and generates the corresponding emoticon to be put on the screen. Specifically, in step 604, according to the graphic features identified in step 602, the generating device 1 fills the hand-drawn graphic with the one or more fill elements determined in step 603 through element-library matching, through the user's selection, or through a combination of both, and generates the corresponding emoticon to be put on the screen, i.e., the hand-drawn graphic is displayed with the one or more fill elements. In step 604, the generating device 1 displays the generated emoticon directly on the screen of the user equipment, for example by one or more calls to a dynamic page technology such as JSP, ASP, or PHP; alternatively, in step 604, when the user performs an on-screen operation by clicking or long-pressing an on-screen button or the like, the generating device 1 displays the generated emoticon on the screen of the user equipment, for example in the user's input box.

Those skilled in the art should understand that the above ways of putting the emoticon on the screen are only examples; other existing or future ways of putting it on the screen, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.

The operation of the above steps is described in detail below with reference to an embodiment.

For example, assume that the hand-drawn graphic acquired by the generating device 1 in step 601 is a heart shape, and that in step 602 the main features of the heart shape are identified: two curved acute angles oriented in the same direction. In step 603, the generating device 1 determines, according to the graphic features, one or more fill elements corresponding to the heart shape, for example ",.!-". In step 604, the generating device 1 fills the heart-shaped pattern with these elements and puts it on the screen, generating the emoticon shown in FIG. 2; alternatively, in step 603 the generating device 1 determines the corresponding fill elements according to the user's selection, and in step 604 fills the heart-shaped pattern with these elements and puts it on the screen, generating the emoticon shown in FIG. 3.

Here, through steps 601-604 the generating device 1 acquires the hand-drawn graphic input by the user, identifies the graphic features of the hand-drawn graphic, determines one or more fill elements corresponding to the graphic features, fills the hand-drawn graphic with the one or more fill elements according to the graphic features, and generates the corresponding emoticon to be put on the screen. The user can thus create emoticons himself: a simple doodle is enough to automatically generate an emoticon with the same shape as the doodle, meeting the need for personalization.

Further, the user can also autonomously select the fill elements that make up the emoticon, and the corresponding emoticon is generated and put on the screen based on those fill elements, further improving the user's input experience.
FIG. 7 is a flowchart of a method for generating an emoticon according to a preferred embodiment of the present invention. Specifically, in step 701, the generating device 1 acquires a hand-drawn graphic input by a user; in step 705, the generating device 1 preprocesses the hand-drawn graphic to obtain a preprocessed hand-drawn graphic; in step 702, the generating device 1 identifies graphic features of the preprocessed hand-drawn graphic; in step 703, the generating device 1 determines one or more fill elements corresponding to the graphic features; in step 704, the generating device 1 fills the hand-drawn graphic with the one or more fill elements according to the graphic features and generates the corresponding emoticon to be put on the screen. Steps 701 and 702-704 are identical or substantially identical to the corresponding steps shown in FIG. 6, so they are not described again here and are incorporated herein by reference.

In step 705, the generating device 1 preprocesses the hand-drawn graphic to obtain a preprocessed hand-drawn graphic. Specifically, in step 705, the generating device 1 performs preprocessing such as graphic enhancement, denoising, and graphic sharpening on the hand-drawn graphic acquired in step 701, thereby obtaining the preprocessed hand-drawn graphic.

Preferably, step 705 may further include a sub-step 7051 (not shown), a sub-step 7052 (not shown), and a sub-step 7053 (not shown). In sub-step 7051, the generating device 1 enhances the hand-drawn graphic through graphic transformation; in sub-step 7052, the generating device 1 smooths or filters the hand-drawn graphic by removing noise points, which helps improve image quality and facilitates the extraction of object features; in sub-step 7053, the generating device 1 sharpens the edges of the hand-drawn graphic through graphic sharpening. For example, when the hand-drawn graphic input by the user is a heart shape, if the user accidentally adds an extra dot next to the drawn heart, the smoothing sub-step can remove the extra dot, helping to remove noise and repair the graphic, while the edge processing sub-step helps accurately identify the corner of the heart, restoring the user's hand-drawn graphic with high fidelity.

Here, after acquiring the hand-drawn graphic input by the user in step 701, the generating device 1 first performs the preprocessing of step 705 and then performs graphic feature recognition on the preprocessed hand-drawn graphic, which improves the accuracy of graphic feature recognition, makes the generated emoticon match the hand-drawn graphic input by the user more closely, and further improves the user experience.
FIG. 8 is a flowchart of a method for generating an emoticon according to yet another preferred embodiment of the present invention. Specifically, in step 801, the generating device 1 acquires a hand-drawn graphic input by a user; in step 802, the generating device 1 identifies graphic features of the hand-drawn graphic; in step 803, the generating device 1 determines one or more fill elements corresponding to the graphic features; in step 806, the generating device 1 determines the element density with which the hand-drawn graphic is filled with the fill elements; in step 804, the generating device 1 fills the hand-drawn graphic with the one or more fill elements according to the graphic features and in combination with the element density, generating the corresponding emoticon to be put on the screen. Steps 801-803 are identical or substantially identical to the corresponding steps in FIG. 6, so they are not described again here and are incorporated herein by reference.

In step 806, the generating device 1 determines the element density with which the hand-drawn graphic is filled with the fill elements. Specifically, in step 806, the ways in which the generating device 1 determines the element density include, but are not limited to, the following:

1) automatically determining the element density according to the graphic features;

2) acquiring the element density determined by the user according to personal preference;

3) after the element density is automatically determined according to the graphic features, the user makes adjustments on that basis, and the element density is determined jointly.

For example, at places where the contour curvature of the graphic is large or changes abruptly, the generating device 1 automatically increases the element density of the fill elements in step 806 in order to represent the corner or the change in the graphic smoothly; at gentle parts of the graphic, the generating device 1 uses a uniform fill density in step 806. Some users may dislike the preset fill density and can set the fill density according to their own perception; when the element density determined by either method alone does not meet the user's needs, the two methods can be combined to determine a single element density.

Those skilled in the art should understand that the above ways of determining the element density are only examples; other existing or future ways of determining the element density, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.

Here, the method for generating an emoticon further determines the element density with which the hand-drawn graphic input by the user is filled with the fill elements, and fills the hand-drawn graphic with elements in combination with this element density, which further improves the match between the generated emoticon and the hand-drawn graphic input by the user and improves the user experience.

Preferably, the method for generating an emoticon further includes a step 807 (not shown), in which the generating device 1 stores the emoticon so that the user can directly select and use it at the next input. Specifically, the user saves the emoticon he has created, for example by clicking or long-pressing a save button; in step 807, the generating device 1 stores the emoticon, for example in a local emoticon library on the user equipment or in a network emoticon library connected to the user equipment through a network, so that it can be called directly the next time the user needs the same emoticon; alternatively, in step 807, the generating device 1 transfers the emoticon into the element library as a custom picture; alternatively, in step 807, the generating device 1 uploads the emoticon to a network emoticon library for use by other users.

Those skilled in the art should understand that the above ways of storing the emoticon are only examples; other existing or future ways of storing emoticons, if applicable to the present invention, shall also fall within the protection scope of the present invention and are hereby incorporated by reference.

Here, step 807 enables the generating device 1 to store, transfer, or upload the emoticon generated by the user, which facilitates use by the user or by other network users and further improves the user experience.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware; for example, the devices of the present invention may be implemented using an application-specific integrated circuit (ASIC) or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example a RAM memory, a magnetic or optical drive, a floppy disk, or a similar device. In addition, some steps or functions of the present invention may be implemented in hardware, for example as circuitry that cooperates with a processor to perform the individual steps or functions.

It is apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and that it can be implemented in other specific forms without departing from the spirit or essential characteristics of the invention. Therefore, the embodiments are to be regarded in all respects as illustrative and not restrictive, the scope of the invention being defined by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalents of the claims are therefore intended to be embraced therein. No reference sign in the claims shall be construed as limiting the claim concerned. Furthermore, the word "comprising" obviously does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or devices recited in the system claims may also be implemented by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.

Various aspects of the embodiments are set forth in the claims. These and other aspects of the embodiments are also set forth in the following numbered clauses:
1. A method for generating an emoticon, wherein the method comprises the following steps:

a. acquiring a hand-drawn graphic input by a user;

b. identifying graphic features of the hand-drawn graphic;

c. determining one or more fill elements corresponding to the graphic features;

d. filling the hand-drawn graphic with the one or more fill elements according to the graphic features, and generating a corresponding emoticon to be put on the screen.

2. The method according to clause 1, wherein step c comprises:

- performing a matching query in an element library according to the graphic features, to obtain one or more fill elements corresponding to the graphic features.

3. The method according to clause 2, wherein the elements stored in the element library comprise at least any one of the following:

- symbols;

- expressions;

- custom pictures.

4. The method according to clause 3, wherein the method further comprises:

- acquiring custom pictures uploaded by uploaders, and building or updating the element library.

5. The method according to clause 1, wherein step c comprises:

- acquiring one or more fill elements selected by the user as corresponding to the hand-drawn graphic;

wherein step d comprises:

- filling the hand-drawn graphic with the one or more fill elements selected by the user according to the graphic features, and generating a corresponding emoticon to be put on the screen.

6. The method according to any one of clauses 1 to 5, wherein the method further comprises:

- preprocessing the hand-drawn graphic to obtain a preprocessed hand-drawn graphic;

wherein step b comprises:

- identifying graphic features of the preprocessed hand-drawn graphic.

7. The method according to any one of clauses 1 to 5, wherein the method further comprises:

- determining an element density with which the hand-drawn graphic is filled with the fill elements;

wherein step d comprises:

- filling the hand-drawn graphic with the one or more fill elements according to the graphic features and in combination with the element density, and generating a corresponding emoticon to be put on the screen.

8. The method according to any one of clauses 1 to 5, wherein the method further comprises:

- storing the emoticon for direct selection and use by the user at the next input.
9. A generating device for generating an emoticon, wherein the generating device comprises:

an acquiring device, configured to acquire a hand-drawn graphic input by a user;

an identification device, configured to identify graphic features of the hand-drawn graphic;

a first determining device, configured to determine one or more fill elements corresponding to the graphic features;

an upper screen device, configured to fill the hand-drawn graphic with the one or more fill elements according to the graphic features, and to generate a corresponding emoticon to be put on the screen.

10. The generating device according to clause 9, wherein the first determining device is configured to:

- perform a matching query in an element library according to the graphic features, to obtain one or more fill elements corresponding to the graphic features.

11. The generating device according to clause 10, wherein the elements stored in the element library comprise at least any one of the following:

- symbols;

- expressions;

- custom pictures.

12. The generating device according to clause 11, wherein the generating device further comprises:

an updating device, configured to acquire custom pictures uploaded by uploaders, and to build or update the element library.

13. The generating device according to clause 9, wherein the first determining device is configured to:

- acquire one or more fill elements selected by the user as corresponding to the hand-drawn graphic;

wherein the upper screen device is configured to:

- fill the hand-drawn graphic with the one or more fill elements selected by the user according to the graphic features, and generate a corresponding emoticon to be put on the screen.

14. The generating device according to any one of clauses 9 to 13, wherein the generating device further comprises:

a preprocessing device, configured to preprocess the hand-drawn graphic to obtain a preprocessed hand-drawn graphic;

wherein the identification device is configured to:

- identify graphic features of the preprocessed hand-drawn graphic.

15. The generating device according to any one of clauses 9 to 13, wherein the generating device further comprises:

a second determining device, configured to determine an element density with which the hand-drawn graphic is filled with the fill elements;

wherein the upper screen device is configured to:

- fill the hand-drawn graphic with the one or more fill elements according to the graphic features and in combination with the element density, and generate a corresponding emoticon to be put on the screen.

16. The generating device according to any one of clauses 9 to 13, wherein the generating device further comprises:

a storage device, configured to store the emoticon for direct selection and use by the user at the next input.

17. A computer-readable storage medium storing computer code, wherein when the computer code is executed, the method according to any one of clauses 1 to 8 is performed.

18. A computer program product, wherein when the computer program product is executed by a computer device, the method according to any one of clauses 1 to 8 is performed.

19. A computer device comprising a memory and a processor, wherein the memory stores computer code and the processor is configured to execute the computer code to perform the method according to any one of clauses 1 to 8.

Claims (19)

  1. A method for generating an emoticon, wherein the method comprises the following steps:

    a. acquiring a hand-drawn graphic input by a user;

    b. identifying graphic features of the hand-drawn graphic;

    c. determining one or more fill elements corresponding to the graphic features;

    d. filling the hand-drawn graphic with the one or more fill elements according to the graphic features, and generating a corresponding emoticon to be put on the screen.

  2. The method according to claim 1, wherein step c comprises:

    - performing a matching query in an element library according to the graphic features, to obtain one or more fill elements corresponding to the graphic features.

  3. The method according to claim 2, wherein the elements stored in the element library comprise at least any one of the following:

    - symbols;

    - expressions;

    - custom pictures.

  4. The method according to claim 3, wherein the method further comprises:

    - acquiring custom pictures uploaded by uploaders, and building or updating the element library.

  5. The method according to claim 1, wherein step c comprises:

    - acquiring one or more fill elements selected by the user as corresponding to the hand-drawn graphic;

    wherein step d comprises:

    - filling the hand-drawn graphic with the one or more fill elements selected by the user according to the graphic features, and generating a corresponding emoticon to be put on the screen.

  6. The method according to any one of claims 1 to 5, wherein the method further comprises:

    - preprocessing the hand-drawn graphic to obtain a preprocessed hand-drawn graphic;

    wherein step b comprises:

    - identifying graphic features of the preprocessed hand-drawn graphic.

  7. The method according to any one of claims 1 to 5, wherein the method further comprises:

    - determining an element density with which the hand-drawn graphic is filled with the fill elements;

    wherein step d comprises:

    - filling the hand-drawn graphic with the one or more fill elements according to the graphic features and in combination with the element density, and generating a corresponding emoticon to be put on the screen.

  8. The method according to any one of claims 1 to 5, wherein the method further comprises:

    - storing the emoticon for direct selection and use by the user at the next input.
  9. 一种用于生成颜文字的生成装置,其中,该生成装置包括:A generating device for generating a face text, wherein the generating device comprises:
    获取装置,用于获取用户输入的手绘图形;Obtaining means for acquiring a hand-drawn graphic input by a user;
    识别装置,用于识别所述手绘图形的图形特征;An identification device, configured to identify a graphic feature of the hand-drawn graphic;
    第一确定装置,用于确定与所述图形特征相对应的一种或多种填充元素;a first determining means for determining one or more filling elements corresponding to the graphic features;
    上屏装置,用于根据所述图形特征,以所述一种或多种填充元素填充所述手绘图形,生成对应的颜文字进行上屏。The upper screen device is configured to fill the hand-drawn graphic with the one or more filling elements according to the graphic feature, and generate corresponding facial characters for performing an upper screen.
  10. 根据权利要求9所述的生成装置,其中,所述第一确定装置用于:The generating device according to claim 9, wherein said first determining means is for:
    -根据所述图形特征,在元素库中进行匹配查询,以获得与所述图形特征相对应的一种或多种填充元素。- performing a matching query in the element library based on the graphical features to obtain one or more fill elements corresponding to the graphical features.
  11. 根据权利要求10所述的生成装置,其中,所述元素库中存储的元素包括以下至少任一项:The generating apparatus according to claim 10, wherein the element stored in the element library comprises at least one of the following:
    -符号;-symbol;
    -表情;-expression;
    -自定义图片。- Customize the picture.
  12. 根据权利要求11所述的生成装置,其中,该生成装置还包括:The generating device according to claim 11, wherein the generating device further comprises:
    更新装置,用于获取上传者所上传的自定义图片,建立或更新所述元素库。 The updating device is configured to acquire a customized image uploaded by the uploader, and establish or update the element library.
  13. 根据权利要求9所述的生成装置,其中,所述第一确定装置用于:The generating device according to claim 9, wherein said first determining means is for:
    -获取所述用户所选择的与所述手绘图形相对应的一种或多种填充元素;Obtaining one or more padding elements selected by the user corresponding to the hand-drawn graphics;
    其中,所述上屏装置用于:Wherein the upper screen device is used for:
    -根据所述图形特征,以所述用户所选择的一种或多种填充元素填充所述手绘图形,生成对应的颜文字进行上屏。- filling the hand-drawn graphics with one or more padding elements selected by the user according to the graphic features, and generating corresponding facial characters for performing an upper screen.
  14. 根据权利要求9至13中任一项所述的生成装置,其中,该生成装置还包括:The generating device according to any one of claims 9 to 13, wherein the generating device further comprises:
    预处理装置,用于对所绘图形进行预处理,以获得预处理后的手绘图形;a pre-processing device for pre-processing the drawn shape to obtain a pre-processed hand-drawn graphic;
    其中,所述识别装置用于:Wherein the identification device is used to:
    -识别所述预处理后的手绘图形的图形特征。Identifying graphical features of the pre-processed hand-drawn graphics.
  15. 根据权利要求9至13中任一项所述的生成装置,其中,该生成装置还包括:The generating device according to any one of claims 9 to 13, wherein the generating device further comprises:
    第二确定装置,用于确定以所述填充元素填充所述手绘图形的元素密度;a second determining means, configured to determine an element density of filling the hand-drawn graphic with the filling element;
    其中,所述上屏装置用于:Wherein the upper screen device is used for:
    -根据所述图形特征,并结合所述元素密度,以所述一种或多种填充元素填充所述手绘图形,生成对应的颜文字进行上屏。And filling the hand-drawn graphic with the one or more filling elements according to the graphic feature, and combining the one or more filling elements to generate a corresponding facial image for performing an upper screen.
  16. The generating device according to any one of claims 9 to 13, wherein the generating device further comprises:
    a storage device, configured to store the emoticon, so that the user can select and use it directly at a subsequent input.
  17. A computer-readable storage medium storing computer code, wherein the method according to any one of claims 1 to 8 is performed when the computer code is executed.
  18. A computer program product, wherein the method according to any one of claims 1 to 8 is performed when the computer program product is executed by a computer device.
  19. A computer device, comprising a memory and a processor, wherein the memory stores computer code, and the processor is configured to execute the computer code so as to perform the method according to any one of claims 1 to 8.
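
A minimal Python sketch of one possible arrangement of the devices recited in claims 9 to 16 follows; it assumes the hand-drawn graphic arrives as stroke polylines and reduces "graphic features" to a single shape label. All identifiers (HandDrawnGraphic, ELEMENT_LIBRARY, identify_features, fill_graphic, and so on) are hypothetical and do not appear in the application.

```python
# Illustrative sketch only; all identifiers are hypothetical and are not taken
# from the application text. It assumes the hand-drawn graphic is a list of
# stroke polylines and that "graphic features" reduce to a simple shape label.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class HandDrawnGraphic:
    strokes: List[List[Point]]  # each stroke is a polyline of touch points

# A toy "element library": shape label -> candidate filling elements
ELEMENT_LIBRARY = {
    "circle": ["o", "°", "*"],
    "heart": ["♥", "❤"],
    "star": ["☆", "★"],
}

def preprocess(graphic: HandDrawnGraphic) -> HandDrawnGraphic:
    """Pre-processing step: drop strokes too short to be intentional."""
    return HandDrawnGraphic([s for s in graphic.strokes if len(s) >= 3])

def identify_features(graphic: HandDrawnGraphic) -> str:
    """Identify a (very crude) graphic feature: a shape label guessed from
    the aspect ratio of the drawing's bounding box."""
    points = [p for stroke in graphic.strokes for p in stroke]
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return "circle" if abs(width - height) < 0.2 * max(width, height, 1e-6) else "star"

def match_elements(feature: str) -> List[str]:
    """Matching query in the element library for the recognized feature."""
    return ELEMENT_LIBRARY.get(feature, ["·"])

def fill_graphic(graphic: HandDrawnGraphic, elements: List[str], density: int = 5) -> str:
    """Fill the drawn strokes with filling elements at the given element
    density; here 'density' is simply how many elements are placed per stroke."""
    lines = []
    for stroke in graphic.strokes:
        step = max(1, len(stroke) // density)
        sampled = stroke[::step]
        lines.append("".join(elements[i % len(elements)] for i in range(len(sampled))))
    return "\n".join(lines)

if __name__ == "__main__":
    # A single arc-shaped stroke standing in for the user's hand-drawn input.
    drawing = HandDrawnGraphic(strokes=[[(x / 10.0, ((25 - (x - 5) ** 2) ** 0.5) / 10.0)
                                         for x in range(11)]])
    drawing = preprocess(drawing)
    feature = identify_features(drawing)
    emoticon = fill_graphic(drawing, match_elements(feature), density=5)
    print(emoticon)  # the generated emoticon would then be committed to the screen
```

In an actual input method the feature recognizer, the element density, and the commit-to-screen step would be supplied by the claimed devices and by the user's choices; the sketch only mirrors their control flow.
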
PCT/CN2015/096392 2015-08-31 2015-12-04 Method and device for generating emoticon WO2017035971A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510548856.7 2015-08-31
CN201510548856.7A CN105183316B (en) 2015-08-31 2015-08-31 Method and apparatus for generating emoticons

Publications (1)

Publication Number Publication Date
WO2017035971A1 true WO2017035971A1 (en) 2017-03-09

Family

ID=54905427

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/096392 WO2017035971A1 (en) 2015-08-31 2015-12-04 Method and device for generating emoticon

Country Status (2)

Country Link
CN (1) CN105183316B (en)
WO (1) WO2017035971A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106919943A (en) * 2015-12-25 2017-07-04 北京搜狗科技发展有限公司 Data processing method and device
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
CN106598453A (en) * 2016-11-03 2017-04-26 北京百度网讯科技有限公司 Method and device for outputting shaped character information
CN108846881B (en) * 2018-05-29 2023-05-12 珠海格力电器股份有限公司 Expression image generation method and device
CN110764627B (en) * 2018-07-25 2023-11-10 北京搜狗科技发展有限公司 Input method and device and electronic equipment
CN109165072A (en) * 2018-08-28 2019-01-08 珠海格力电器股份有限公司 Emoji pack generation method and device
CN111399729A (en) * 2020-03-10 2020-07-10 北京字节跳动网络技术有限公司 Image drawing method and device, readable medium and electronic equipment
CN112269522A (en) * 2020-10-27 2021-01-26 维沃移动通信(杭州)有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN113761204B (en) * 2021-09-06 2023-07-28 南京大学 Emoji text emotion analysis method and system based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789130A (en) * 2009-12-24 2010-07-28 中兴通讯股份有限公司 Method and device for terminal equipment to use self-drawn picture
CN104050691A (en) * 2013-03-11 2014-09-17 百度国际科技(深圳)有限公司 Device and method for generating corresponding character picture based on image in terminal
US20150100537A1 (en) * 2013-10-03 2015-04-09 Microsoft Corporation Emoji for Text Predictions

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6049531B2 (en) * 2013-04-24 2016-12-21 京セラドキュメントソリューションズ株式会社 Image processing apparatus and image forming apparatus
CN104463779A (en) * 2014-12-18 2015-03-25 北京奇虎科技有限公司 Portrait caricature generating method and device

Also Published As

Publication number Publication date
CN105183316B (en) 2018-05-08
CN105183316A (en) 2015-12-23

Similar Documents

Publication Publication Date Title
WO2017035971A1 (en) Method and device for generating emoticon
CN110555795B (en) High resolution style migration
US11062494B2 (en) Electronic messaging utilizing animatable 3D models
US10460483B2 (en) Tool for creating and editing arcs
US9824266B2 (en) Handwriting input apparatus and control method thereof
CN106997613B (en) 3D model generation from 2D images
US20160004672A1 (en) Method, System, and Tool for Providing Self-Identifying Electronic Messages
TW201445421A (en) Automatically manipulating visualized data based on interactivity
CN111192190B (en) Method and device for eliminating image watermark and electronic equipment
US20170090725A1 (en) Selecting at least one graphical user interface item
US20170286385A1 (en) Ink in an Electronic Document
US10339372B2 (en) Analog strokes to digital ink strokes
US10818050B2 (en) Vector graphic font character generation techniques
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
US11741650B2 (en) Advanced electronic messaging utilizing animatable 3D models
US11380028B2 (en) Electronic drawing with handwriting recognition
US10970476B2 (en) Augmenting digital ink strokes
CN108292193B (en) Cartoon digital ink
EP2911115A2 (en) Electronic device and method for color extraction
US10514841B2 (en) Multi-layered ink object
WO2023024536A1 (en) Drawing method and apparatus, and computer device and storage medium
CN107967709B (en) Improved object painting by using perspective or transport
WO2022057535A1 (en) Information display method and apparatus, and storage medium and electronic device
EP3612921A1 (en) Enhanced inking capabilities for content creation applications
US10930045B2 (en) Digital ink based visual components

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15902777

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15902777

Country of ref document: EP

Kind code of ref document: A1