CN110764627B - Input method and device and electronic equipment - Google Patents


Info

Publication number
CN110764627B
CN110764627B
Authority
CN
China
Prior art keywords
image
information
reference images
user
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810828947.XA
Other languages
Chinese (zh)
Other versions
CN110764627A (en)
Inventor
韩秦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN201810828947.XA
Priority to PCT/CN2019/071010 (WO2020019683A1)
Publication of CN110764627A
Application granted
Publication of CN110764627B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 - Character input methods
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/80 - Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Abstract

The embodiment of the invention provides an input method, an input device and an electronic device. The method includes the following steps: the input method obtains drawing track information of its keyboard area and displays a corresponding drawn image according to the drawing track information; at least one associated image corresponding to the drawn image is identified; and the associated image is displayed. When the user is not satisfied with the drawn image, an image that matches the intention can be selected from the associated images of the drawn image and input without redrawing, which improves the efficiency of image input.

Description

Input method and device and electronic equipment
Technical Field
The present invention relates to the field of input methods, and in particular, to an input method, an input device, and an electronic device.
Background
With the development of computer technology, electronic devices such as mobile phones and tablet computers have become increasingly popular, bringing great convenience to people's daily life, study and work. These electronic devices usually have an input method application (input method for short) installed, so that users can enter information through the input method.
As the number of users keeps growing, input methods offer more and more functions, including many personalized functions such as hand-drawn emoticons: the user can hand-draw a pattern on the keyboard and then output the hand-drawn pattern. However, users' drawing skills vary greatly, so the hand-drawn pattern sometimes fails to express the user's intention accurately and has to be redrawn repeatedly, which is inefficient.
Disclosure of Invention
The embodiment of the invention provides an input method for improving the efficiency of image input.
Correspondingly, the embodiment of the invention also provides an input device and an electronic device to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present invention discloses an input method, which specifically includes: obtaining, by the input method, drawing track information of the keyboard area of the input method, and displaying a corresponding drawn image according to the drawing track information; identifying at least one associated image corresponding to the drawn image; and displaying the associated image.
Optionally, the identifying at least one associated image corresponding to the drawn image includes: identifying at least one associated image corresponding to the drawn image by adopting an identification model.
Optionally, the identifying at least one associated image corresponding to the drawn image by adopting an identification model includes: inputting the drawing track information of the drawn image into the identification model to determine a plurality of reference images; and determining at least one associated image corresponding to the drawn image according to the reference images.
Optionally, the determining at least one associated image corresponding to the drawn image according to the reference images includes: acquiring association data, wherein the association data includes context information and/or user behavior data; and selecting, according to the association data, at least one associated image corresponding to the drawn image from the reference images.
Optionally, the determining at least one associated image corresponding to the drawn image according to the reference images includes: sorting the reference images according to the similarity information corresponding to each reference image; and selecting the first N reference images with the highest similarity information as the associated images, where N is an integer greater than 0.
Optionally, before the identifying at least one associated image corresponding to the drawn image, the method further includes: receiving an editing operation for the drawn image, and editing the drawn image to obtain a corresponding edited image.
Optionally, the identifying at least one associated image corresponding to the drawn image further includes: determining the edited image as an associated image corresponding to the drawn image.
The embodiment of the invention also discloses an input device, which specifically includes: an information acquisition module, configured to obtain drawing track information of the keyboard area of the input method and display a corresponding drawn image according to the drawing track information; an image recognition module, configured to identify at least one associated image corresponding to the drawn image; and an image display module, configured to display the associated image.
Optionally, the image recognition module is configured to identify at least one associated image corresponding to the drawn image by using a recognition model.
Optionally, the image recognition module includes: an information calculation sub-module, configured to input the drawing track information of the drawn image into the recognition model and determine a plurality of reference images; and a first image determining sub-module, configured to determine at least one associated image corresponding to the drawn image according to the reference images.
Optionally, the first image determining sub-module is specifically configured to acquire association data, wherein the association data includes context information and/or user behavior data, and to select, according to the association data, at least one associated image corresponding to the drawn image from the reference images.
Optionally, the first image determining sub-module is specifically configured to sort the reference images according to the similarity information corresponding to each reference image, and to select the first N reference images with the highest similarity information as the associated images, where N is an integer greater than 0.
Optionally, the apparatus further includes: an editing module, configured to receive an editing operation for the drawn image and edit the drawn image to obtain a corresponding edited image.
Optionally, the image recognition module includes: a second image determining sub-module, configured to determine the edited image as an associated image corresponding to the drawn image.
The embodiment of the invention also discloses a readable storage medium. When instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the input method according to any one of the embodiments of the invention.
The embodiment of the invention also discloses an electronic device, which comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs comprise instructions for: the input method obtains drawing track information of a keyboard area of the input method, and displays a corresponding drawing image according to the drawing track information; identifying at least one associated image corresponding to the drawn image; and displaying the associated image.
Optionally, the identifying at least one associated image corresponding to the drawn image includes: identifying at least one associated image corresponding to the drawn image by adopting an identification model.
Optionally, the identifying at least one associated image corresponding to the drawn image by adopting an identification model includes: inputting the drawing track information of the drawn image into the identification model to determine a plurality of reference images; and determining at least one associated image corresponding to the drawn image according to the reference images.
Optionally, the determining at least one associated image corresponding to the drawn image according to the reference images includes: acquiring association data, wherein the association data includes context information and/or user behavior data; and selecting, according to the association data, at least one associated image corresponding to the drawn image from the reference images.
Optionally, the determining at least one associated image corresponding to the drawn image according to the reference images includes: sorting the reference images according to the similarity information corresponding to each reference image; and selecting the first N reference images with the highest similarity information as the associated images, where N is an integer greater than 0.
Optionally, the one or more programs further include instructions for, before the identifying at least one associated image corresponding to the drawn image: receiving an editing operation for the drawn image, and editing the drawn image to obtain a corresponding edited image.
Optionally, the identifying at least one associated image corresponding to the drawn image further includes instructions for: determining the edited image as an associated image corresponding to the drawn image.
The embodiment of the invention has the following advantages:
the input method of the embodiment of the invention can obtain the drawing track information of the keyboard area, display a corresponding drawing image according to the drawing track information, then identify at least one associated image corresponding to the drawing image, and display the associated image for the user to select; when the user is not satisfied with the drawing image, an image that matches the intention can be selected from the associated images of the drawing image and input without redrawing, which improves the efficiency of image input.
Drawings
FIG. 1 is a flow chart of the steps of an embodiment of an input method of the present invention;
FIG. 2 is a schematic diagram of a drawing image displayed in the keyboard area of the input method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an associated image presentation interface according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an associated image and drawing image presentation interface in accordance with an embodiment of the present invention;
FIG. 5 is a flow chart of the steps of an alternative embodiment of an input method of the present invention;
FIG. 6 is a schematic diagram of a drawing image presentation interface according to an embodiment of the present invention;
FIG. 7 is a flow chart of steps of an alternative embodiment of an input method of the present invention;
FIG. 8 is a schematic diagram of an editing interface for drawing an image according to an embodiment of the present invention;
FIG. 9 is a block diagram of an embodiment of an input device of the present invention;
FIG. 10 is a block diagram of an alternative embodiment of an input device of the present invention;
FIG. 11 is a block diagram of an electronic device for input, according to an exemplary embodiment;
FIG. 12 is a schematic structural view of an electronic device for input according to another exemplary embodiment of the present invention.
Detailed Description
In order to make the above objects, features and advantages of the present invention clearer and easier to understand, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
One of the core ideas of the embodiments of the invention is that, after the input method determines the drawing image drawn by the user, it can identify the associated images corresponding to the drawing image and then display them, thereby providing the user with associated images that may match the user's intention. When the user is not satisfied with the drawing image, an image that matches the intention can be selected from the displayed associated images without redrawing, which improves the efficiency of image input.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of an input method of the present invention may specifically include the following steps:
step 102, the input method obtains drawing track information of a keyboard area of the input method, and corresponding drawing images are displayed according to the drawing track information.
In the embodiment of the invention, the input method can provide the function of hand-drawing patterns, so that a user can input the drawn patterns by drawing in the keyboard area of the input method; in the process that the user draws in the keyboard region, the keyboard region of the input method can receive drawing operation of the user, and then drawing track information corresponding to the drawing operation, such as coordinates of each pixel point corresponding to the drawing operation, is recorded. In the process of executing drawing operation by a user, the input method can display the track drawn by the user in real time, so that the input method can draw the corresponding track in the keyboard area according to the drawing track information after obtaining the drawing track information, and a drawing image is obtained. For example, according to the recorded sequence of the pixel coordinates, the pixel points corresponding to the pixel coordinates are sequentially connected, so that the corresponding drawing image is displayed. Before the user finishes drawing, the drawing image displayed by the input method in the keyboard area is updated continuously, namely after the input method receives the user drawing operation, a new track can be drawn on the basis of the original drawing image, and a new drawing image is obtained until the user finishes drawing. As shown in fig. 2, the drawing image of the input method in the keyboard area is the pattern drawn by the user.
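For readers who want a concrete picture of this step, the following minimal sketch records per-stroke pixel coordinates as drawing track information and redraws the image by connecting consecutive recorded points. It is only an illustration of the idea under assumed names (DrawingTrack, the canvas object and its draw_line method are hypothetical), not the patent's implementation.

```python
# Minimal illustrative sketch (not the patent's code): record drawing track
# information as per-stroke pixel coordinates and redraw the drawing image by
# connecting consecutive points in the order they were recorded.
# "canvas" with a draw_line(x0, y0, x1, y1) method is an assumed interface.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[int, int]

@dataclass
class DrawingTrack:
    strokes: List[List[Point]] = field(default_factory=list)

    def begin_stroke(self, x: int, y: int) -> None:
        # Finger touches down inside the keyboard area: start a new stroke.
        self.strokes.append([(x, y)])

    def extend_stroke(self, x: int, y: int) -> None:
        # Every move event appends the touched pixel coordinate in order.
        self.strokes[-1].append((x, y))

    def render(self, canvas) -> None:
        # Redraw the drawing image by connecting the recorded points.
        for stroke in self.strokes:
            for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
                canvas.draw_line(x0, y0, x1, y1)
```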
Step 104, identifying at least one associated image corresponding to the drawing image.
Step 106, displaying the associated image.
After the embodiment of the invention determines that the user has finished drawing the image, one or more associated images corresponding to the drawing image can be identified; for example, features of the drawing image can be extracted, and the corresponding associated images can then be determined based on the extracted features. An associated image may be an image whose features are similar to those of the drawing image: for example, if the user draws a puppy, the associated images may be pictures of various dogs, or pictures of other animals whose body postures or facial expressions are similar to those of the puppy drawn by the user. The associated image may also be a retouched version of the drawing image. Moreover, while the user is still drawing, the embodiment of the invention can identify images related to the track drawn so far. Such images may include images that the user is predicted to want to draw based on the current track: for example, if the user wants to draw a sheep, the input method can suggest a drawing image of a sheep once the sheep's head has been drawn; after predicting the drawing image of the sheep, it can determine the associated images of the sheep, such as pictures of various sheep, for example pictures of "portrait sheep", "happy sheep" and the like. The input method may also identify associated images directly related to the drawn track, for example identifying expression-pack images of various sheep heads based on the track of the sheep's head.
The associated images are then displayed. The area in which the associated images are displayed may be called the associated display area and can be set as required; for example, it may be the candidate area of the input method, an image display area provided by an application, and so on. When the user is not satisfied with the drawing image, the user can select an image that matches the intention from the displayed associated images, which reduces the number of times the user has to draw before obtaining an image that matches the intention and thus improves image input efficiency. Fig. 3 shows an example of displaying associated images according to the present invention, where the drawing image corresponding to the associated images is the drawing image in fig. 2.
Many users may still prefer to send their own drawing image, for example to make a chat more interesting, so the embodiment of the present invention may also display the drawing image. The area in which the drawing image is displayed may be called the drawing display area, which may or may not be the same as the associated display area. Fig. 4 shows an example of displaying an associated image and a drawing image according to the present invention, in which the drawing display area differs from the associated display area: 1 is the drawing image and 2 is the associated image.
In summary, the input method of the embodiment of the invention can obtain the drawing track information of the keyboard area, display a corresponding drawing image according to the drawing track information, then identify at least one associated image corresponding to the drawing image, and display the associated image for the user to select. When the user is not satisfied with the drawing image, an image that matches the intention can be selected from the associated images and input without redrawing, which improves the efficiency of image input.
In another embodiment of the present invention, an identification model may be used to perform identification processing, and at least one associated image corresponding to the drawn image is determined, which is specifically as follows:
referring to fig. 5, a flowchart illustrating steps of an alternative embodiment of an input method of the present invention may specifically include the steps of:
step 502, the input method obtains drawing track information of a keyboard area of the input method, and displays a corresponding drawing image according to the drawing track information.
In the embodiment of the invention, while the user draws a pattern in the keyboard area of the input method, the input method can obtain the corresponding drawing track information and display the corresponding drawing image in the keyboard area in real time according to the drawing track information. During drawing, the embodiment of the invention can continuously predict or associate the image the user is drawing: it can associate on the image the user is predicted to want to draw based on the current track, or associate directly on the current track. After the user finishes drawing, the final drawing image is determined, and the associated images corresponding to that drawing image are identified. In the embodiment of the invention, the input method may perform the operation of identifying at least one associated image corresponding to the drawing image on its own initiative, or it may perform this operation in response to a search operation by the user. Therefore, after the input method determines the final drawing image, one option is to associate directly on the drawing image and determine at least one associated image corresponding to it, without first displaying the drawing image in a drawing display area. Another option is to display the drawing image in a drawing display area such as the candidate area; the user can then perform a search operation on the drawing image, for example by clicking the search button 3 in fig. 6, and after receiving the search operation the input method queries at least one associated image corresponding to the drawing image.
During the association or query, an identification model can be used to identify at least one associated image corresponding to the drawing image. The identification model can perform identification in several ways: one is to identify according to the drawing track information of the drawing image, which can be implemented through step 504 and step 508; another is to identify according to the drawing image itself, which can be implemented through step 506 and step 508.
Step 504, inputting the drawing track information of the drawing image into the identification model, and determining a plurality of reference images.
In the embodiment of the invention, if the identification model performs identification according to the drawing track information corresponding to the drawing image, the drawing track information of the drawing image can be input into the identification model, which matches the drawing track information against the track feature points of each reference image and computes similarity information between the drawing track information and each reference image. A reference image may be a local image, a network image, a historical drawing image of the user or a historical drawing image of another user, which is not limited by the present invention. A plurality of reference images are then selected from all the reference images according to the similarity information, for example the first few reference images with the highest similarity; and the associated images are determined according to these reference images, i.e. step 508 is performed.
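As an illustration only, the following sketch shows one way such a track comparison could look: the drawing track is resampled to a fixed number of points and scored against the stored track of each reference image. The resampling and the mean-distance score are stand-ins chosen for the example; the patent does not specify its identification model at this level of detail.

```python
# Illustrative sketch only: score the drawing track information against the
# stored track of each reference image. The resampling and mean-distance
# similarity used here are assumptions standing in for a trained model.
import math
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def resample(points: Sequence[Point], n: int = 64) -> List[Point]:
    # Normalise a track to n evenly spaced samples so that tracks of
    # different lengths can be compared point by point.
    if not points:
        return [(0.0, 0.0)] * n
    if len(points) == 1:
        return [points[0]] * n
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def similarity(track: Sequence[Point], reference_track: Sequence[Point]) -> float:
    # Higher value = more similar; inverse of the mean point-to-point distance.
    a, b = resample(track), resample(reference_track)
    mean_dist = sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return 1.0 / (1.0 + mean_dist)

def score_reference_images(track, reference_images):
    # reference_images: iterable of (image_id, stored_track) pairs.
    return [(image_id, similarity(track, ref_track))
            for image_id, ref_track in reference_images]
```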
Step 506, inputting the drawing image into an image model, and determining a plurality of reference images.
In the embodiment of the invention, if the identification model performs identification according to the drawing image itself, the drawing image can be input into an image model, which matches the drawing image against each reference image and computes similarity information between the drawing image and each reference image. A plurality of reference images are then selected from the reference images according to the similarity information, and the associated images are determined according to these reference images, i.e. step 508 is performed.
Step 508, determining at least one associated image corresponding to the drawing image according to the reference image.
In the embodiment of the present invention, at least one associated image corresponding to the drawing image can be determined in several ways. One way is to determine the associated images according to the similarity between each reference image and the drawing image, which can be implemented through sub-step 82 and sub-step 84:
Sub-step 82, sorting the reference images according to the similarity information corresponding to each reference image.
Sub-step 84, selecting the first N reference images with the highest similarity information as the associated images, where N is an integer greater than 0.
In the embodiment of the invention, the similarity information may be proportional to the similarity, i.e. the larger the similarity information, the greater the similarity, and vice versa. In that case the reference images can be sorted in descending order of their similarity information, and the first N reference images with the highest similarity information are selected as associated images, where N is an integer greater than 0; in this way the N reference images most similar to the drawing image are selected. Alternatively, the similarity information may be inversely proportional to the similarity, in which case the reference images can be sorted in ascending order of their similarity information and the first N reference images with the lowest similarity information are selected as associated images. In this way, one or more associated images corresponding to the drawing image can be identified by the identification model.
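The ranking described here amounts to a sort plus a top-N cut. A small sketch follows, under the assumption that the similarity scores come from something like the earlier score_reference_images sketch; it is not the patent's implementation.

```python
# Illustrative sketch only: sort reference images by similarity information
# and keep the first N as associated images. The flag covers both cases in
# the text (score proportional or inversely proportional to similarity).
from typing import List, Tuple

def select_associated_images(scored: List[Tuple[str, float]],
                             n: int,
                             higher_is_more_similar: bool = True) -> List[str]:
    # scored: (image_id, similarity_information) pairs.
    ranked = sorted(scored, key=lambda item: item[1],
                    reverse=higher_is_more_similar)
    return [image_id for image_id, _ in ranked[:n]]
```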
Another way is to determine the associated images according to association data, where the association data may include context information and/or user behavior data: the context information can reflect the user's drawing intention, and the user behavior data can reflect the user's habits. This can be implemented through sub-step 92 and sub-step 94:
Sub-step 92, acquiring association data, the association data including context information and/or user behavior data. For example, if the user is currently in a chat application, the context information may include local-end information and/or peer-end information; the user behavior data may be the behavior data of this user, or the behavior data of all users across the network.
Sub-step 94, selecting, according to the association data, at least one associated image corresponding to the drawing image from the reference images.
In the embodiment of the invention, at least one associated image can be determined according to the context information. In one example of the invention, the reference images include sheep from several cartoons, sketched sheep and "Pickle-style" sheep, and the acquired context information is "draw me a Pickle-style sheep"; the Pickle-style sheep can then be determined as the associated image. In another example of the invention, the reference images include pictures of the mouse in an animated cartoon, the mouse in "Black Cat Detective", the mouse in "Cat and Mouse", the mouse in "Ninja Turtles" and real-life mice, and the acquired context information is "I think the mouse in the cartoon is super cute"; the mice from the animated cartoon, "Black Cat Detective", "Cat and Mouse" and "Ninja Turtles" can then be determined as the associated images. In another example of the invention, the reference images include smiling faces from emoticon packages A, B, C and D, and the user behavior data shows that the user selects pictures from package A far more frequently than pictures from the other packages, so the smiling face from package A can be used as the associated image. Of course, the embodiment of the invention can also combine the context information and the user behavior data to determine the associated images.
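The selection by association data can be pictured as a simple re-scoring of the reference images, for example by matching image tags against the chat context and counting how often the user has picked each image before. The tag match and the usage counter below are assumptions chosen for illustration, not the patent's method.

```python
# Illustrative sketch only: pick associated images using association data
# (context information and/or user behavior data). The tags, the substring
# match and the usage counter are assumed stand-ins for the examples above.
from typing import Dict, List, Optional

def select_by_association_data(reference_images: List[Dict],
                               context_text: str = "",
                               usage_counts: Optional[Dict[str, int]] = None,
                               n: int = 5) -> List[str]:
    # reference_images: dicts such as {"id": "sheep_01", "tags": ["sheep", "cartoon"]}.
    counts = usage_counts or {}

    def score(image: Dict) -> float:
        # Context information: tags that also occur in the chat context.
        context_hits = sum(1 for tag in image["tags"] if tag in context_text)
        # User behavior data: how often this image was chosen before.
        behaviour = counts.get(image["id"], 0)
        return context_hits * 10 + behaviour

    ranked = sorted(reference_images, key=score, reverse=True)
    return [image["id"] for image in ranked[:n]]
```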
Step 510, displaying the associated image.
Step 512, receiving a screen-on operation, and displaying on the screen or sending the associated image corresponding to the screen-on operation.
In the embodiment of the invention, after the associated images of the drawing image are identified, they can be displayed. For example, the associated images may be displayed in order of their similarity information, with an associated image of higher similarity information displayed ahead of one with lower similarity information. As another example, if the associated display area and the drawing display area are the same, the drawing image may be used as the first candidate of the first screen and the associated images as the other candidates of the first screen; the present invention does not limit this.
Then, after the user decides that an associated image should be entered into the edit box or sent, the user can perform a screen-on operation on that associated image; after receiving the screen-on operation, the input method can enter the associated image into the corresponding edit box or send it. Of course, if the drawing image is displayed, the user may also perform a screen-on operation on the drawing image.
In the embodiment of the invention, after the drawing image is determined, the input method can identify the associated image corresponding to the drawing image and display the associated image, so that when the user is not satisfied with the drawing image, the image conforming to the intention can be selected from the associated image without re-drawing, and the input efficiency of the image is improved.
Secondly, in the embodiment of the invention, an identification model can be adopted to identify the associated images corresponding to the drawing image: the drawing track information of the drawing image is input into the identification model, the similarity information between the drawing track information and each reference image is computed, and at least one associated image corresponding to the drawing image is determined according to the reference images and their similarity information. This further improves the accuracy of associated-image identification, so that the displayed associated images better match the user's intention, which further improves image input efficiency and also improves the user experience.
Further, in the embodiment of the invention, when at least one associated image corresponding to the drawing image is determined according to the reference images and their similarity information, the reference images are sorted according to their similarity information and the first N reference images with the highest similarity information are selected as the associated images; that is, associated images highly similar to the drawing image are selected, which further improves the user experience.
In another embodiment of the present invention, the user may be dissatisfied with only certain parts of the drawn pattern, such as the color of a line, the size of a certain part, or the curvature of a certain line segment. The input method can therefore provide an editing function for the drawing image: the user can perform an editing operation on the drawing image to modify it, so that redrawing is not needed, which further improves the efficiency of pattern input.
Referring to fig. 7, a flowchart illustrating steps of an alternative embodiment of an input method of the present invention may specifically include the following steps:
step 702, obtaining drawing track information of a keyboard region by an input method, and displaying a corresponding drawing image according to the drawing track information.
This step is similar to step 502 described above and will not be described again.
In the embodiment of the invention, the input method can also provide an editing function for the drawing image, so that when the user is dissatisfied with the drawing image it can be modified by editing. Therefore, after the input method determines the user's final drawing image, the drawing image can be displayed in a drawing display area such as the candidate area, so that the user can edit it.
Step 704, receiving an editing operation for the drawing image, and editing the drawing image to obtain a corresponding edited image.
In the embodiment of the invention, when the user decides that the drawing image needs to be edited, the user can perform an operation for displaying the corresponding editing interface on the drawing image, for example by clicking the editing button 4 in fig. 6. After receiving this operation, the input method can display the editing interface corresponding to the drawing image; fig. 8 shows an example of the editing interface corresponding to the drawing image, where the drawing image is the one in fig. 2. The user can then perform editing operations in the editing interface; the editing operations include various operations such as erasing, drawing lines and filling color. The input method receives the user's editing operations and edits the drawing image according to the editing information corresponding to those operations, thereby obtaining an edited image. The user can thus modify the drawn pattern by editing the drawing image without redrawing, which further improves image input efficiency.
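For illustration, the editing operations named above (erasing, drawing lines, filling color) can be sketched as operations on a plain pixel grid; a real input method would apply them to its drawing canvas, and all names and the RGBA-grid representation here are assumptions rather than the patent's implementation.

```python
# Illustrative sketch only: apply the editing operations mentioned in the
# text (erase, draw, fill color) to a drawing image held as a simple
# RGBA pixel grid bitmap[y][x]. Names and representation are assumptions.
from typing import List, Tuple

Color = Tuple[int, int, int, int]      # RGBA
Bitmap = List[List[Color]]             # bitmap[y][x]

WHITE: Color = (255, 255, 255, 255)

def erase(bitmap: Bitmap, x: int, y: int, radius: int = 3) -> None:
    # Erase by painting a small square around (x, y) with the background.
    for yy in range(max(0, y - radius), min(len(bitmap), y + radius + 1)):
        for xx in range(max(0, x - radius), min(len(bitmap[0]), x + radius + 1)):
            bitmap[yy][xx] = WHITE

def draw_point(bitmap: Bitmap, x: int, y: int, color: Color) -> None:
    # Drawing an extra line reduces to drawing its points in order.
    if 0 <= y < len(bitmap) and 0 <= x < len(bitmap[0]):
        bitmap[y][x] = color

def fill_color(bitmap: Bitmap, x: int, y: int, color: Color) -> None:
    # Flood-fill the connected region containing (x, y) with the new color.
    target = bitmap[y][x]
    if target == color:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(bitmap) and 0 <= cx < len(bitmap[0]) and bitmap[cy][cx] == target:
            bitmap[cy][cx] = color
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
```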
Step 706, determining the edited image as an associated image corresponding to the drawing image.
Step 708, displaying the associated image.
Step 710, receiving a screen-on operation, and displaying on the screen or sending the associated image corresponding to the screen-on operation.
In the embodiment of the invention, after the input method determines that the user has finished editing the drawing image, the edited image can be determined as an associated image and then displayed. The user can then perform a screen-on operation on the edited image, and the input method can display the edited image on the screen or send it; of course, the associated image may also be saved.
In an alternative embodiment of the present invention, the associated display area and the drawing display area may be the same, so that when images are displayed in the drawing display area the edited image and the drawing image may be displayed at the same time, or the edited image may replace the drawing image, i.e. only the edited image is displayed in the drawing display area. Of course, in the embodiment of the present invention, after the edited image is determined as an associated image it may also be directly displayed on the screen or sent; this can be set as required, and the embodiment of the present invention does not limit it.
Of course, in an alternative embodiment of the present invention, after determining the drawing image, the input method may determine the edited image according to the user's editing operation and determine the edited image as an associated image, and may also identify associated images corresponding to the drawing image according to the user's search operation (or by association); the associated images corresponding to the drawing image may thus include both the edited image and the images identified through the search operation (or association).
In addition, in an optional embodiment of the present invention, after the input method determines the edited image, it may further identify associated images of the edited image: for example, it may associate on the edited image directly and identify its associated images, or, after the edited image is displayed in the associated display area, it may identify the associated images corresponding to the edited image according to a search operation performed by the user on the edited image. The associated images of the edited image can then be displayed, and the edited image may also be displayed; the present invention does not limit this.
In summary, in the embodiment of the invention, when the input method obtains the drawing track information of the keyboard area, it can display a corresponding drawing image according to the drawing track information, then edit the drawing image into an edited image according to an editing operation for the drawing image, and then determine the edited image as an associated image and display it. When the user is not satisfied with the drawing image, an image that satisfies the intention can be generated by editing the drawing image without redrawing, which further improves the efficiency of image input.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 9, a block diagram of an embodiment of an input device of the present invention is shown, and may specifically include the following modules: an information acquisition module 902, an image recognition module 904, and an image presentation module 906, wherein,
the information acquisition module 902 is configured to obtain drawing track information of the keyboard area of the input method, and to display a corresponding drawing image according to the drawing track information;
an image recognition module 904, configured to recognize at least one associated image corresponding to the drawn image;
and the image display module 906 is configured to display the associated image.
Referring to fig. 10, a block diagram of an alternative embodiment of an input device of the present invention is shown. In an alternative embodiment of the present invention, the apparatus further comprises: and the editing module 908 is configured to receive an editing operation for the drawing image, and edit the drawing image to obtain a corresponding edited image.
In an alternative embodiment of the present invention, the image recognition module 904 is configured to recognize at least one associated image corresponding to the drawn image using a recognition model.
In an alternative embodiment of the present invention, the image recognition module 904 includes: an information calculation sub-module 9042, a first image determination sub-module 9044 and a second image determination sub-module 9046, wherein,
an information calculation sub-module 9042, configured to input drawing track information of the drawing image into an identification model, and determine a plurality of reference images;
a first image determining sub-module 9044, configured to determine at least one associated image corresponding to the drawing image according to the reference image.
And a second image determining sub-module 9046, configured to determine the edited image as an associated image corresponding to the drawing image.
The first image determining sub-module 9044 is configured to sort the reference images according to the similarity information corresponding to each reference image, and to select the first N reference images with the highest similarity information as the associated images, where N is an integer greater than 0.
The first image determining sub-module 9044 is specifically configured to acquire association data, where the association data includes context information and/or user behavior data, and to select, according to the association data, at least one associated image corresponding to the drawing image from the reference images.
The input method of the embodiment of the invention can obtain the drawing track information of the keyboard area, display a corresponding drawing image according to the drawing track information, then identify at least one associated image corresponding to the drawing image, and display the associated image for the user to select. When the user is not satisfied with the drawing image, an image that matches the intention can be selected from the associated images and input without redrawing, which improves the efficiency of image input.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
Fig. 11 is a block diagram illustrating a configuration of an electronic device 1100 for input, according to an example embodiment. For example, the electronic device 1100 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, an electronic device 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and a communication component 1116.
The processing component 1102 generally controls overall operation of the electronic device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1102 may include one or more processors 1120 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1102 can include one or more modules that facilitate interactions between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
Memory 1104 is configured to store various types of data to support operations at device 1100. Examples of such data include instructions for any application or method operating on the electronic device 1100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1104 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 1106 provides power to the various components of the electronic device 1100. The power components 1106 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 1100.
The multimedia component 1108 includes a screen that provides an output interface between the electronic device 1100 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1108 includes a front camera and/or a rear camera. When the electronic device 1100 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1110 is configured to output and/or input an audio signal. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio component 1110 further comprises a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 1114 includes one or more sensors for providing status assessment of various aspects of the electronic device 1100. For example, the sensor assembly 1114 may detect an on/off state of the device 1100, a relative positioning of components such as a display and keypad of the electronic device 1100, a change in position of the electronic device 1100 or a component of the electronic device 1100, the presence or absence of a user's contact with the electronic device 1100, an orientation or acceleration/deceleration of the electronic device 1100, and a change in temperature of the electronic device 1100. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the electronic device 1100 and other devices. The electronic device 1100 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1116 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as a memory 1104 including instructions executable by the processor 1120 of the electronic device 1100 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
When instructions in a non-transitory computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform an input method, the method including: obtaining, by the input method, drawing track information of the keyboard area of the input method, and displaying a corresponding drawing image according to the drawing track information; identifying at least one associated image corresponding to the drawn image; and displaying the associated image.
Optionally, the identifying at least one associated image corresponding to the drawn image includes: and identifying at least one associated image corresponding to the drawn image by adopting an identification model.
Optionally, the identifying at least one associated image corresponding to the drawn image by adopting an identification model includes: inputting drawing track information of the drawing images into an identification model, and determining a plurality of reference images; and determining at least one associated image corresponding to the drawing image according to the reference image.
Optionally, the determining at least one associated image corresponding to the drawing image according to the reference image includes: acquiring association data, wherein the association data comprises: contextual information and/or user behavior data; and selecting at least one associated image corresponding to the drawing image from the reference images according to the associated data.
Optionally, the determining at least one associated image corresponding to the drawing image according to the reference image includes: sequencing the reference images according to the similarity information corresponding to the reference images; and selecting the first N reference images with highest similarity information as associated images, wherein N is an integer greater than 0.
Optionally, before the identifying at least one associated image corresponding to the drawn image, the method further includes: receiving an editing operation for the drawn image, and editing the drawn image to obtain a corresponding edited image.
Optionally, the step of identifying at least one associated image corresponding to the drawn image further includes: determining the edited image as an associated image corresponding to the drawn image.
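A minimal sketch of this editing path, under the assumption that an editing operation can be modeled as a function applied to the drawn image; the mirror operation and the point-list image representation are purely hypothetical.

from typing import Callable, List

def apply_edits(drawn_image: List, edits: List[Callable]) -> List:
    # Apply each received editing operation to the drawn image in turn.
    edited = drawn_image
    for edit in edits:
        edited = edit(edited)
    return edited

def associated_with_edit(associated: List, edited_image) -> List:
    # The edited image is itself treated as an associated image of the drawn image.
    return [edited_image] + associated

mirror = lambda points: [(1.0 - x, y) for x, y in points]   # hypothetical edit operation
edited = apply_edits([(0.2, 0.3), (0.8, 0.3)], [mirror])
print(associated_with_edit(["heart", "star"], edited))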
Fig. 12 is a schematic structural diagram of an electronic device 1200 for input according to another exemplary embodiment of the present invention. The electronic device 1200 may be a server, which may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 1222 (e.g., one or more processors), a memory 1232, and one or more storage media 1230 (e.g., one or more mass storage devices) storing applications 1242 or data 1244, wherein the memory 1232 and the storage media 1230 may be transitory or persistent. The programs stored on the storage media 1230 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Still further, the central processing unit 1222 may be configured to communicate with the storage media 1230 and execute, on the server, the series of instruction operations stored on the storage media 1230.
The server may also include one or more power supplies 1226, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1258, one or more keyboards 1256, and/or one or more operating systems 1241, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for: acquiring drawing track information of a keyboard region of an input method, and displaying a corresponding drawn image according to the drawing track information; identifying at least one associated image corresponding to the drawn image; and displaying the associated image.
Optionally, the identifying at least one associated image corresponding to the drawn image includes: identifying at least one associated image corresponding to the drawn image by using a recognition model.
Optionally, the identifying at least one associated image corresponding to the drawn image by using a recognition model includes: inputting the drawing track information of the drawn image into the recognition model to determine a plurality of reference images; and determining at least one associated image corresponding to the drawn image according to the reference images.
Optionally, the determining at least one associated image corresponding to the drawn image according to the reference images includes: acquiring association data, wherein the association data comprises context information and/or user behavior data; and selecting at least one associated image corresponding to the drawn image from the reference images according to the association data.
Optionally, the determining at least one associated image corresponding to the drawn image according to the reference images includes: sorting the reference images according to the similarity information corresponding to the reference images; and selecting the top N reference images with the highest similarity information as associated images, wherein N is an integer greater than 0.
Optionally, before the identifying at least one associated image corresponding to the drawn image, the one or more programs further comprise instructions for: receiving an editing operation for the drawn image, and editing the drawn image to obtain a corresponding edited image.
Optionally, the step of identifying at least one associated image corresponding to the drawn image further comprises instructions for: determining the edited image as an associated image corresponding to the drawn image.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts between the embodiments, reference may be made to one another.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that relational terms such as first and second are used herein only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The input method, the input device, and the electronic device provided by the present invention have been described above with reference to specific examples, which are used only to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present invention. In view of the above, the contents of this specification should not be construed as limiting the present invention.

Claims (7)

1. An input method, comprising:
acquiring, by the input method, drawing track information in a keyboard region, displaying a corresponding drawn image according to the drawing track information, performing prediction based on the currently drawn track, and displaying a predicted image;
inputting the drawing track information of the drawn image into a recognition model to determine a plurality of reference images, wherein similarity information between track feature points of the reference images and the drawing track information is higher than similarity information between the drawing track information and images other than the reference images; acquiring association data, wherein the association data comprises at least one of context information and user behavior data, the context information comprises at least one of local-end information and opposite-end information and is used for reflecting a drawing intention of the user, and the user behavior data comprises behavior data of the user and is used for reflecting habits of the user; and selecting at least one associated image corresponding to the drawn image from the reference images according to the association data;
receiving an editing operation for the drawn image, and editing the drawn image to obtain a corresponding edited image; determining the edited image as an associated image corresponding to the drawn image;
displaying at least one associated image corresponding to the drawn image and at least one associated image corresponding to the predicted image;
and receiving a screen-on operation, and displaying on the screen or sending the associated image, the drawn image, or the predicted image corresponding to the screen-on operation.
2. The method according to claim 1, wherein the method further comprises:
sorting the reference images according to the similarity information corresponding to the reference images; and
selecting the top N reference images with the highest similarity information as associated images, wherein N is an integer greater than 0.
3. An input device, comprising:
an information acquisition module, configured to acquire drawing track information of a keyboard region of an input method and display a corresponding drawn image according to the drawing track information;
a module configured to perform the steps of: performing prediction based on the currently drawn track, and displaying a predicted image;
an image recognition module comprising an information calculation sub-module, a first image determination sub-module, and a second image determination sub-module, wherein the information calculation sub-module is configured to input the drawing track information of the drawn image into a recognition model and determine a plurality of reference images, and similarity information between track feature points of the reference images and the drawing track information is higher than similarity information between the drawing track information and images other than the reference images;
the first image determination sub-module is configured to acquire association data, wherein the association data comprises at least one of context information and user behavior data, the context information comprises at least one of local-end information and opposite-end information and is used for reflecting a drawing intention of the user, and the user behavior data comprises behavior data of the user and is used for reflecting habits of the user; and to select at least one associated image corresponding to the drawn image from the reference images according to the association data;
an editing module, configured to receive an editing operation for the drawn image and edit the drawn image to obtain a corresponding edited image;
the second image determination sub-module is configured to determine the edited image as an associated image corresponding to the drawn image;
an image display module, configured to display at least one associated image corresponding to the drawn image and at least one associated image corresponding to the predicted image; and
a module configured to perform the steps of: receiving a screen-on operation, and displaying on the screen or sending the associated image, the drawn image, or the predicted image corresponding to the screen-on operation.
4. The apparatus according to claim 3, wherein
the first image determination sub-module is specifically configured to sort the reference images according to the similarity information corresponding to the reference images, and to select the top N reference images with the highest similarity information as associated images, wherein N is an integer greater than 0.
5. A readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the input method according to any one of claims 1 to 2.
6. An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
acquiring, by an input method, drawing track information in a keyboard region, displaying a corresponding drawn image according to the drawing track information, performing prediction based on the currently drawn track, and displaying a predicted image;
inputting the drawing track information of the drawn image into a recognition model to determine a plurality of reference images, wherein similarity information between track feature points of the reference images and the drawing track information is higher than similarity information between the drawing track information and images other than the reference images; acquiring association data, wherein the association data comprises at least one of context information and user behavior data, the context information comprises at least one of local-end information and opposite-end information and is used for reflecting a drawing intention of the user, and the user behavior data comprises behavior data of the user and is used for reflecting habits of the user; and selecting at least one associated image corresponding to the drawn image from the reference images according to the association data;
receiving an editing operation for the drawn image, and editing the drawn image to obtain a corresponding edited image; determining the edited image as an associated image corresponding to the drawn image;
displaying at least one associated image corresponding to the drawn image and at least one associated image corresponding to the predicted image;
and receiving a screen-on operation, and displaying on the screen or sending the associated image, the drawn image, or the predicted image corresponding to the screen-on operation.
7. The electronic device of claim 6, wherein the one or more programs further comprise instructions for:
sorting the reference images according to the similarity information corresponding to the reference images; and
selecting the top N reference images with the highest similarity information as associated images, wherein N is an integer greater than 0.
CN201810828947.XA 2018-07-25 2018-07-25 Input method and device and electronic equipment Active CN110764627B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810828947.XA CN110764627B (en) 2018-07-25 2018-07-25 Input method and device and electronic equipment
PCT/CN2019/071010 WO2020019683A1 (en) 2018-07-25 2019-01-09 Input method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810828947.XA CN110764627B (en) 2018-07-25 2018-07-25 Input method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110764627A CN110764627A (en) 2020-02-07
CN110764627B true CN110764627B (en) 2023-11-10

Family

ID=69180221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810828947.XA Active CN110764627B (en) 2018-07-25 2018-07-25 Input method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN110764627B (en)
WO (1) WO2020019683A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857913A (en) * 2020-07-03 2020-10-30 Oppo广东移动通信有限公司 Screen-turning image generation method and device, electronic equipment and readable storage medium
CN112099645A (en) * 2020-09-04 2020-12-18 北京百度网讯科技有限公司 Input image generation method and device, electronic equipment and storage medium
CN112269522A (en) * 2020-10-27 2021-01-26 维沃移动通信(杭州)有限公司 Image processing method, image processing device, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789130A (en) * 2009-12-24 2010-07-28 中兴通讯股份有限公司 Method and device for terminal equipment to use self-drawn picture
CN104461099A (en) * 2013-09-24 2015-03-25 邓桂成 Handwritten simplified Chinese character input method and system
CN105144037A (en) * 2012-08-01 2015-12-09 苹果公司 Device, method, and graphical user interface for entering characters
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
CN108287857A (en) * 2017-02-13 2018-07-17 腾讯科技(深圳)有限公司 Expression picture recommends method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102285699B1 (en) * 2015-01-09 2021-08-04 삼성전자주식회사 User terminal for displaying image and image display method thereof
CN105183316B (en) * 2015-08-31 2018-05-08 百度在线网络技术(北京)有限公司 A kind of method and apparatus for generating face word
CN107122113B (en) * 2017-03-31 2021-07-13 北京小米移动软件有限公司 Method and device for generating picture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789130A (en) * 2009-12-24 2010-07-28 中兴通讯股份有限公司 Method and device for terminal equipment to use self-drawn picture
CN105144037A (en) * 2012-08-01 2015-12-09 苹果公司 Device, method, and graphical user interface for entering characters
CN104461099A (en) * 2013-09-24 2015-03-25 邓桂成 Handwritten simplified Chinese character input method and system
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
CN108287857A (en) * 2017-02-13 2018-07-17 腾讯科技(深圳)有限公司 Expression picture recommends method and device

Also Published As

Publication number Publication date
WO2020019683A1 (en) 2020-01-30
CN110764627A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN109089133B (en) Video processing method and device, electronic equipment and storage medium
CN108038102B (en) Method and device for recommending expression image, terminal and storage medium
CN108227950B (en) Input method and device
CN106485567B (en) Article recommendation method and device
CN109961094B (en) Sample acquisition method and device, electronic equipment and readable storage medium
CN109447125B (en) Processing method and device of classification model, electronic equipment and storage medium
CN110781323A (en) Method and device for determining label of multimedia resource, electronic equipment and storage medium
CN110764627B (en) Input method and device and electronic equipment
US11335348B2 (en) Input method, device, apparatus, and storage medium
CN113065591B (en) Target detection method and device, electronic equipment and storage medium
CN111242303A (en) Network training method and device, and image processing method and device
CN111046927B (en) Method and device for processing annotation data, electronic equipment and storage medium
CN113920293A (en) Information identification method and device, electronic equipment and storage medium
CN112784151B (en) Method and related device for determining recommended information
CN109144286B (en) Input method and device
CN109901726B (en) Candidate word generation method and device and candidate word generation device
CN111831132A (en) Information recommendation method and device and electronic equipment
CN113870195A (en) Target map detection model training and map detection method and device
CN109145151B (en) Video emotion classification acquisition method and device
CN113761275A (en) Video preview moving picture generation method, device and equipment and readable storage medium
CN108154092B (en) Face feature prediction method and device
CN112036247A (en) Expression package character generation method and device and storage medium
CN111428806B (en) Image tag determining method and device, electronic equipment and storage medium
CN113190725B (en) Object recommendation and model training method and device, equipment, medium and product
CN115484471B (en) Method and device for recommending anchor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant