CN113434045A - Input method, input apparatus, and device for input - Google Patents


Info

Publication number
CN113434045A
CN113434045A
Authority
CN
China
Prior art keywords
content
user
augmented
input
writing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010209469.1A
Other languages
Chinese (zh)
Inventor
冯静静
郝男
蔡雅莉
鲁剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202010209469.1A priority Critical patent/CN113434045A/en
Publication of CN113434045A publication Critical patent/CN113434045A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/335Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of this application disclose an input method, an input apparatus, and a device for input. One embodiment of the method includes: detecting, based on input-related information of a user, whether the user has an augmented writing intention; if the augmented writing intention exists, acquiring associated content associated with the user's augmented writing intention; integrating the user's input content with the acquired associated content to generate augmented content; and presenting at least one piece of the generated augmented content. This implementation ensures the richness of the input content while improving the convenience of content acquisition and the input efficiency.

Description

Input method, input apparatus, and device for input
Technical Field
Embodiments of this application relate to the field of computer technology, and in particular to an input method, an input apparatus, and a device for input.
Background
In input scenarios, a user often needs to enrich and polish the input content before sending it. For example, when sending New Year blessings to a peer through an instant messaging application, the user usually needs to compose a blessing message before sending it.
In existing approaches, to ensure the richness of input content, a user generally has to manually search the web for related content based on his or her input intention, edit the content with reference to the search results, and then send it. Alternatively, the user composes the content entirely alone and then sends it. However, manually searched content tends to be generic and may not meet the user's needs. Moreover, manually searching for or composing content is usually time-consuming, resulting in low input efficiency, and it is especially inconvenient for users with weak searching and editing skills.
Disclosure of Invention
Embodiments of this application provide an input method, an input apparatus, and a device for input, which ensure the richness of the input content while improving the convenience of content acquisition and the input efficiency.
In a first aspect, an embodiment of this application provides an input method, including: detecting, based on input-related information of a user, whether the user has an augmented writing intention; if the augmented writing intention exists, acquiring associated content associated with the user's augmented writing intention; integrating the user's input content with the acquired associated content to generate augmented content; and presenting at least one piece of the generated augmented content.
In a second aspect, an embodiment of this application provides an input apparatus, including: a detection unit configured to detect, based on input-related information of a user, whether the user has an augmented writing intention; an acquisition unit configured to acquire, if the augmented writing intention exists, associated content associated with the user's augmented writing intention; an integration unit configured to integrate the user's input content with the acquired associated content to generate augmented content; and a presentation unit configured to present at least one piece of the generated augmented content.
In a third aspect, an embodiment of this application provides a device for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and include instructions for: detecting, based on input-related information of a user, whether the user has an augmented writing intention; if the augmented writing intention exists, acquiring associated content associated with the user's augmented writing intention; integrating the user's input content with the acquired associated content to generate augmented content; and presenting at least one piece of the generated augmented content.
In a fourth aspect, embodiments of the present application provide a computer-readable medium on which a computer program is stored, which when executed by a processor, implements the method as described in the first aspect above.
According to the input method, the input apparatus, and the device for input provided by the embodiments of this application, when it is detected, based on the user's input-related information, that the user has an augmented writing intention, associated content associated with that intention is acquired, and the user's input content is integrated with the acquired associated content to generate augmented content, so that at least one piece of the generated augmented content can be presented. The input content can thus be expanded through content associated with the user's augmented writing intention, ensuring its richness. Moreover, because the augmented content is generated and presented automatically once the intention is detected, the convenience of content acquisition and the input efficiency are improved compared with manual searching or composition.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of an input method according to the present application;
FIG. 2 is a flow diagram of yet another embodiment of an input method according to the present application;
FIG. 3 is a schematic diagram of an augmented content presentation process according to the present application;
FIG. 4 is yet another schematic diagram of an augmented content presentation process according to the present application;
FIG. 5 is a schematic diagram of an embodiment of an input device according to the present application;
FIG. 6 is a schematic diagram of a structure of an apparatus for input according to the present application;
FIG. 7 is a schematic diagram of a server in accordance with some embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to FIG. 1, a flow 100 of one embodiment of an input method according to the present application is shown. The input method may run on various electronic devices, including but not limited to: a server, a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, an in-vehicle computer, a desktop computer, a set-top box, a smart TV, a wearable device, and so on.
The electronic device may be installed with various types of client applications, such as an input method application, an instant messaging application, social platform software, and the like.
The input method application mentioned in the embodiments of this application can support multiple input methods. An input method is an encoding scheme used to enter various symbols into electronic devices such as computers and mobile phones, and a user can conveniently enter desired characters or character strings into an electronic device using the input method application. Note that in the embodiments of this application, besides common Chinese input methods (such as the pinyin, wubi, zhuyin, speech, and handwriting input methods), input methods for other languages (such as an English input method, a Japanese hiragana input method, or a Korean input method) may also be supported; neither the input method nor its language is limited here.
The input method in this embodiment may include the following steps:
Step 101: detect, based on the user's input-related information, whether the user has an augmented writing intention.
In this embodiment, the execution subject of the input method may first acquire the user's input-related information, which may include, but is not limited to, at least one of: the user's input content, context information of the input content, the current input scene, the user's personal preferences, and the user's historical behavior data during input. The input content here may refer to text the user is currently editing but has not yet sent. As an example, in a scenario where a local user is chatting with a peer through an instant messaging application, the input content may be the instant message the local user is currently editing but has not yet sent to the peer.
After acquiring the user's input-related information, the execution subject may detect, based on that information, whether the user has an augmented writing intention, i.e., an intention to further enrich the current input content.
In one scenario, the execution subject may be deployed with an algorithm or model for detecting the user's augmented writing intention. In that case, it can detect locally whether the user has an augmented writing intention and which specific intention (e.g., its category or name).
In another scenario, the execution subject may detect whether the user has an augmented writing intention by requesting a server. Specifically, a detection request containing the input-related information may be sent to the server, and the server detects, based on that information, whether the user has an augmented writing intention and returns a detection result. The detection result can indicate not only whether the user has an augmented writing intention but also, when one exists, its category or name, such as a blessing intention, a comfort intention, or a blame intention.
Here, the categories or names of augmented writing intentions may be subdivided as necessary. For example, the blessing intention may be subdivided into a New Year blessing intention, a birthday blessing intention, a Teachers' Day blessing intention, and the like. This embodiment does not limit how the categories are divided.
The server here may be hardware or software. When implemented as hardware, it may be a distributed cluster formed by multiple devices or a single device; when implemented as software, it may be multiple pieces of software or software modules, or a single one. It is not specifically limited here.
In this embodiment, the user's augmented writing intention can be detected in various ways. As an example, a keyword table corresponding to each augmented writing intention may be constructed in advance. For example, for the New Year blessing intention, the corresponding keyword table may include, but is not limited to, phrases such as "Happy New Year," "may the new year be better than the old," "may everything go well," and "may there be surplus every year." Keywords may then be extracted from the user's input content and matched against each keyword table. If a keyword table is matched, the user can be considered to have an augmented writing intention, and the intention corresponding to the matched table can be taken as the user's intention. If no keyword table is matched, the user can be considered to have no augmented writing intention.
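The keyword-table matching described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the intent names, keyword tables, and the `detect_intent` function are invented for the example.

```python
# Hypothetical keyword tables, one per augmented writing intention.
# The intent names and phrases are made-up stand-ins.
KEYWORD_TABLES = {
    "new_year_blessing": {"happy new year", "new year good", "surplus every year"},
    "birthday_blessing": {"happy birthday"},
}

def detect_intent(input_content):
    """Match the input content against each keyword table; return the
    intention of the first matching table, or None if nothing matches."""
    text = input_content.lower()
    for intent, keywords in KEYWORD_TABLES.items():
        if any(kw in text for kw in keywords):
            return intent
    return None
```

A matched table yields that table's intention; no match means no augmented writing intention is assumed.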
As yet another example, the user's historical behavior data during input may first be obtained, and the frequency with which the user triggers automatically generated augmented content analyzed. If the trigger frequency is below a preset value, it may be determined that the user does not prefer automatically generated augmented content and can be considered to have no augmented writing intention. Conversely, if the trigger frequency is at or above the preset value, the user can be considered to prefer such content and to have an augmented writing intention. Further, the user's current specific intention can be detected from the input-related information, for example based on the current input content or on the user's personal preferences.
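The trigger-frequency heuristic above can be sketched as a small helper. The function name and the threshold value are assumptions for illustration only; the patent does not specify either.

```python
def prefers_augmented_content(trigger_count, shown_count, threshold=0.2):
    """Treat the user as having an augmented writing intention when the
    fraction of shown augmented contents they actually triggered meets
    a preset threshold (the 0.2 default is an assumed value)."""
    if shown_count == 0:
        return False  # no history yet; the guidance flow below applies instead
    return trigger_count / shown_count >= threshold
```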
It is understood that, for users who rarely trigger the feature, in order to cultivate their usage habits, the user's potential augmented writing intention may still be determined from the input-related content (such as the input content and its context), and the content generation process below performed. When high-quality augmented content is obtained, it can be presented or recommended, guiding the user to discover and try the input method's augmentation function.
As yet another example, context information of the current input content may first be obtained, and the user's augmented writing intention detected from it; for instance, if the context contains a keyword corresponding to a certain intention, the user can be considered to have that intention. The context information here refers to information around the current input position, i.e., where the input focus (e.g., the cursor) is located in the current input scene. For example, when editing a document in Word, the input position is where the cursor sits in the document. In an instant messaging scenario, the context information may be the messages sent or received in the chat interface before the current time.
In some optional implementations of this embodiment, the user's input-related information may include the user's input content. In that case, the user's augmented writing intention may also be detected through the following steps:
First, extract keywords from the input content and acquire synonyms of the keywords.
The keywords in the input content may be extracted in various ways. As one example, the input content may first be segmented into words, and each word then checked with a common keyword detection method such as word-list matching. As another example, an existing keyword detection model may be used to determine the keywords in the input content.
The synonyms of a keyword can likewise be obtained in various ways. As one example, the category of the keyword may be determined first; for instance, the category of "Happy New Year" is New Year blessing. Other phrases belonging to the same category, such as "may the new year be better than the old" or "may there be surplus every year," can then be treated as its synonyms. After the synonyms are obtained, they may be further filtered to keep the single most relevant word, or the few most relevant ones. As another example, a large set of synonym groups may be curated manually, and the synonyms of an extracted keyword determined by string matching.
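The category-based synonym lookup in the first example can be sketched like this. The category map and its entries are invented stand-ins for the blessing phrases mentioned in the text.

```python
# Hypothetical phrase-to-category map; same-category phrases are
# treated as synonyms of one another, as the step above describes.
CATEGORY_OF = {
    "happy new year": "new_year_blessing",
    "new year good": "new_year_blessing",
    "surplus every year": "new_year_blessing",
    "happy birthday": "birthday_blessing",
}

def synonyms_of(keyword):
    """Return the other phrases sharing the keyword's category."""
    category = CATEGORY_OF.get(keyword)
    if category is None:
        return []
    return sorted(w for w, c in CATEGORY_OF.items()
                  if c == category and w != keyword)
```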
Second, determine input features based on the keywords and their synonyms. Here, word vectors (word embeddings) of the keywords and synonyms may be computed and used as the input features.
Third, acquire the user features of the user. The user features may be a vector representation of the user profile, which may include, but is not limited to, the user's age, gender, location, preferences, occupation, and so on.
Fourth, input the input features and the user features into a pre-trained augmented writing intention detection model to generate a detection result for the user. The model can detect both whether the user has an augmented writing intention and which specific intention it is.
In practice, the augmented writing intention detection model may be pre-trained using a machine learning method (e.g., supervised learning). The training sample set may contain a large number of sample texts, each with corresponding input features and user features, as well as annotation information indicating its augmented writing intention (e.g., a blessing intention, a comfort intention, a blame intention, or no intention). Treating "no intention" as its own class lets the model detect both whether an intention exists and which one it is.
Before training the augmented writing intention detection model, an initial model may be constructed. The initial model may be any model capable of classification, such as a convolutional neural network (CNN). The initial model is then trained on the sample set to obtain the augmented writing intention detection model.
Specifically, the input features and user features corresponding to a sample text are fed to the initial model, and a loss value is computed from the model's output together with the sample's annotation information. The loss value characterizes the difference between the output and the annotation, i.e., the model's accuracy: the larger the loss, the larger the difference and the lower the accuracy. It may be computed based on, for example, the Euclidean distance. Based on the loss value, backpropagation and gradient descent can be used to update the current model's parameters; each pass over a sample text's features thus updates the parameters once.
In practice, whether training is complete may be determined in various ways. As an example, training may be deemed complete when the similarity between the model's output and the corresponding annotation information reaches a preset value (e.g., 95%); as another example, when the model has been trained a preset number of times. Once training is deemed complete, the trained model is taken as the augmented writing intention detection model.
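The forward-pass / loss / parameter-update cycle described above can be sketched with a toy logistic-regression classifier standing in for the CNN. Everything here (the function names, the learning rate, the log-loss gradient) is an illustrative assumption, not the patent's model.

```python
import math

def train_classifier(samples, labels, epochs=200, lr=0.5):
    """Toy stand-in for the intention detection model: logistic
    regression trained by gradient descent, mirroring the cycle
    above (forward pass, loss gradient, parameter update)."""
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # model output in (0, 1)
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    """Classify a feature vector with the trained toy model."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```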
Step 102: if the augmented writing intention exists, acquire the associated content associated with the user's augmented writing intention.
In this embodiment, if it is detected that the user has an augmented writing intention, the execution subject may acquire associated content associated with that intention. The associated content may be text content matching the user's augmented writing intention. For example, if the user's intention is blessing, the associated content may be a blessing phrase such as "May the new year be better than the old; may you be happy and at peace; may I accompany you year after year."
In practice, the associated content associated with the user's augmented writing intention may be obtained from a preset content library. In one scenario, the content library is pre-configured locally on the execution subject, for example in a data package of the input method application, and the execution subject obtains the associated content directly from the local library. In another scenario, the content library is stored on a server, and the execution subject obtains the associated content by sending a request to the server. The two approaches can also be combined, obtaining associated content from both the local library and the server.

In some optional implementations of this embodiment, the associated content associated with the user's augmented writing intention may be acquired from a first content library and a second content library respectively. The content in the first library is manually created in advance, while the content in the second library is searched from the internet, either in advance or in real time. In practice, multiple pieces of associated content may be manually created for different augmented writing intentions and stored per intention to build the first library in advance; similarly, related content may be searched from the internet for different intentions and stored per intention to build the second library in advance.
It will be appreciated that since the content in the first content library is created manually, it is typically more polished, while the content in the second library, being obtained from the internet, is typically rougher. The first library's content can therefore be given higher priority and the second library's lower priority, so that when augmented content is presented to a user for the first time it can be ordered by priority. On a first presentation the user has never triggered any augmented content, so no historical behavior features exist and the current candidates cannot be ranked from the features of previously triggered content; setting priorities per content library makes a reasonable ordering possible in this cold-start case.
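The two-library retrieval with priority can be sketched as follows. The function name and the dictionary-of-lists library shape are assumptions for illustration; the patent does not fix a storage format.

```python
def fetch_associated_content(intent, curated_library, web_library):
    """Gather associated content for an intention from both libraries,
    keeping the manually created (first-library) entries ahead of the
    internet-sourced (second-library) entries, per the priority scheme."""
    return list(curated_library.get(intent, [])) + list(web_library.get(intent, []))
```

The curated entries simply come first in the returned list, which gives the cold-start ordering described above.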
Step 103: integrate the user's input content with the acquired associated content to generate augmented content.
In this embodiment, the execution subject may integrate the user's input content with the acquired associated content to generate augmented content, for example by splicing (concatenation).
The integration may be done in various ways. For example, the input content may be spliced with every associated content to obtain multiple augmented contents; or spliced with one or several associated contents respectively to obtain at least one augmented content; or several associated contents may first be spliced together and the result then spliced with the input content; and so on.
In practice, for a given associated content, the associated content may be appended after the input content during splicing to obtain an augmented content. As an example, if the input content is "Xiaoming, wish you a happy birthday!" and an associated content is "May all happiness, all warmth, and all good luck surround you!", the spliced augmented content is "Xiaoming, wish you a happy birthday! May all happiness, all warmth, and all good luck surround you!"
It should be noted that local character adjustments may also be made during splicing, for example adding punctuation marks, deleting redundant punctuation, or deleting some text. This embodiment does not limit the other character-processing operations performed during splicing.
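A minimal splicing sketch, including one of the local character adjustments mentioned above (adding a terminal punctuation mark when the input lacks one). The function name and the punctuation set are illustrative assumptions.

```python
def splice(input_content, associated_content):
    """Append the associated content after the input content, adding a
    terminal punctuation mark when the input lacks one (a local
    character adjustment of the kind described above)."""
    text = input_content.rstrip()
    if text and text[-1] not in "!?.,;":
        text += "."
    return text + " " + associated_content.strip()
```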
In some optional implementations of this embodiment, the execution subject may generate the augmented content according to the following steps:
First, determine the similarity between the input content and each associated content using a text matching algorithm. The algorithm employed here may include, but is not limited to, any of the following: Jaccard distance, Jaro-Winkler distance, an N-gram language model, cosine similarity, Euclidean distance, TF-IDF (Term Frequency-Inverse Document Frequency), and the like.
Second, filter out the associated content whose similarity to the input content is below a preset threshold. This reduces the data volume while ensuring the reasonableness of the augmented content.
Third, splice the input content with the remaining associated content to generate at least one spliced content.
Fourth, rank the spliced contents and select a preset number of them, in ranked order, as the augmented contents.
Here, the spliced contents may be ranked in various ways. As an example, they can be ranked by the similarity of their associated content to the input content. As another example, when augmented content is presented to the user for the first time, they can be ranked by the source of the associated content: spliced contents built from manually created associated content first, and those built from internet-sourced associated content after.
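The four steps above can be sketched end to end with Jaccard similarity (one of the listed algorithms) as the matcher. The threshold, top-k value, and function names are assumptions for illustration.

```python
def jaccard(a, b):
    """Jaccard similarity over word sets, one of the text matching
    algorithms listed above."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def generate_augmented(input_content, associated, threshold=0.1, top_k=2):
    """Score each candidate, drop those below the similarity threshold,
    splice the rest after the input content, rank by similarity, and
    keep a preset number of results."""
    scored = [(jaccard(input_content, c), c) for c in associated]
    kept = sorted((s, c) for s, c in scored if s >= threshold)
    kept.reverse()  # highest similarity first
    return [input_content + " " + c for _, c in kept[:top_k]]
```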
Optionally, the spliced content may be sorted and selected according to the following steps:
First, the user's input features are extracted from the input content. In practice, the input features may include a word vector of a keyword in the input content, and may also include word vectors of synonyms of the keyword.
Then, the user's historical behavior features and the scene features of the current input scene are acquired. The historical behavior features herein may refer to text features of historical augmented content clicked by the user, such as style features (e.g., humorous, artistic, formal), text richness features (e.g., concise, moderate, rich), source features (e.g., manually created, obtained from the internet), and the like. The scene features herein may include, but are not limited to, features of the input scene, such as time, scene category, and the like.
Then, the text features of each piece of spliced content are acquired. These text features are essentially the same as the text features of the historical augmented content described above, and are not described again here.
Then, the input features, the historical behavior features, the scene features, and the text features are input into a pre-trained ranking model to obtain a score for each piece of spliced content. The ranking model can be obtained by pre-training, using a machine learning method, any of various existing model structures with a ranking function, such as FNN (Factorization-machine supported Neural Network), DIN (Deep Interest Network), and the like.
Then, the spliced content is sorted based on its scores. In practice, a higher score output by the ranking model generally indicates better spliced content, so the spliced content can be ranked in descending order of score.
Finally, a preset number of pieces of spliced content are selected as the augmented content according to the sorted order. In this way, when there are many pieces of spliced content, a preferred subset can be selected as the augmented content, improving the quality of the augmented content that is displayed.
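The scoring and selection steps above can be sketched as follows, with a fixed linear combination standing in for the pre-trained ranking model (FNN, DIN, or similar) and hypothetical pre-computed numeric features:

```python
def score(features):
    # Stand-in for the ranking model: a weighted sum of input, historical-behavior,
    # scene, and text features (each assumed here to be a single float).
    weights = {"input": 0.4, "history": 0.3, "scene": 0.1, "text": 0.2}
    return sum(weights[k] * features[k] for k in weights)

def rank_and_select(candidates, top_k=2):
    # candidates: list of (spliced_text, feature_dict) pairs; return the top_k
    # texts in descending score order, i.e. the selected augmented content.
    ranked = sorted(candidates, key=lambda c: score(c[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]
```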
And 104, displaying the generated at least one augmented writing content.
In this embodiment, the execution body may present the generated at least one piece of augmented content. As an example, all of the augmented content may be presented simultaneously. As another example, part of the augmented content (e.g., one or two pieces) may be presented each time, and the presented augmented content may be replaced when the user triggers a refresh function on it.
In practice, when detecting that the user triggers any piece of displayed augmented content, the execution body may take the triggered augmented content as the target content, and display the target content on the screen or send it, thereby improving the user's input efficiency.
According to the method provided in this embodiment of the application, when it is detected, based on the content input by the user, that the user has an augmented writing intention, associated content associated with that intention is acquired, and the input content is then integrated with the acquired associated content to generate augmented content, so that at least one piece of generated augmented content can be displayed. The input content can thus be expanded through the associated content associated with the user's augmented writing intention, ensuring the richness of the input content. Meanwhile, when an augmented writing intention is detected, the augmented content can be automatically generated and displayed, which improves the convenience of content acquisition and the input efficiency compared with manual searching or composition.
With further reference to fig. 2, a flow 200 of yet another embodiment of an input method is shown. The process 200 of the input method comprises the following steps:
Step 201, detecting whether the user has an augmented writing intention based on input related information of the user.
Step 202, if the augmented writing intention exists, acquiring the associated content associated with the augmented writing intention of the user.
Step 203, integrating the input content of the user with the acquired associated content to generate augmented content.
Steps 201 to 203 of this embodiment can refer to steps 101 to 103 of the corresponding embodiment in fig. 1, which is not described again in this embodiment.
Step 204, displaying augmented writing prompt information.
In this embodiment, after obtaining the augmented content, the execution body may display augmented writing prompt information. The prompt information may be used to indicate that augmented content has been automatically generated based on the original text entered by the user and can be viewed by clicking.
The augmented writing prompt information here includes at least one of: an identifier, a button, a text chain. The identifier or button may be, for example, the words "help you chat" or some visual icon in the input method interface, indicating the automatically generated augmented content. The text chain may contain thumbnail content of any generated piece of augmented content (e.g., the first-ranked piece). For example, suppose the thumbnail content occupies at most 10 characters. When the augmented content to be displayed in the text chain exceeds 10 characters, the thumbnail content can be formed from a partial text plus an ellipsis.
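The text-chain abbreviation can be sketched as a simple truncation; the 10-character limit follows the example above, while the choice of keeping a prefix of limit − 1 characters plus the ellipsis is an assumption:

```python
def text_chain_thumbnail(content: str, limit: int = 10) -> str:
    # Show short augmented content whole; abbreviate longer content so that the
    # kept prefix plus the ellipsis fits within the character limit.
    if len(content) <= limit:
        return content
    return content[: limit - 1] + "…"
```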
As an example, fig. 3 shows a schematic diagram of an augmented content presentation process. As shown in fig. 3, after the user inputs "Xiaoli, wish you a happy new year", multiple pieces of augmented content are automatically generated. One of them is "Xiaoli, wish you a happy new year; may the new year surpass the old; may you be ever happy and at peace; I want to accompany you year after year". Abbreviated content in the form of a text chain, such as "Xiaoli, wish you a happy new year…", may then be presented.
Step 205, when detecting that the user triggers the augmented writing prompt information, presenting an augmented content panel, and displaying the generated at least one piece of augmented content in the panel.
In this embodiment, when detecting that the user triggers the augmented writing prompt information, the execution body may present an augmented content panel and display the generated at least one piece of augmented content in it. The augmented content panel is used for showing augmented content.
Taking fig. 3 as an example, after the user clicks the text-chain thumbnail "Xiaoli, wish you a happy new year…", the augmented content panel may be presented in the input method interface, and a piece of augmented content may be displayed in it, such as: "Xiaoli, wish you a happy new year; may the new year surpass the old; may you be ever happy and at peace; I want to accompany you year after year".
It should be noted that, when displaying augmented content in the augmented content panel, part of a piece of augmented content, such as a preset number of characters, may also be displayed first. Meanwhile, an "expand" button can be displayed so that the user can view the complete augmented content.
As shown in fig. 4, after the user clicks the text-chain thumbnail "Xiaoli, wish you a happy new year…", an augmented content panel may be presented in the input method interface, and partial augmented content may be displayed in it, such as: "Xiaoli, wish you a happy new year; may the new year surpass the old; may you be ever happy…". After the user clicks the "expand" button, the complete piece of augmented content can be presented, such as: "Xiaoli, wish you a happy new year; may the new year surpass the old; may you be ever happy and at peace; I want to accompany you year after year".
It should be noted that, in addition to one piece of augmented content, the augmented content panel may also display another number of pieces, or all of the augmented content, which is not limited in this embodiment.
Step 206, when detecting that the user triggers a refresh function on the displayed augmented content, replacing the displayed augmented content.
In this embodiment, the augmented content panel may further include a refresh button, which may be used to trigger a refresh function on the presented augmented content. When the user triggers the refresh button, the displayed augmented content can be replaced, for example, with the next piece of augmented content.
In addition, if it is detected that the user triggers the refresh button several times in succession, for example, the number of triggers reaches a preset value without any augmented content being selected, the user can be considered dissatisfied with the current style of augmented content. At this time, the display can switch to augmented content of another style. For example, a piece of undisplayed augmented content whose style differs from every piece already displayed can be selected for display.
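The refresh heuristic described above might be sketched as follows; the refresh threshold of 3 and the style_of callable are hypothetical stand-ins for whatever the preset value and style features actually are:

```python
def pick_on_refresh(all_contents, displayed, refresh_count, style_of, max_refreshes=3):
    # Normally advance to the next undisplayed piece of augmented content; after
    # max_refreshes consecutive refreshes without a selection, prefer a piece
    # whose style differs from everything already displayed.
    remaining = [c for c in all_contents if c not in displayed]
    if refresh_count >= max_refreshes:
        seen_styles = {style_of(c) for c in displayed}
        fresh_style = [c for c in remaining if style_of(c) not in seen_styles]
        if fresh_style:
            return fresh_style[0]
    return remaining[0] if remaining else None
```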
Step 207, when detecting that the user triggers any piece of displayed augmented content, taking the triggered augmented content as the target content, and displaying the target content on a screen or sending it.
In this embodiment, when detecting that the user triggers any piece of displayed augmented content, the execution body may take the triggered augmented content as the target content and display it on the screen or send it. Content can thus be automatically expanded during the user's input, which improves input efficiency compared with manual searching or composition. Meanwhile, the augmented content can be actively displayed to the user, or displayed only after the user triggers the prompt information in the form of an identifier, a button, a text chain, or the like. In the latter case, the user can choose whether to trigger the display of the augmented content, improving the flexibility of its display.
As can be seen from fig. 2, compared with the embodiment corresponding to fig. 1, the process 200 of the input method in this embodiment adds the steps of presenting an augmented content panel upon the user triggering the augmented writing prompt information, and displaying the augmented content in that panel. During input, the user can therefore choose whether to view the automatically generated augmented content, improving the flexibility of augmented content display.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an input device, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the input device 500 according to the present embodiment includes: a detection unit 501 configured to detect whether there is an augmented writing intention of a user based on input related information of the user; an obtaining unit 502 configured to obtain, if there is an augmented writing intention, associated content associated with the augmented writing intention of the user; an integration unit 503 configured to integrate the input content of the user with the acquired associated content, and generate an augmented content; a presentation unit 504 configured to present the generated at least one augmented write content.
In some optional implementations of this embodiment, the presentation unit 504 is further configured to: displaying the expanded writing prompt information, wherein the expanded writing prompt information comprises at least one of the following items: identification, keys, text chains; and when detecting that the user triggers the augmentation writing prompt information, presenting an augmentation writing content panel, and displaying the generated at least one augmentation writing content in the augmentation writing content panel.
In some optional implementations of this embodiment, the apparatus further includes: a replacing unit configured to replace the displayed augmented content when detecting that the user triggers a refresh function on it; or a triggering unit configured to, when detecting that the user triggers any piece of displayed augmented content, take the triggered augmented content as the target content and display it on a screen or send it.
In some optional implementations of this embodiment, the input related information includes the input content of the user, and the user's augmented writing intention is detected by the following steps: extracting keywords from the input content, and acquiring synonyms of the keywords; determining input features based on the keywords and the synonyms; acquiring user features of the user; and inputting the input features and the user features into a pre-trained augmented writing intention detection model to generate an augmented writing intention detection result for the user.
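The detection steps listed above can be sketched as a pipeline; every callable here (keyword extractor, synonym lookup, embedding function, trained detection model) is a stand-in for a component the patent leaves unspecified:

```python
def detect_intent(input_content, user_features, extract_keywords, synonyms, embed, model):
    # Extract keywords, expand them with their synonyms, embed the expanded list
    # as input features, then feed input and user features to the detection model.
    keywords = extract_keywords(input_content)
    expanded = keywords + [s for k in keywords for s in synonyms(k)]
    input_features = [embed(w) for w in expanded]
    return model(input_features, user_features)
```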
In some optional implementations of the present embodiment, the obtaining unit 502 is further configured to: and acquiring the associated content associated with the user's augmented writing intention from a first content library and a second content library respectively, wherein the content in the first content library is created manually in advance, and the content in the second content library is searched and generated from the Internet in advance.
In some optional implementations of this embodiment, the integration unit 503 is further configured to: determine the similarity between the input content and each piece of associated content using a text matching algorithm; filter out the associated content whose similarity to the input content is smaller than a preset threshold; splice the input content with each piece of filtered associated content to generate at least one piece of spliced content; and sort all the spliced content and select a preset number of pieces of spliced content as the augmented content according to the sorted order.
In some optional implementations of this embodiment, the integration unit 503 is further configured to: extracting the input features of the user from the input content; acquiring historical behavior characteristics of the user and scene characteristics of a current input scene; acquiring text characteristics of each spliced content; inputting the input features, the historical behavior features, the scene features and the text features into a pre-trained sequencing model to obtain scores of all spliced contents; sequencing all spliced contents based on the scores of all the spliced contents; and selecting a preset number of spliced contents as the expanded contents according to the sequencing order.
According to the device provided in this embodiment of the application, when it is detected, based on the input related information of the user, that the user has an augmented writing intention, associated content associated with that intention is acquired, and the user's input content is then integrated with the acquired associated content to generate augmented content, so that at least one piece of generated augmented content can be displayed. The original input content can thus be expanded through the associated content associated with the user's augmented writing intention, ensuring the richness of the input content. Meanwhile, when an augmented writing intention is detected, the augmented content can be automatically generated and displayed, which improves input efficiency compared with manual searching or composition.
Fig. 6 is a block diagram illustrating an apparatus 600 for inputting according to an exemplary embodiment, where the apparatus 600 may be an intelligent terminal or a server. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 may include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessment of various aspects of the apparatus 600. For example, the sensor component 614 may detect an open/closed state of the device 600, the relative positioning of components, such as a display and keypad of the apparatus 600, the sensor component 614 may also detect a change in position of the apparatus 600 or a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 7 is a schematic diagram of a server in some embodiments of the present application. The server 700 may vary significantly depending on configuration or performance, and may include one or more Central Processing Units (CPUs) 722 (e.g., one or more processors) and memory 732, one or more storage media 730 (e.g., one or more mass storage devices) storing applications 742 or data 744. Memory 732 and storage medium 730 may be, among other things, transient storage or persistent storage. The program stored in the storage medium 730 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processor 722 may be configured to communicate with the storage medium 730, and execute a series of instruction operations in the storage medium 730 on the server 700.
The server 700 may also include one or more power supplies 726, one or more wired or wireless network interfaces 750, one or more input-output interfaces 758, one or more keyboards 756, and/or one or more operating systems 741, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform an input method, the method comprising: detecting whether the user has an augmented writing intention based on input related information of the user; if the augmented writing intention exists, acquiring the associated content associated with the augmented writing intention of the user; integrating the input content of the user with the acquired associated content to generate augmented content; and displaying the generated at least one piece of augmented content.
Optionally, the presenting the generated at least one augmented written content includes: displaying expanded writing prompt information, wherein the expanded writing prompt information comprises at least one of the following items: identification, keys, text chains; and when detecting that the user triggers the augmentation writing prompt information, presenting an augmentation writing content panel, and displaying the generated at least one augmentation writing content in the augmentation writing content panel.
Optionally, the device is configured such that the one or more programs, when executed by the one or more processors, include instructions for: when detecting that the user triggers a refresh function on the displayed augmented content, replacing the displayed augmented content; or when detecting that the user triggers any piece of displayed augmented content, taking the triggered augmented content as the target content, and displaying the target content on a screen or sending it.
Optionally, the input related information includes the input content of the user; and the user's augmented writing intention is detected by: extracting keywords from the input content, and acquiring synonyms of the keywords; determining input features based on the keywords and the synonyms; acquiring user features of the user; and inputting the input features and the user features into a pre-trained augmented writing intention detection model to generate an augmented writing intention detection result for the user.
Optionally, the obtaining the associated content associated with the augmented writing intention of the user includes: and acquiring associated content associated with the user's augmented writing intention from a first content library and a second content library respectively, wherein the content in the first content library is created manually in advance, and the content in the second content library is searched from the Internet in advance to generate the content.
Optionally, the integrating the input content of the user with the acquired associated content to generate the augmented content includes: determining the similarity between the input content and each piece of associated content using a text matching algorithm; filtering out the associated content whose similarity to the input content is smaller than a preset threshold; splicing the input content with each piece of filtered associated content to generate at least one piece of spliced content; and sorting all the spliced content, and selecting a preset number of pieces of spliced content as the augmented content according to the sorted order.
Optionally, the sorting the spliced contents, and selecting a preset number of spliced contents as the augmented content according to the sorting order, includes: extracting input features of the user from the input content; acquiring historical behavior characteristics of the user and scene characteristics of a current input scene; acquiring text characteristics of each spliced content; inputting the input features, the historical behavior features, the scene features and the text features into a pre-trained sequencing model to obtain scores of all spliced contents; sequencing all spliced contents based on the scores of all the spliced contents; and selecting a preset number of spliced contents as the expanded contents according to the sequencing order.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
The present application provides an input method, an input device and an input device, and the principles and embodiments of the present application are described herein using specific examples, and the descriptions of the above examples are only used to help understand the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An input method, characterized in that the method comprises:
detecting whether the user has an augmented writing intention based on input related information of the user;
if the augmented writing intention exists, acquiring the associated content associated with the augmented writing intention of the user;
integrating the input content of the user with the acquired associated content to generate expanded writing content;
and displaying the generated at least one augmented writing content.
2. The method of claim 1, wherein the exposing the generated at least one augmented written content comprises:
displaying expanded writing prompt information, wherein the expanded writing prompt information comprises at least one of the following items: identification, keys, text chains;
and when detecting that the user triggers the augmentation writing prompt information, presenting an augmentation writing content panel, and displaying the generated at least one augmentation writing content in the augmentation writing content panel.
3. The method of claim 2, wherein after the displaying the generated at least one augmented writing content in the augmented writing content panel, the method further comprises:
when it is detected that the user triggers a refresh function for the displayed augmented writing content, replacing the displayed augmented writing content; or
when it is detected that the user triggers any one of the displayed augmented writing contents, taking the augmented writing content triggered by the user as target content, and uploading or sending the target content.
4. The method of claim 1, wherein the input-related information comprises input content of the user; and
the user's augmented writing intention is detected by the following steps:
extracting keywords from the input content, and acquiring similar words of the keywords;
determining input features based on the keywords and the similar words;
acquiring user features of the user; and
inputting the input features and the user features into a pre-trained augmented writing intention detection model to generate an augmented writing intention detection result for the user.
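A minimal sketch of the detection steps in claim 4, assuming toy stand-ins for the keyword extractor, the similar-word lexicon, and the pre-trained detection model, none of which the claim specifies:

```python
# Hedged sketch of claim 4's intention detection. The keyword extraction,
# similar-word lexicon, and "model" below are illustrative stand-ins.
def extract_keywords(text):
    # Stand-in extractor: words longer than three characters.
    return [w for w in text.split() if len(w) > 3]

def detect_intention(input_content, user_features, lexicon, model):
    keywords = extract_keywords(input_content)
    # Acquire similar words of the keywords from a hypothetical lexicon.
    similar = [s for k in keywords for s in lexicon.get(k, [])]
    # Determine input features from the keywords and similar words.
    input_features = {"keywords": keywords, "similar": similar}
    # Feed the input features and user features to the (pre-trained) model.
    return model(input_features, user_features)

def toy_model(input_features, user_features):
    # Stand-in for a trained classifier: fire on greeting-like vocabulary.
    vocab = set(input_features["keywords"]) | set(input_features["similar"])
    return bool(vocab & {"birthday", "congratulations"})

lexicon = {"bday": ["birthday"]}
print(detect_intention("happy bday friend", {"locale": "en"}, lexicon, toy_model))  # True
```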
5. The method of claim 1, wherein the acquiring associated content associated with the augmented writing intention of the user comprises:
acquiring associated content associated with the user's augmented writing intention from a first content library and a second content library, respectively, wherein the content in the first content library is created manually in advance, and the content in the second content library is generated in advance from Internet search results.
6. The method of claim 1, wherein the integrating the input content of the user with the acquired associated content to generate the augmented writing content comprises:
determining a similarity between the input content and each item of associated content using a text matching algorithm;
filtering out associated content whose similarity to the input content is smaller than a preset threshold;
splicing the input content with each item of the remaining associated content to generate at least one spliced content; and
sorting the spliced contents, and selecting a preset number of spliced contents as the augmented writing content in sorted order.
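The integration steps of claim 6 can be sketched as follows. Jaccard word overlap is an assumed stand-in for the unspecified text matching algorithm, and the threshold and selection count are arbitrary illustrative parameters:

```python
# Sketch of claim 6: similarity filtering, splicing, and selection.
def jaccard(a, b):
    # Assumed stand-in similarity: word-set overlap between two strings.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def integrate(input_content, associated, threshold=0.2, top_n=2):
    # Determine the similarity of the input content to each associated item.
    scored = [(jaccard(input_content, c), c) for c in associated]
    # Filter out items whose similarity falls below the preset threshold.
    scored = [(s, c) for s, c in scored if s >= threshold]
    # Splice the input content with each remaining associated content.
    spliced = [(s, input_content + " " + c) for s, c in scored]
    # Sort (here simply by similarity) and keep a preset number of results.
    spliced.sort(key=lambda x: x[0], reverse=True)
    return [text for _, text in spliced[:top_n]]

print(integrate("happy birthday to you",
                ["wishing you a happy year", "stock market report"]))
# ['happy birthday to you wishing you a happy year']
```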
7. The method according to claim 6, wherein the sorting the spliced contents and selecting a preset number of spliced contents as the augmented writing content in sorted order comprises:
extracting input features of the user from the input content;
acquiring historical behavior features of the user and scene features of a current input scene;
acquiring text features of each spliced content;
inputting the input features, the historical behavior features, the scene features, and the text features into a pre-trained ranking model to obtain a score for each spliced content;
sorting the spliced contents based on their scores; and
selecting a preset number of spliced contents as the augmented writing content in sorted order.
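Claim 7's scoring and sorting could be wired up as below; the linear combination is merely a stand-in for the unspecified pre-trained ranking model, and the four feature groups are assumed to be plain lists of numbers:

```python
# Sketch of claim 7: four feature groups are concatenated, scored by a
# (here linear, stand-in) ranking model, sorted, and truncated.
def toy_ranking_model(features):
    # Stand-in for a pre-trained ranking model: sum of feature values.
    return sum(features)

def rank_spliced(input_feats, behavior_feats, scene_feats,
                 candidates, model, top_n=2):
    scored = []
    for text, text_feats in candidates:
        # Concatenate input, historical behavior, scene, and text features.
        features = input_feats + behavior_feats + scene_feats + text_feats
        scored.append((model(features), text))
    # Sort the spliced contents by score and select a preset number of them.
    scored.sort(key=lambda x: x[0], reverse=True)
    return [text for _, text in scored[:top_n]]

print(rank_spliced([0.1], [0.2], [0.3],
                   [("a", [0.1]), ("b", [0.9]), ("c", [0.5])],
                   toy_ranking_model))  # ['b', 'c']
```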
8. An input device, the device comprising:
a detection unit configured to detect, based on input-related information of a user, whether the user has an augmented writing intention;
an acquisition unit configured to acquire, if the augmented writing intention exists, associated content associated with the augmented writing intention of the user;
an integration unit configured to integrate input content of the user with the acquired associated content to generate augmented writing content; and
a presentation unit configured to display the generated at least one augmented writing content.
9. An apparatus for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
detecting, based on input-related information of a user, whether the user has an augmented writing intention;
if the augmented writing intention exists, acquiring associated content associated with the augmented writing intention of the user;
integrating input content of the user with the acquired associated content to generate augmented writing content; and
displaying the generated at least one augmented writing content.
10. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
CN202010209469.1A 2020-03-23 2020-03-23 Input method, input device and input device Pending CN113434045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010209469.1A CN113434045A (en) 2020-03-23 2020-03-23 Input method, input device and input device


Publications (1)

Publication Number Publication Date
CN113434045A true CN113434045A (en) 2021-09-24

Family

ID=77753354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010209469.1A Pending CN113434045A (en) 2020-03-23 2020-03-23 Input method, input device and input device

Country Status (1)

Country Link
CN (1) CN113434045A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014199A * 2010-09-16 2011-04-13 Yulong Computer Telecommunication Scientific (Shenzhen) Co Ltd Information display method and terminal
CN108446316A * 2018-02-07 2018-08-24 Beijing Sankuai Online Technology Co Ltd Recommendation method and apparatus for associational words, electronic device and storage medium
CN110187780A * 2019-06-10 2019-08-30 Beijing Baidu Netcom Science and Technology Co Ltd Long text prediction method, apparatus, device and storage medium
CN110389667A * 2018-04-17 2019-10-29 Beijing Sogou Technology Development Co Ltd Input method and device

Similar Documents

Publication Publication Date Title
CN110998560A (en) Method and system for customizing suggestions using user-specific information
CN109582768B (en) Text input method and device
CN111708943A (en) Search result display method and device and search result display device
CN107665218B (en) Searching method and device and electronic equipment
CN108628461B (en) Input method and device and method and device for updating word stock
KR20210050484A (en) Information processing method, device and storage medium
CN108614830B (en) Search result display method and device
CN112052395B (en) Data processing method and device
CN112083811B (en) Candidate item display method and device
CN114610163A (en) Recommendation method, apparatus and medium
CN113434045A (en) Input method, input device and input device
CN112612442A (en) Input method and device and electronic equipment
CN114115550A (en) Method and device for processing association candidate
CN109426359B (en) Input method, device and machine readable medium
CN112306251A (en) Input method, input device and input device
CN108983992B (en) Candidate item display method and device with punctuation marks
CN114253404A (en) Input method, input device and input device
CN110929122A (en) Data processing method and device and data processing device
US20230196001A1 (en) Sentence conversion techniques
CN113342183A (en) Input method, input device and input device
CN113625885A (en) Input method, input device and input device
CN110716653B (en) Method and device for determining association source
CN110858100B (en) Method and device for generating association candidate words
CN114510154A (en) Input method, input device and input device
CN109271094B (en) Text editing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination