WO2019214132A1 - Information processing method, apparatus and device

Information processing method, apparatus and device

Info

Publication number
WO2019214132A1
WO2019214132A1 (PCT/CN2018/106729)
Authority
WO
WIPO (PCT)
Prior art keywords
entity
emotional
current application
application interface
picture
Prior art date
Application number
PCT/CN2018/106729
Other languages
English (en)
French (fr)
Inventor
宋雨濛
Original Assignee
北京金山安全软件有限公司
Priority date
Filing date
Publication date
Application filed by 北京金山安全软件有限公司 filed Critical 北京金山安全软件有限公司
Publication of WO2019214132A1 publication Critical patent/WO2019214132A1/zh
Priority to US16/792,368 priority Critical patent/US20200234008A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/279 - Recognition of textual entities
    • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 - Named entity recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/30 - Semantic analysis

Definitions

  • the present application relates to the field of information processing technologies, and in particular, to an information processing method, apparatus, and device.
  • when the user chats using an input method, the following scenario may arise: the user mentions something in the input content and wants to send content related to that thing to the recipient.
  • the mentioned thing is a named entity, and the sender hopes to send content related to the named entity to the recipient.
  • in the related art, to obtain related information about a named entity in a chat scenario, the user needs to exit the current application interface, open a browser to search for the named entity, and, after obtaining the related content, copy it or take a screenshot and send it to the recipient. This operation is cumbersome and the input efficiency is low.
  • the present application aims to solve at least one of the technical problems in the related art to some extent.
  • an object of the present application is to provide an information processing method, so that named-entity content can be displayed on the current application interface and sent to the target user without switching applications, thereby reducing the frequency with which the user switches applications during a chat and improving input efficiency.
  • a second object of the present application is to propose an information processing apparatus.
  • a third object of the present application is to propose a terminal device.
  • a fourth object of the present application is to propose a non-transitory computer readable storage medium.
  • the first aspect of the present application provides an information processing method, including: detecting whether the information input on the current application interface includes a named entity;
  • if the input information includes a named entity, acquiring the entity content corresponding to the named entity and displaying it on the current application interface; and acquiring a sending instruction and sending the entity content to the target user;
  • optionally, before the acquiring the entity content corresponding to the named entity and displaying it, the method further includes: acquiring an entity type corresponding to the named entity; and displaying, on the current application interface, an entity icon corresponding to the entity type;
  • the acquiring the entity content corresponding to the named entity and displaying it on the current application interface includes: detecting whether the user performs a trigger operation on the entity icon; and, if a trigger operation on the entity icon is detected, acquiring the entity content corresponding to the named entity and displaying it on the current application interface;
  • optionally, the method further includes: detecting whether a sending instruction is acquired within a preset time; and, if no sending instruction is acquired, canceling the display of the entity content;
  • optionally, the method further includes: detecting whether the information input on the current application interface includes an emotional entity; if the input information includes an emotional entity, acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface; and acquiring a sending instruction and sending the emotional picture to the target user;
  • optionally, before the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further includes: displaying a picture identifier on the current application interface;
  • the acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface includes: detecting whether the user performs a trigger operation on the picture identifier; and, if a trigger operation on the picture identifier is detected, acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface;
  • optionally, the method further includes: detecting whether a sending instruction is acquired within a preset time; and, if no sending instruction is acquired, canceling the display of the emotional picture.
  • the second aspect of the present application provides an information processing apparatus, including:
  • a first detecting module configured to detect whether the information input by the current application interface includes a named entity
  • a first display module configured to: when the input information is detected to include a named entity, acquire the entity content corresponding to the named entity and display it on the current application interface;
  • the first sending module is configured to acquire a sending instruction, and send the entity content to the target user.
  • the device further includes:
  • the second display module is configured to acquire an entity type corresponding to the named entity, and display an entity icon corresponding to the entity type in the current application interface.
  • the first display module is specifically configured to detect whether the user performs a trigger operation on the entity icon; if a trigger operation on the entity icon is detected, the entity content corresponding to the named entity is acquired and displayed on the current application interface.
  • the first display module is further configured to: detect whether a sending instruction is acquired within a preset time; if no sending instruction is acquired, the display of the entity content is cancelled.
  • the device further includes:
  • a second detecting module configured to detect whether the information input on the current application interface includes an emotional entity;
  • a third display module configured to: if the input information is detected to include an emotional entity, acquire an emotional picture corresponding to the emotional entity and display it on the current application interface;
  • the second sending module is configured to acquire a sending instruction, and send the emotional picture to the target user.
  • the device further includes:
  • the fourth display module is configured to display a picture identifier on the current application interface.
  • the third display module is specifically configured to detect whether the user performs a trigger operation on the picture identifier; if a trigger operation on the picture identifier is detected, the emotional picture corresponding to the emotional entity is acquired and displayed on the current application interface.
  • the third display module is further configured to: detect whether a sending instruction is acquired within a preset time; if no sending instruction is acquired, the display of the emotional picture is cancelled.
  • the third aspect of the present application provides a terminal device including a processor and a memory, wherein the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code, so as to implement the information processing method described in the first aspect.
  • the fourth aspect of the present application provides a non-transitory computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the information processing method described in the first aspect.
  • according to the above aspects, when the input information includes a named entity, the entity content corresponding to the named entity is acquired and displayed on the current application interface, and a sending instruction is then acquired to send the entity content. Thus, the named-entity content is displayed on the current application interface and sent to the target user, which reduces the frequency with which the user switches applications during a chat and improves input efficiency.
  • FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of displaying entity content according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of another information processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of displaying an entity icon according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of another information processing method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of displaying a picture identifier according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 14 shows a block diagram of an exemplary terminal device suitable for implementing embodiments of the present application.
  • FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure. As shown in FIG. 1 , the information processing method includes:
  • Step 101 Detect whether a named entity is included in the information input on the current application interface.
  • the information processing method of the embodiments of the present application can be applied to a terminal device such as a smart phone, a tablet computer, a personal digital assistant, or a wearable device.
  • when an application (for example, QQ, WeChat, etc.) is running on the terminal device, the information input on the current application interface may be recognized and analyzed according to a semantic analysis algorithm, thereby identifying whether the information input on the current application interface includes a named entity.
  • alternatively, named entities may be stored locally or on a cloud server, and the stored named entities are matched directly against the input information according to an NER (Named Entity Recognition) algorithm, so as to identify the named entity in the information input on the current application interface.
  • the named entity may be a person name, an organization name, a place name, and all other entities identified by a name, or may be a song name, a movie name, a date, a currency, an address, and the like.
  • the semantic analysis algorithm and the NER algorithm may be set locally on the terminal device or may be set in the cloud server.
  • the server can be requested, after the input is completed, to detect whether the currently input information includes a named entity; alternatively, during input, once characters of a predetermined length have been entered, the server can be requested to detect whether the currently input information includes a named entity. This is not limited here.
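The dictionary-matching detection described above can be sketched as follows. This is a minimal illustration only: the stored entity list, the length threshold, and names such as `detect_named_entities` are assumptions for the sketch, not from the patent.

```python
# Minimal sketch of dictionary-based named-entity detection: stored named
# entities are matched directly against the input text once the input
# reaches a predetermined length. All names and data are illustrative.

STORED_ENTITIES = {
    "Kingsman 2": "movie name",
    "Elon Musk": "person name",
    "West Hollywood": "place name",
}

MIN_REQUEST_LENGTH = 5  # the "predetermined length" before detection runs


def detect_named_entities(text):
    """Return (entity, entity_type) pairs found in the input text."""
    if len(text) < MIN_REQUEST_LENGTH:
        return []  # input still too short; defer the detection request
    return [(e, t) for e, t in STORED_ENTITIES.items() if e in text]
```

A full implementation would run this matching either locally or on the cloud server, as the text notes; the choice is not limited by the patent.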
  • Step 102 If it is detected that the input information includes a named entity, the entity content corresponding to the named entity is obtained and displayed on the current application interface.
  • for example, a database may be set up in advance locally or on a cloud server, with named entities and their corresponding entity content stored in the database. When the input information is detected to include a named entity, the detected named entity is matched against the named entities in the database, the entity content corresponding to the successfully matched named entity is acquired, and it is displayed on the current application interface.
  • for example, the information "Would you like to watch Kingsman 2 together?" input on the current application interface includes the named entity "Kingsman 2", which is then matched against the named entities stored in the database.
  • a named-entity button is generated on the keyboard interface of the input method; by clicking the button, the entity content corresponding to "Kingsman 2" can be acquired and displayed on the current application interface.
  • as another possible implementation, a search engine is invoked through the background calling interface of the terminal device, and the named entity is searched directly by the search engine to obtain the corresponding entity content, which is then displayed on the current application interface.
  • the entity content can be displayed on the keyboard interface of the input method, in other areas according to actual needs, or in a floating layer; this is not limited here.
  • the entity content may be displayed directly after it is acquired, or a named-entity button may be generated in the application interface and the corresponding entity content acquired and displayed by triggering the button; this is not limited here.
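A sketch of step 102 under the description above: entity content is looked up in a preset database, with the search engine used when the database has no match. Note the fallback ordering is an assumption here (the text presents the two as alternative implementations), and all names are hypothetical.

```python
# Illustrative sketch of step 102: look up entity content in a preset
# database, falling back to a (stubbed) search-engine call when no match
# is stored. Names and data are invented for illustration.

ENTITY_CONTENT_DB = {
    "Kingsman 2": "Kingsman: The Golden Circle, 2017 action-comedy sequel",
}


def search_engine_lookup(entity):
    # Stand-in for invoking a search engine through the terminal device's
    # background calling interface, as described above.
    return "search results for " + entity


def get_entity_content(entity):
    content = ENTITY_CONTENT_DB.get(entity)
    if content is None:  # no matching named entity stored in the database
        content = search_engine_lookup(entity)
    return content  # the caller displays this on the current interface
```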
  • Step 103 Acquire a sending instruction to send the entity content to the target user.
  • the implementation manner of the sending instruction includes, but is not limited to, a click instruction, a voice instruction, and the like, and is not limited herein.
  • since the entity content has already been displayed on the current application interface, the entity content is sent to the target user by acquiring the sending instruction, so that the entity content is acquired and sent without switching applications during the chat. Input efficiency is improved, and the user can send the entity content directly without copying or taking screenshots, which simplifies operation.
  • further, after the entity content is displayed, it may also be detected whether a sending instruction is acquired within a preset time; if no sending instruction is acquired within the preset time, the display of the entity content is cancelled. This prevents the entity content from occupying the display interface for a long time, thereby further improving the user experience.
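The preset-time rule above can be sketched with a timer: if no sending instruction arrives within the timeout, the displayed content is cancelled. The class and method names are illustrative, not from the patent.

```python
import threading

# Sketch of the preset-time rule: displayed entity content is cancelled
# when no sending instruction arrives before the timeout expires.


class EntityContentView:
    def __init__(self, content, timeout):
        self.content = content
        self.displayed = True
        self._timer = threading.Timer(timeout, self._cancel)
        self._timer.start()

    def _cancel(self):
        # No sending instruction within the preset time: stop displaying,
        # so the content does not occupy interface space indefinitely.
        self.displayed = False

    def send(self):
        self._timer.cancel()  # a sending instruction arrived in time
        return self.content
```

The same mechanism would apply unchanged to the emotional pictures discussed later, since the patent states the identical rule for both.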
  • the information processing method of the embodiments of the present application detects whether the information input on the current application interface includes a named entity; when the input information includes a named entity, the entity content corresponding to the named entity is acquired and displayed on the current application interface, and a sending instruction is then acquired to send the entity content to the target user. Therefore, without switching applications, the named-entity content is displayed on the current application interface and sent to the target user, which reduces the frequency with which the user switches applications during a chat and improves input efficiency.
  • in a possible implementation, the entity type corresponding to the named entity may be acquired, and an entity icon corresponding to the entity type displayed on the current application interface.
  • FIG. 3 is a schematic flowchart of another information processing method according to an embodiment of the present disclosure. As shown in FIG. 3, after detecting the input information, the information processing method further includes:
  • Step 201 Obtain an entity type corresponding to the named entity.
  • for example, a database may be set up in advance locally on the terminal device or on a cloud server, with named entities and their corresponding entity types stored in the database. When the input information is detected to include a named entity, the detected named entity is matched against the named entities in the database to obtain the entity type corresponding to the successfully matched named entity.
  • the entity type includes but is not limited to a person name, a place name, a song name, a movie name, and the like.
  • Step 202 Display an entity icon corresponding to the entity type in the current application interface.
  • for example, a mapping relationship table may be preset in the terminal device or the cloud server, storing the correspondence between entity types and entity icons. After the entity type is obtained, the mapping relationship table is queried to obtain the entity icon for that entity type, and the entity icon can then be displayed on the current application interface.
  • for example, the named entity is "Elon Musk"; the corresponding entity type is obtained as a person name, and an entity icon representing a person name is obtained and displayed on the input method keyboard interface. For example, as shown in FIG. 5, the named entity is "We are the brave"; the corresponding entity type is the song name, and an entity icon representing a song name is obtained and displayed on the input method keyboard interface. For example, as shown in FIG. 6, the named entity is "Coco"; the corresponding entity type is obtained as the movie name, and an entity icon representing a movie name is obtained and displayed on the input method keyboard interface. For example, as shown in FIG. 7, the named entity is "West Hollywood"; the corresponding entity type is obtained as the place name, and an entity icon representing a place name is obtained and displayed on the input method keyboard interface.
  • the entity icon can be displayed on the keyboard interface of the input method, in other areas according to actual needs, or in a floating layer; this is not limited here.
  • in this way, the entity type corresponding to the named entity is obtained and provided to the user in the form of an entity icon, so that the user can recognize it immediately, improving the user input experience.
  • it should be noted that one named entity may correspond to multiple entity types. For example, the named entity "Harry Potter" may be a movie name, a book title, or a person name; when the user is talking about a movie-related topic, he or she would prefer to acquire movie-related entity content, so entity icons corresponding to the multiple possible entity types may be displayed for the user to choose from.
  • the multiple entity icons may be displayed side by side or arranged in any manner according to actual needs; this is not limited here.
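The mapping-table lookup, including the multi-type case such as "Harry Potter", can be sketched as below. The icon file names, type lists, and function name are invented for illustration.

```python
# Sketch of the entity-type -> entity-icon mapping table, including one
# named entity with several possible types (and thus several icons).
# All entries are illustrative, not from the patent.

TYPE_TO_ICON = {
    "person name": "icon_person.png",
    "place name": "icon_place.png",
    "song name": "icon_song.png",
    "movie name": "icon_movie.png",
    "book title": "icon_book.png",
}

ENTITY_TYPES = {
    "Elon Musk": ["person name"],
    "Harry Potter": ["movie name", "book title", "person name"],
}


def icons_for_entity(entity):
    """Query the mapping table; return one entity icon per possible type."""
    return [TYPE_TO_ICON[t] for t in ENTITY_TYPES.get(entity, [])]
```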
  • Step 203 Detect whether the user performs a trigger operation on the entity icon.
  • Step 204 If a trigger operation is performed on the entity icon, the entity content corresponding to the named entity is obtained and displayed on the current application interface.
  • the user may perform a trigger operation on the entity icon to obtain the corresponding entity content.
  • specifically, a related algorithm detects whether the user performs a trigger operation on the entity icon; after a trigger operation on the entity icon is detected, the entity content corresponding to the named entity is acquired and displayed on the current application interface. Since the entity icon occupies only a small amount of space, the interface can be made more attractive and the user input experience improved.
  • the user's trigger operation on the entity icon may be a click, a double click, a slide, etc., and is not limited here. It should be noted that the description in the foregoing embodiments of acquiring the entity content corresponding to the named entity and displaying it on the current application interface also applies to this embodiment, and details are not repeated here.
  • the information processing method of the embodiments of the present application acquires the entity type corresponding to the named entity, displays an entity icon corresponding to the entity type on the current application interface, and further detects whether the user performs a trigger operation on the entity icon; when a trigger operation on the entity icon is detected, the entity content corresponding to the named entity is acquired and displayed on the current application interface. Therefore, when the information includes a named entity, the entity type corresponding to the named entity is obtained and provided to the user in the form of an entity icon, so that the user can recognize it immediately, improving the user input experience.
  • in a possible implementation, an emotional entity in the input information may be detected, and when an emotional entity is detected, an emotional picture corresponding to the emotional entity is displayed on the current application interface.
  • FIG. 8 is a schematic flowchart of another information processing method according to an embodiment of the present disclosure. As shown in FIG. 8, the information processing method includes:
  • Step 301 Detect whether an emotion entity is included in the information input by the current application interface.
  • the information input by the current application interface may be identified and analyzed according to a semantic analysis algorithm, thereby identifying whether the information input by the current application interface includes an emotional entity.
  • alternatively, emotional entities may be stored locally or on a cloud server, and the stored emotional entities are matched directly against the input information according to the NER algorithm, so as to identify the emotional entity in the information input on the current application interface.
  • the emotional entity may be a greeting (such as "good night") or a word indicating a mood (such as "smile" or "sadness").
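Emotional-entity detection can be sketched the same way as named-entity matching: stored greetings and mood words are matched against the input, each mapping to an emotional picture (static or animated). The entries, file names, and function name are invented for illustration.

```python
# Illustrative sketch of emotional-entity detection: stored greetings and
# mood words are matched against the input text, and each maps to an
# emotional picture. All entries and file names are invented.

EMOTION_PICTURES = {
    "good night": "good_night.gif",
    "thank you": "thanks.gif",
    "smile": "smile.png",
    "sadness": "sad.png",
}


def detect_emotion_picture(text):
    """Return the picture for the first emotional entity found, or None."""
    lowered = text.lower()
    for entity, picture in EMOTION_PICTURES.items():
        if entity in lowered:
            return picture
    return None
```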
  • Step 302 If it is detected that the input information includes an emotional entity, obtain an emotional picture corresponding to the emotional entity and display it on the current application interface.
  • the emotional picture may be a static picture or a dynamic picture.
  • the picture identifier may be displayed on the current application interface.
  • for example, the detected information includes the emotional entity "Thank you", and a picture identifier "GIF" is then displayed on the current application interface, so that the user can recognize the emotional entity immediately, improving the user input experience.
  • further, when a trigger operation on the picture identifier is detected, the emotional picture corresponding to the emotional entity is acquired and displayed on the current application interface, thereby making the interface more attractive and improving the user input experience.
  • the triggering operation of the image identifier by the user may be a click, a double click, a slide, etc., and is not limited herein.
  • Step 303 Acquire a sending instruction, and send the emotional picture to the target user.
  • the implementation manner of the sending instruction includes, but is not limited to, a click instruction, a voice instruction, and the like, and is not limited herein.
  • since the emotional picture has already been displayed on the current application interface, the emotional picture is sent to the target user by acquiring the sending instruction, so that the emotional picture is acquired and sent without switching applications during the chat. Input efficiency is improved, and the user can send the emotional picture directly without copying or taking screenshots, which simplifies operation.
  • further, after the emotional picture corresponding to the emotional entity is displayed on the current application interface, it may also be detected whether a sending instruction is acquired within a preset time; if no sending instruction is acquired within the preset time, the display of the emotional picture is cancelled. This prevents the emotional picture from occupying the display interface for a long time, thereby further improving the user experience.
  • the information processing method of the embodiments of the present application detects whether the information input on the current application interface includes an emotional entity; when the input information includes an emotional entity, the emotional picture corresponding to the emotional entity is acquired and displayed on the current application interface, and a sending instruction is then acquired to send the emotional picture to the target user.
  • thereby, the emotional entity in the input information is detected, and the emotional picture is displayed on the current application interface and sent to the target user without switching applications, which reduces the frequency with which the user switches applications during a chat and improves input efficiency.
  • FIG. 10 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application. As shown in FIG. 10, the information processing apparatus includes: a first detecting module 100, a first display module 200, and a first sending module 300.
  • the first detecting module 100 is configured to detect whether the information input by the current application interface includes a named entity.
  • the first display module 200 is configured to: if the detected input information includes a named entity, acquire the entity content corresponding to the named entity and display it on the current application interface.
  • the first sending module 300 is configured to acquire a sending instruction, and send the entity content to the target user.
  • the first display module 200 is further configured to: detect whether a sending instruction is acquired within a preset time; if no sending instruction is acquired, the display of the entity content is cancelled.
  • the information processing apparatus provided in FIG. 11 further includes: a second display module 400.
  • the second display module 400 is configured to: obtain an entity type corresponding to the named entity; and display an entity icon corresponding to the entity type in the current application interface.
  • the first display module 200 is specifically configured to: detect whether the user triggers the entity icon; if the triggering operation on the entity icon is detected, the entity content corresponding to the named entity is obtained and displayed on the current application interface.
  • FIG. 12 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present disclosure. As shown in FIG. 12, the information processing apparatus includes: a second detection module 500, a third display module 600, and a second sending module 700.
  • the second detecting module 500 is configured to detect whether the information input by the current application interface includes an emotional entity.
  • the third display module 600 is configured to: if the detected input information includes an emotional entity, acquire an emotional picture corresponding to the emotional entity and display it on the current application interface.
  • the second sending module 700 is configured to acquire a sending instruction, and send the sentiment picture to the target user.
  • the third display module 600 is further configured to: detect whether a sending instruction is acquired within a preset time; if no sending instruction is acquired, the display of the emotional picture is cancelled.
  • the information processing apparatus provided in FIG. 13 further includes: a fourth display module 800.
  • the fourth display module 800 is configured to display a picture identifier on the current application interface.
  • the third display module 600 is specifically configured to: detect whether the user performs a trigger operation on the picture identifier; if a trigger operation on the picture identifier is detected, acquire the emotional picture corresponding to the emotional entity and display it on the current application interface.
  • the information processing apparatus of the embodiments of the present application detects whether the information input on the current application interface includes a named entity; when the input information includes a named entity, the entity content corresponding to the named entity is acquired and displayed on the current application interface, and a sending instruction is then acquired to send the entity content to the target user. Therefore, without switching applications, the named-entity content is displayed on the current application interface and sent to the target user, which reduces the frequency with which the user switches applications during a chat and improves input efficiency.
  • The present application further provides a terminal device including a processor and a memory, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the information processing method according to any of the foregoing embodiments.
  • The present application also provides a computer program product which, when the instructions in the computer program product are executed by a processor, implements the information processing method according to any of the foregoing embodiments.
  • The present application further provides a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the information processing method according to any of the foregoing embodiments.
  • FIG. 14 shows a block diagram of an exemplary terminal device suitable for implementing embodiments of the present application.
  • the terminal device 12 shown in FIG. 14 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present application.
  • terminal device 12 is represented in the form of a general purpose computing device.
  • the components of terminal device 12 may include, but are not limited to, one or more processors or processing units 16, system memory 28, and bus 18 that connects different system components, including system memory 28 and processing unit 16.
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • These architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MAC) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
  • Terminal device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by terminal device 12, including volatile and non-volatile media, removable and non-removable media.
  • Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32.
  • Terminal device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • Storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 14, commonly referred to as a "hard disk drive").
  • Although not shown in FIG. 14, a disk drive for reading and writing a removable non-volatile magnetic disk (e.g., a "floppy disk") may be provided, as well as an optical disc drive for reading and writing a removable non-volatile optical disc (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media).
  • In these cases, each drive can be coupled to bus 18 via one or more data medium interfaces.
  • Memory 28 can include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of the various embodiments of the present application.
  • A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more applications, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • Program module 42 typically performs the functions and/or methods of the embodiments described herein.
  • Terminal device 12 can also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer system/server 12 to communicate with one or more other computing devices.
  • This communication can take place via an input/output (I/O) interface 22.
  • Terminal device 12 can also communicate, through the network adapter 20, with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet).
  • Network adapter 20 communicates with the other modules of terminal device 12 via bus 18.
  • It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with terminal device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the methods mentioned in the foregoing embodiments.
  • The terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implicitly indicating the number of the technical features indicated.
  • Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • "A plurality of" means at least two, such as two, three, etc., unless specifically defined otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An information processing method, apparatus, and device, the method comprising: detecting whether the information input on the current application interface contains a named entity (101); if a named entity is detected in the input information, acquiring the entity content corresponding to the named entity and displaying it on the current application interface (102); and acquiring a sending instruction and sending the entity content to a target user (103). In this way, the named-entity content is displayed on the current application interface and sent to the target user without switching applications, reducing how often the user switches applications while chatting and improving input efficiency.

Description

Information processing method, apparatus, and device
Cross-reference to related application
This application claims priority to Chinese Patent Application No. 201810432516.1, entitled "Information processing method, apparatus, and device", filed on May 8, 2018 by Beijing Kingsoft Internet Security Software Co., Ltd.
Technical field
This application relates to the field of information processing technology, and in particular to an information processing method, apparatus, and device.
Background
When a user chats using an input method, the following scenario arises: the user mentions something in the input content and wishes to send content related to that thing to the recipient. For example, the thing is a named entity, and the sender wishes to send content related to that named entity to the recipient.
In the related art, in order to obtain information about a named entity that comes up in a chat, the user needs to leave the current application interface, open a browser, and search for the named entity; after obtaining the related content, the user copies it or takes a screenshot and sends it to the recipient. The operation is cumbersome and input efficiency is low.
Summary
This application aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, one object of this application is to propose an information processing method that displays named-entity content on the current application interface and sends it to a target user without switching applications, thereby reducing how often the user switches applications while chatting and improving input efficiency.
A second object of this application is to propose an information processing apparatus.
A third object of this application is to propose a terminal device.
A fourth object of this application is to propose a non-transitory computer-readable storage medium.
To achieve the above objects, an embodiment of the first aspect of this application proposes an information processing method, comprising:
detecting whether the information input on the current application interface contains a named entity;
if a named entity is detected in the input information, acquiring the entity content corresponding to the named entity and displaying it on the current application interface; and
acquiring a sending instruction and sending the entity content to a target user.
In addition, the information processing method according to the above embodiments of this application may further have the following additional technical features:
Optionally, before the acquiring the entity content corresponding to the named entity and displaying it on the current application interface, the method further comprises: acquiring an entity type corresponding to the named entity; and displaying, on the current application interface, an entity icon corresponding to the entity type.
The acquiring the entity content corresponding to the named entity and displaying it on the current application interface comprises: detecting whether the user performs a trigger operation on the entity icon; and
if a trigger operation on the entity icon is detected, acquiring the entity content corresponding to the named entity and displaying it on the current application interface.
Optionally, after the acquiring the entity content corresponding to the named entity and displaying it on the current application interface, the method further comprises:
detecting whether a sending instruction is acquired within a preset time; and
if it is determined that no sending instruction has been acquired, cancelling the display of the entity content.
Optionally, the method further comprises:
detecting whether the information input on the current application interface contains an emotional entity;
if an emotional entity is detected in the input information, acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface; and
acquiring a sending instruction and sending the emotional picture to the target user.
Optionally, before the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further comprises: displaying a picture identifier on the current application interface.
The acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface comprises: detecting whether the user performs a trigger operation on the picture identifier; and
if a trigger operation on the picture identifier is detected, acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface.
Optionally, after the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further comprises:
detecting whether a sending instruction is acquired within a preset time; and
if it is determined that no sending instruction has been acquired, cancelling the display of the emotional picture.
To achieve the above objects, an embodiment of the second aspect of this application proposes an information processing apparatus, comprising:
a first detecting module configured to detect whether the information input on the current application interface contains a named entity;
a first display module configured to, if a named entity is detected in the input information, acquire the entity content corresponding to the named entity and display it on the current application interface; and
a first sending module configured to acquire a sending instruction and send the entity content to a target user.
Optionally, the apparatus further comprises:
a second display module configured to acquire an entity type corresponding to the named entity, and to display, on the current application interface, an entity icon corresponding to the entity type.
The first display module is specifically configured to detect whether the user performs a trigger operation on the entity icon, and, if a trigger operation on the entity icon is detected, to acquire the entity content corresponding to the named entity and display it on the current application interface.
Optionally, the first display module is further configured to: detect whether a sending instruction is acquired within a preset time; and,
if it is determined that no sending instruction has been acquired, cancel the display of the entity content.
Optionally, the apparatus further comprises:
a second detecting module configured to detect whether the information input on the current application interface contains an emotional entity;
a third display module configured to, if an emotional entity is detected in the input information, acquire an emotional picture corresponding to the emotional entity and display it on the current application interface; and
a second sending module configured to acquire a sending instruction and send the emotional picture to the target user.
Optionally, the apparatus further comprises:
a fourth display module configured to display a picture identifier on the current application interface.
The third display module is specifically configured to detect whether the user performs a trigger operation on the picture identifier, and, if a trigger operation on the picture identifier is detected, to acquire the emotional picture corresponding to the emotional entity and display it on the current application interface.
Optionally, the third display module is further configured to: detect whether a sending instruction is acquired within a preset time; and, if it is determined that no sending instruction has been acquired, cancel the display of the emotional picture.
To achieve the above objects, an embodiment of the third aspect of this application proposes a terminal device comprising a processor and a memory, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the information processing method according to the embodiments of the first aspect.
To achieve the above objects, an embodiment of the fourth aspect of this application proposes a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the information processing method according to the embodiments of the first aspect.
The technical solutions provided by the embodiments of this application may include the following beneficial effects:
Whether the information input on the current application interface contains a named entity is detected; when a named entity is detected in the input information, the entity content corresponding to the named entity is acquired and displayed on the current application interface; a sending instruction is then acquired, and the entity content is sent to the target user. In this way, the named-entity content is displayed on the current application interface and sent to the target user without switching applications, reducing how often the user switches applications while chatting and improving input efficiency.
Additional aspects and advantages of this application will be set forth in part in the following description, will in part become apparent from the following description, or will be learned through practice of this application.
Brief description of the drawings
FIG. 1 is a schematic flowchart of an information processing method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of entity content display provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of another information processing method provided by an embodiment of this application;
FIGS. 4-7 are schematic diagrams of entity icon display provided by embodiments of this application;
FIG. 8 is a schematic flowchart of another information processing method provided by an embodiment of this application;
FIG. 9 is a schematic diagram of picture identifier display provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of an information processing apparatus provided by an embodiment of this application;
FIG. 11 is a schematic structural diagram of another information processing apparatus provided by an embodiment of this application;
FIG. 12 is a schematic structural diagram of another information processing apparatus provided by an embodiment of this application;
FIG. 13 is a schematic structural diagram of another information processing apparatus provided by an embodiment of this application;
FIG. 14 is a block diagram of an exemplary terminal device suitable for implementing embodiments of this application.
Detailed description
Embodiments of this application are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain this application; they should not be construed as limiting this application.
The information processing method, apparatus, and device of the embodiments of this application are described below with reference to the drawings.
FIG. 1 is a schematic flowchart of an information processing method provided by an embodiment of this application. As shown in FIG. 1, the information processing method includes:
Step 101: detect whether the information input on the current application interface contains a named entity.
The information processing method of the embodiments of this application can be applied to terminal devices such as smartphones, tablets, personal digital assistants, and wearable devices. When the user chats through an application on the terminal device (e.g., QQ, WeChat), whether the information input on the current application interface contains a named entity can be detected.
As one possible implementation, the information input on the current application interface can be recognized and analyzed according to a semantic analysis algorithm, so as to identify whether it contains a named entity.
As another possible implementation, named entities can be stored locally or on a cloud server, and then directly matched against the input information according to an NER (Named Entity Recognition) algorithm to recognize named entities in the information input on the current application interface. A named entity may be a person name, an organization name, a place name, or any other entity identified by a name; it may also be a song title, a movie title, a date, a currency, an address, and so on.
It should be noted that the semantic analysis algorithm and the NER algorithm may be deployed locally on the terminal device or on a cloud server. The server may be queried once for each character entered during input to detect whether the currently input information contains a named entity, or once for every preset number of characters entered; no limitation is imposed here.
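The dictionary-matching variant described above can be sketched in a few lines. This is an illustrative sketch only: the entity list, helper names, and the per-N-characters trigger cadence are assumptions, not part of the patent.

```python
# Hypothetical stored named-entity list; in the description this set may
# live locally or on a cloud server.
NAMED_ENTITIES = {"Kingsman 2", "Elon Musk", "Coco", "West Hollywood"}

def detect_named_entities(input_text, entities=NAMED_ENTITIES):
    """Return the stored entities that occur in the text typed so far."""
    lowered = input_text.lower()
    return [e for e in entities if e.lower() in lowered]

def should_query_server(input_text, every_n_chars=3):
    """Query the detection service once per N typed characters instead of
    on every keystroke (both cadences are mentioned in the text)."""
    return len(input_text) % every_n_chars == 0
```

For the example input of FIG. 2, `detect_named_entities("Would you like to watch Kingsman 2 together?")` would match only "Kingsman 2".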
Step 102: if a named entity is detected in the input information, acquire the entity content corresponding to the named entity and display it on the current application interface.
In one embodiment of this application, a database can be set up in advance, locally or on a cloud server, storing named entities and their corresponding entity content. When a named entity is detected in the input information, the detected named entity is matched against the named entities in the database, the entity content corresponding to the successfully matched named entity is acquired, and it is displayed on the current application interface.
For example, as shown in FIG. 2, it is detected that the information "Would you like to watch Kingsman 2 together?" input on the current application interface contains the named entity "Kingsman 2" (a movie title). "Kingsman 2" is then matched against the named entities stored in the database; after a successful match, a named-entity button is generated on the keyboard interface of the input method, and tapping this button acquires the entity content corresponding to "Kingsman 2" and displays it on the current application interface.
In one embodiment of this application, after a named entity is detected in the input information, a search engine can also be invoked through a background invocation interface of the terminal device; the named entity is then searched directly through the search engine to acquire the corresponding entity content and display it on the current application interface.
It should be noted that the entity content may be displayed on the keyboard interface of the input method, in another area set according to actual needs, or as a floating layer; no limitation is imposed here. The entity content may be displayed directly after it is acquired, or a named-entity button may be generated on the application interface, with the corresponding entity content acquired and displayed when the button is triggered; no limitation is imposed here.
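The two content-acquisition paths above (database lookup, search-engine fallback) can be sketched as one function. The table contents, function names, and fallback callback are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical entity -> content table; in the description this database
# may be local or on a cloud server.
ENTITY_CONTENT = {
    "Kingsman 2": "Kingsman 2 (movie): action-comedy sequel ...",
}

def get_entity_content(entity, db=ENTITY_CONTENT, search=None):
    """Return stored content for the entity; fall back to a search-engine
    callback when the database has no match (both paths appear above)."""
    if entity in db:
        return db[entity]
    if search is not None:
        return search(entity)  # e.g., invoke a search engine in the background
    return None
```

A caller could pass any search function as the fallback, keeping the lookup logic independent of how the search engine is invoked.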
Step 103: acquire a sending instruction, and send the entity content to the target user.
The sending instruction may be implemented as, but is not limited to, a tap instruction, a voice instruction, etc.; no limitation is imposed here.
In this embodiment, since the entity content has already been displayed on the current application interface, acquiring a sending instruction and sending the entity content to the target user makes it possible to obtain and send entity content during a chat without switching applications, improving input efficiency; moreover, the user can send the content directly without copying it or taking a screenshot, simplifying the operation.
In one embodiment of this application, after the entity content is displayed on the current application interface, it can further be detected whether a sending instruction is acquired within a preset time; if no sending instruction is acquired within the preset time, the display of the entity content is cancelled. Thus, when no sending instruction is acquired within the preset time, the entity content is no longer displayed, preventing it from occupying display space for a long time and further improving the user experience.
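The preset-time cancellation above can be sketched as a simple polling timeout. The timeout value, polling interval, and polling approach are assumptions; a real input method would more likely use an event callback or UI timer.

```python
import time

def await_send_instruction(poll_send, timeout_s=5.0, interval_s=0.1):
    """Poll for a send instruction for a preset time; if none arrives
    before the deadline, signal the caller to cancel the displayed
    entity content. (Timeout and interval values are illustrative.)"""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if poll_send():
            return True   # instruction received: keep content and send it
        time.sleep(interval_s)
    return False          # no instruction within preset time: cancel display
```

The caller would dismiss the entity content whenever this returns `False`.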
It can be understood that, during a chat, there are scenarios where a user wishes to send the other party information such as the content and introduction of something. For example, when talking about a movie, the user may wish to send information about that movie to the other party; the user then has to leave the chat interface, search for the movie through a search engine, and copy or screenshot the search results before sending them, which is cumbersome and gives a poor user experience. Therefore, the information processing method of the embodiments of this application detects whether the information input on the current application interface contains a named entity; when a named entity is detected in the input information, it acquires the entity content corresponding to the named entity and displays it on the current application interface, then acquires a sending instruction and sends the entity content to the target user. In this way, the named-entity content is displayed on the current application interface and sent to the target user without switching applications, reducing how often the user switches applications while chatting and improving input efficiency.
Based on the above embodiments, further, when a named entity is detected in the input information, the entity type corresponding to the named entity can also be acquired and displayed on the current application interface.
FIG. 3 is a schematic flowchart of another information processing method provided by an embodiment of this application. As shown in FIG. 3, after a named entity is detected in the input information, the information processing method further includes:
Step 201: acquire the entity type corresponding to the named entity.
In one embodiment of this application, a database can be set up in advance, locally on the terminal device or on a cloud server, storing named entities and their corresponding entity types. When a named entity is detected in the input information, the detected named entity is matched against the named entities in the database, and the entity type corresponding to the successfully matched named entity is acquired.
Entity types include, but are not limited to, person names, place names, song titles, movie titles, etc.
Step 202: display, on the current application interface, an entity icon corresponding to the entity type.
In one embodiment of this application, a mapping table can be set up in advance, locally on the terminal device or on a cloud server, storing the correspondence between entity types and entity icons. After the entity type is acquired, the corresponding entity icon is obtained by querying the mapping table and can then be displayed on the current application interface. For example, as shown in FIG. 4, the named entity is "Elon Musk"; its entity type is a person name, so the entity icon representing a person name is acquired and displayed on the keyboard interface of the input method. As another example, as shown in FIG. 5, the named entity is "We are the brave"; its entity type is a song title, so the entity icon representing a song title is acquired and displayed on the keyboard interface of the input method. As another example, as shown in FIG. 6, the named entity is "Coco"; its entity type is a movie title, so the entity icon representing a movie title is acquired and displayed on the keyboard interface of the input method. As another example, as shown in FIG. 7, the named entity is "West Hollywood"; its entity type is a place name, so the entity icon representing a place name is acquired and displayed on the keyboard interface of the input method.
It should be noted that the entity icon may be displayed on the keyboard interface of the input method, in another area set according to actual needs, or as a floating layer; no limitation is imposed here.
Thus, when a named entity is detected in the input information, the entity type corresponding to the named entity is acquired and provided to the user in the form of an entity icon, which the user can recognize at a glance, improving the input experience.
Further, since the same named entity may have different meanings (for example, the named entity "Harry Potter" may be a movie title, a book title, or a person name), when the user is talking about movies, the user is more likely to want the entity content of the movie "Harry Potter". Therefore, when multiple entity types are acquired for the same named entity, the entity icon corresponding to each entity type can be acquired, and the multiple entity icons can be displayed on the current application interface, so that same-named entities can be distinguished more intuitively and the user can choose among them. The multiple entity icons may be displayed side by side, in parallel, or in any arrangement according to actual needs; no limitation is imposed here.
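The two lookup tables above (entity to type, type to icon), including the ambiguous-entity case, can be sketched as follows. The table contents and icon names are illustrative assumptions.

```python
# Hypothetical entity -> type table; an ambiguous entity maps to several
# types, each of which gets its own icon on the interface.
ENTITY_TYPES = {
    "Elon Musk": ["person"],
    "We are the brave": ["song"],
    "Harry Potter": ["movie", "book", "person"],  # same name, three meanings
}

# Hypothetical type -> icon mapping table.
TYPE_ICONS = {
    "person": "icon_person",
    "song": "icon_song",
    "movie": "icon_movie",
    "book": "icon_book",
}

def icons_for_entity(entity):
    """Return one icon per entity type, so same-named entities can be
    shown side by side for the user to choose from."""
    return [TYPE_ICONS[t] for t in ENTITY_TYPES.get(entity, [])]
```

For "Harry Potter" this yields three icons, displayed together so the user can pick the intended meaning.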
Step 203: detect whether the user performs a trigger operation on the entity icon.
Step 204: if a trigger operation on the entity icon is detected, acquire the entity content corresponding to the named entity and display it on the current application interface.
In one embodiment of this application, the user can perform a trigger operation on the entity icon to obtain the corresponding entity content. After the entity icon is displayed on the current application interface, a relevant algorithm detects whether the user performs a trigger operation on the entity icon; after a trigger operation on the entity icon is detected, the entity content corresponding to the named entity is acquired and displayed on the current application interface. Since the entity icon occupies only a small amount of space, the interface looks cleaner and the input experience is improved.
The user's trigger operation on the entity icon may be a single tap, a double tap, a swipe, etc.; no limitation is imposed here. It should be noted that the explanations in the foregoing embodiments of acquiring the entity content corresponding to the named entity and displaying it on the current application interface also apply to this embodiment and are not repeated here.
In the information processing method of the embodiments of this application, the entity type corresponding to the named entity is acquired, the entity icon corresponding to the entity type is displayed on the current application interface, whether the user performs a trigger operation on the entity icon is detected, and, when a trigger operation on the entity icon is detected, the entity content corresponding to the named entity is acquired and displayed on the current application interface. Thus, when a named entity is detected in the input information, the entity type corresponding to the named entity is acquired and provided to the user in the form of an entity icon, which the user can recognize at a glance, improving the input experience.
Based on the above embodiments, further, emotional entities in the input information can also be detected, and, when an emotional entity is detected, the emotional picture corresponding to the emotional entity is displayed on the current application interface.
FIG. 8 is a schematic flowchart of another information processing method provided by an embodiment of this application. As shown in FIG. 8, the information processing method includes:
Step 301: detect whether the information input on the current application interface contains an emotional entity.
As one possible implementation, the information input on the current application interface can be recognized and analyzed according to a semantic analysis algorithm, so as to identify whether it contains an emotional entity.
As another possible implementation, emotional entities are stored locally or on a cloud server, and then directly matched against the input information according to an NER algorithm to recognize emotional entities in the information input on the current application interface.
An emotional entity may be a greeting (e.g., "good night") or a word expressing a mood (e.g., "smile", "sad"), etc.
Step 302: if an emotional entity is detected in the input information, acquire the emotional picture corresponding to the emotional entity and display it on the current application interface.
It should be noted that the explanations in the foregoing embodiments of acquiring the entity content corresponding to the named entity and displaying it on the current application interface also apply to acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface in this embodiment, and are not repeated here.
The emotional picture may be a static picture or an animated picture.
In one embodiment of this application, when an emotional entity is detected in the input information, a picture identifier can be displayed on the current application interface. For example, as shown in FIG. 9, the emotional entity "Thank you" is detected in the input information, and the picture identifier "GIF" is then displayed on the current application interface, so that the user can recognize the emotional entity at a glance, improving the input experience.
Further, it can also be detected whether the user performs a trigger operation on the picture identifier; when a trigger operation on the picture identifier is detected, the emotional picture corresponding to the emotional entity is acquired and displayed on the current application interface, making the interface cleaner and improving the input experience.
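The two-step emotional-entity flow described above (detect the phrase, then fetch the picture only after the identifier is tapped) can be sketched as follows. The phrase table, picture filenames, and function names are illustrative assumptions.

```python
# Hypothetical emotional entity -> picture table; greetings and mood
# words both count as emotional entities in the description.
EMOTION_PICTURES = {
    "thank you": "thanks.gif",
    "good night": "goodnight.gif",
}

def detect_emotional_entity(input_text):
    """Return the first stored emotional entity found in the input,
    or None. Finding one triggers display of the 'GIF' identifier."""
    lowered = input_text.lower()
    for phrase in EMOTION_PICTURES:
        if phrase in lowered:
            return phrase
    return None

def on_picture_identifier_tapped(entity):
    """Only fetch the picture after the user taps the identifier,
    keeping the interface uncluttered until the user opts in."""
    return EMOTION_PICTURES.get(entity)
```

Detection alone shows only the small identifier; the (larger) picture appears only on the user's explicit tap.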
The user's trigger operation on the picture identifier may be a single tap, a double tap, a swipe, etc.; no limitation is imposed here.
Step 303: acquire a sending instruction, and send the emotional picture to the target user.
The sending instruction may be implemented as, but is not limited to, a tap instruction, a voice instruction, etc.; no limitation is imposed here.
In this embodiment, since the emotional picture has already been displayed on the current application interface, acquiring a sending instruction and sending the emotional picture to the target user makes it possible to obtain and send emotional pictures during a chat without switching applications, improving input efficiency; moreover, the user can send the picture directly without copying it or taking a screenshot, simplifying the operation.
In one embodiment of this application, after the emotional picture corresponding to the emotional entity is displayed on the current application interface, it can further be detected whether a sending instruction is acquired within a preset time; if no sending instruction is acquired within the preset time, the display of the emotional picture is cancelled. Thus, when no sending instruction is acquired within the preset time, the emotional picture is no longer displayed, preventing it from occupying display space for a long time and further improving the user experience.
In the information processing method of the embodiments of this application, whether the information input on the current application interface contains an emotional entity is detected; when an emotional entity is detected in the input information, the emotional picture corresponding to the emotional entity is acquired and displayed on the current application interface; a sending instruction is then acquired, and the emotional picture is sent to the target user. In this way, emotional entities in the input information are detected, and the emotional picture is displayed on the current application interface and sent to the target user without switching applications, reducing how often the user switches applications while chatting and improving input efficiency.
To implement the above embodiments, this application further proposes an information processing apparatus. FIG. 10 is a schematic structural diagram of an information processing apparatus provided by an embodiment of this application. As shown in FIG. 10, the information processing apparatus includes: a first detecting module 100, a first display module 200, and a first sending module 300.
The first detecting module 100 is configured to detect whether the information input on the current application interface contains a named entity.
The first display module 200 is configured to, if a named entity is detected in the input information, acquire the entity content corresponding to the named entity and display it on the current application interface.
The first sending module 300 is configured to acquire a sending instruction and send the entity content to the target user.
Further, the first display module 200 is also configured to: detect whether a sending instruction is acquired within a preset time; and, if it is determined that no sending instruction has been acquired, cancel the display of the entity content.
On the basis of FIG. 10, the information processing apparatus provided in FIG. 11 further includes: a second display module 400.
The second display module 400 is configured to: acquire the entity type corresponding to the named entity; and display, on the current application interface, the entity icon corresponding to the entity type.
The first display module 200 is specifically configured to: detect whether the user performs a trigger operation on the entity icon; and, if a trigger operation on the entity icon is detected, acquire the entity content corresponding to the named entity and display it on the current application interface.
FIG. 12 is a schematic structural diagram of another information processing apparatus provided by an embodiment of this application. As shown in FIG. 12, the information processing apparatus includes: a second detecting module 500, a third display module 600, and a second sending module 700.
The second detecting module 500 is configured to detect whether the information input on the current application interface contains an emotional entity.
The third display module 600 is configured to, if an emotional entity is detected in the input information, acquire the emotional picture corresponding to the emotional entity and display it on the current application interface.
The second sending module 700 is configured to acquire a sending instruction and send the emotional picture to the target user.
Further, the third display module 600 is also configured to: detect whether a sending instruction is acquired within a preset time; and, if it is determined that no sending instruction has been acquired, cancel the display of the emotional picture.
On the basis of FIG. 12, the information processing apparatus provided in FIG. 13 further includes: a fourth display module 800.
The fourth display module 800 is configured to display a picture identifier on the current application interface.
The third display module 600 is specifically configured to: detect whether the user performs a trigger operation on the picture identifier; and, if a trigger operation on the picture identifier is detected, acquire the emotional picture corresponding to the emotional entity and display it on the current application interface.
It should be noted that the explanations of the information processing method in the foregoing embodiments also apply to the information processing apparatus of this embodiment and are not repeated here.
The information processing apparatus of the embodiments of this application detects whether the information input on the current application interface contains a named entity; when a named entity is detected in the input information, it acquires the entity content corresponding to the named entity and displays it on the current application interface, then acquires a sending instruction and sends the entity content to the target user. In this way, the named-entity content is displayed on the current application interface and sent to the target user without switching applications, reducing how often the user switches applications while chatting and improving input efficiency.
To implement the above embodiments, this application further proposes a terminal device comprising a processor and a memory, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the information processing method according to any of the foregoing embodiments.
To implement the above embodiments, this application further proposes a computer program product which, when the instructions in the computer program product are executed by a processor, implements the information processing method according to any of the foregoing embodiments.
To implement the above embodiments, this application further proposes a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the information processing method according to any of the foregoing embodiments.
FIG. 14 shows a block diagram of an exemplary terminal device suitable for implementing embodiments of this application. The terminal device 12 shown in FIG. 14 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of this application.
As shown in FIG. 14, terminal device 12 is represented in the form of a general-purpose computing device. The components of terminal device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the different system components (including the system memory 28 and the processing unit 16).
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MAC) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Terminal device 12 typically includes a variety of computer-system-readable media. These media can be any available media that can be accessed by terminal device 12, including volatile and non-volatile media, removable and non-removable media.
Memory 28 may include computer-system-readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Terminal device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 14, commonly referred to as a "hard disk drive"). Although not shown in FIG. 14, a disk drive for reading and writing a removable non-volatile magnetic disk (e.g., a "floppy disk") may be provided, as well as an optical disc drive for reading and writing a removable non-volatile optical disc (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media). In these cases, each drive can be coupled to bus 18 via one or more data medium interfaces. Memory 28 can include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of the embodiments of this application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more applications, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. Program modules 42 typically perform the functions and/or methods of the embodiments described in this application.
Terminal device 12 can also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer system/server 12 to communicate with one or more other computing devices. This communication can take place via an input/output (I/O) interface 22. Moreover, terminal device 12 can also communicate, through the network adapter 20, with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet). As shown, network adapter 20 communicates with the other modules of terminal device 12 via bus 18. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with terminal device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Processing unit 16 executes various functional applications and data processing by running programs stored in system memory 28, for example implementing the methods mentioned in the foregoing embodiments.
In the description of this application, it should be understood that the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of this application, "a plurality of" means at least two, such as two, three, etc., unless specifically defined otherwise.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
Although embodiments of this application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting this application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of this application.

Claims (20)

  1. An information processing method, characterized by comprising the following steps:
    detecting whether the information input on the current application interface contains a named entity;
    if a named entity is detected in the input information, acquiring the entity content corresponding to the named entity and displaying it on the current application interface; and
    acquiring a sending instruction and sending the entity content to a target user.
  2. The method according to claim 1, characterized in that, before the acquiring the entity content corresponding to the named entity and displaying it on the current application interface, the method further comprises:
    acquiring an entity type corresponding to the named entity;
    displaying, on the current application interface, an entity icon corresponding to the entity type;
    wherein the acquiring the entity content corresponding to the named entity and displaying it on the current application interface comprises:
    detecting whether the user performs a trigger operation on the entity icon; and
    if a trigger operation on the entity icon is detected, acquiring the entity content corresponding to the named entity and displaying it on the current application interface.
  3. The method according to claim 1, characterized in that, after the acquiring the entity content corresponding to the named entity and displaying it on the current application interface, the method further comprises:
    detecting whether a sending instruction is acquired within a preset time; and
    if it is determined that no sending instruction has been acquired, cancelling the display of the entity content.
  4. The method according to claim 2, characterized in that, after the acquiring the entity content corresponding to the named entity and displaying it on the current application interface, the method further comprises:
    detecting whether a sending instruction is acquired within a preset time; and
    if it is determined that no sending instruction has been acquired, cancelling the display of the entity content.
  5. The method according to claim 1, characterized by further comprising:
    detecting whether the information input on the current application interface contains an emotional entity;
    if an emotional entity is detected in the input information, acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface; and
    acquiring a sending instruction and sending the emotional picture to a target user.
  6. The method according to claim 2, characterized by further comprising:
    detecting whether the information input on the current application interface contains an emotional entity;
    if an emotional entity is detected in the input information, acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface; and
    acquiring a sending instruction and sending the emotional picture to a target user.
  7. The method according to claim 3, characterized by further comprising:
    detecting whether the information input on the current application interface contains an emotional entity;
    if an emotional entity is detected in the input information, acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface; and
    acquiring a sending instruction and sending the emotional picture to a target user.
  8. The method according to claim 4, characterized by further comprising:
    detecting whether the information input on the current application interface contains an emotional entity;
    if an emotional entity is detected in the input information, acquiring an emotional picture corresponding to the emotional entity and displaying it on the current application interface; and
    acquiring a sending instruction and sending the emotional picture to a target user.
  9. The method according to claim 5, characterized in that, before the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further comprises:
    displaying a picture identifier on the current application interface;
    wherein the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface comprises:
    detecting whether the user performs a trigger operation on the picture identifier; and
    if a trigger operation on the picture identifier is detected, acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface.
  10. The method according to claim 6, characterized in that, before the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further comprises:
    displaying a picture identifier on the current application interface;
    wherein the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface comprises:
    detecting whether the user performs a trigger operation on the picture identifier; and
    if a trigger operation on the picture identifier is detected, acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface.
  11. The method according to claim 5, characterized in that, after the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further comprises:
    detecting whether a sending instruction is acquired within a preset time; and
    if it is determined that no sending instruction has been acquired, cancelling the display of the emotional picture.
  12. The method according to claim 9, characterized in that, after the acquiring the emotional picture corresponding to the emotional entity and displaying it on the current application interface, the method further comprises:
    detecting whether a sending instruction is acquired within a preset time; and
    if it is determined that no sending instruction has been acquired, cancelling the display of the emotional picture.
  13. An information processing apparatus, characterized by comprising:
    a first detecting module configured to detect whether the information input on the current application interface contains a named entity;
    a first display module configured to, if a named entity is detected in the input information, acquire the entity content corresponding to the named entity and display it on the current application interface; and
    a first sending module configured to acquire a sending instruction and send the entity content to a target user.
  14. The apparatus according to claim 13, characterized by further comprising:
    a second display module configured to acquire an entity type corresponding to the named entity, and to display, on the current application interface, an entity icon corresponding to the entity type;
    wherein the first display module is specifically configured to detect whether the user performs a trigger operation on the entity icon, and, if a trigger operation on the entity icon is detected, to acquire the entity content corresponding to the named entity and display it on the current application interface.
  15. The apparatus according to claim 13, characterized in that the first display module is further configured to:
    detect whether a sending instruction is acquired within a preset time; and
    if it is determined that no sending instruction has been acquired, cancel the display of the entity content.
  16. The apparatus according to claim 13, characterized by further comprising:
    a second detecting module configured to detect whether the information input on the current application interface contains an emotional entity;
    a third display module configured to, if an emotional entity is detected in the input information, acquire an emotional picture corresponding to the emotional entity and display it on the current application interface; and
    a second sending module configured to acquire a sending instruction and send the emotional picture to a target user.
  17. The apparatus according to claim 16, characterized by further comprising:
    a fourth display module configured to display a picture identifier on the current application interface;
    wherein the third display module is specifically configured to detect whether the user performs a trigger operation on the picture identifier, and, if a trigger operation on the picture identifier is detected, to acquire the emotional picture corresponding to the emotional entity and display it on the current application interface.
  18. The apparatus according to claim 16, characterized in that the third display module is further configured to:
    detect whether a sending instruction is acquired within a preset time; and
    if it is determined that no sending instruction has been acquired, cancel the display of the emotional picture.
  19. A terminal device, characterized by comprising a processor and a memory;
    wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the information processing method according to any one of claims 1-12.
  20. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the information processing method according to any one of claims 1-12.
PCT/CN2018/106729 2018-05-08 2018-09-20 Information processing method, apparatus and device WO2019214132A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/792,368 US20200234008A1 (en) 2018-05-08 2020-02-17 Information processing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810432516.1 2018-05-08
CN201810432516.1A CN108595438A (zh) Information processing method, apparatus and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/792,368 Continuation US20200234008A1 (en) 2018-05-08 2020-02-17 Information processing method and device

Publications (1)

Publication Number Publication Date
WO2019214132A1 true WO2019214132A1 (zh) 2019-11-14

Family

ID=63636207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/106729 WO2019214132A1 (zh) 2018-05-08 2018-09-20 信息处理方法、装置及设备

Country Status (3)

Country Link
US (1) US20200234008A1 (zh)
CN (1) CN108595438A (zh)
WO (1) WO2019214132A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11397770B2 (en) * 2018-11-26 2022-07-26 Sap Se Query discovery and interpretation
US11580310B2 (en) * 2019-08-27 2023-02-14 Google Llc Systems and methods for generating names using machine-learned models
CN111243700B (zh) * 2020-01-15 2023-09-29 创业慧康科技股份有限公司 一种电子病历输入方法及装置
CN112433623A (zh) * 2020-11-25 2021-03-02 维沃移动通信有限公司 显示方法和电子设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915204A (zh) * 2015-06-08 2015-09-16 Xiaomi Inc. Web page processing method and device
CN107315827A (zh) * 2017-07-05 2017-11-03 Guangzhou Alibaba Literature Information Technology Co., Ltd. Method and device for association recommendation in electronic reading
CN107609174A (zh) * 2017-09-27 2018-01-19 Zhuhai Meizu Technology Co., Ltd. Content retrieval method and device, terminal, and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130082578A (ko) * 2011-12-09 2013-07-22 Madison Avenue Co., Ltd. Advertising method based on keywords input in a chat window running on a user terminal
CN103760991B (zh) * 2014-01-13 2017-02-15 Beijing Sogou Technology Development Co., Ltd. Entity input method and device
CN104298429B (zh) * 2014-09-25 2018-05-04 Beijing Sogou Technology Development Co., Ltd. Input-based information display method and input method system
CN107193396B (zh) * 2017-05-31 2019-03-05 Vivo Mobile Communication Co., Ltd. Input method and mobile terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915204A (zh) * 2015-06-08 2015-09-16 Xiaomi Inc. Web page processing method and device
CN107315827A (zh) * 2017-07-05 2017-11-03 Guangzhou Alibaba Literature Information Technology Co., Ltd. Method and device for association recommendation in electronic reading
CN107609174A (zh) * 2017-09-27 2018-01-19 Zhuhai Meizu Technology Co., Ltd. Content retrieval method and device, terminal, and readable storage medium

Also Published As

Publication number Publication date
CN108595438A (zh) 2018-09-28
US20200234008A1 (en) 2020-07-23

Similar Documents

Publication Publication Date Title
US10904183B2 (en) Point in time expression of emotion data gathered from a chat session
CN107430858B (zh) 传送标识当前说话者的元数据
US10444971B2 (en) Displaying related content in a content stream
US10514876B2 (en) Gallery of messages from individuals with a shared interest
WO2019214132A1 (zh) Information processing method, apparatus and device
CN110168537B (zh) 上下文和社交距离感知的快速活性人员卡片
US20170357661A1 (en) Providing content items in response to a natural language query
CN106575361B (zh) 提供视觉声像的方法和实现该方法的电子设备
CN105453612B (zh) 消息服务提供装置以及经由其提供内容的方法
US9565223B2 (en) Social network interaction
US20110099464A1 (en) Mechanism for adding content from a search to a document or message
JP2017010567A (ja) イメージ・パニングおよびズーミング効果
JP7158478B2 (ja) 画像選択提案
US11430211B1 (en) Method for creating and displaying social media content associated with real-world objects or phenomena using augmented reality
US10897442B2 (en) Social media integration for events
US8954894B2 (en) Gesture-initiated symbol entry
JP2023554519A (ja) 電子文書の編集方法と装置及びコンピュータ機器とプログラム
KR20140099837A (ko) 터치 센서티브 디스플레이를 포함하는 컴퓨팅 디바이스에서 통신을 개시하는 방법 및 컴퓨팅 디바이스
US20230410811A1 (en) Augmented reality-based translation of speech in association with travel
WO2023274124A1 (zh) Information reply method and apparatus, electronic device, computer storage medium, and product
WO2019085625A1 (zh) Emoticon picture recommendation method and device
WO2018222423A1 (en) Task creation and completion with bi-directional user interactions
WO2021218680A1 (zh) Interactive information processing method and apparatus, electronic device, and storage medium
WO2017076027A1 (zh) Wallpaper processing method and device
CN107749892B (zh) Network reading method and device for meeting minutes, smart tablet, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918257

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18918257

Country of ref document: EP

Kind code of ref document: A1