CN110019906B - Method and apparatus for displaying information - Google Patents

Publication number
CN110019906B
Authority
CN
China
Prior art keywords
target
node
information
nodes
keywords
Prior art date
Legal status
Active
Application number
CN201711176106.7A
Other languages
Chinese (zh)
Other versions
CN110019906A (en)
Inventor
王德夫
刘鹏飞
张钰
魏苓
高志昌
马酩
王文韬
郑讯
刘超
狄涛
张高
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN201711176106.7A priority Critical patent/CN110019906B/en
Publication of CN110019906A publication Critical patent/CN110019906A/en
Application granted granted Critical
Publication of CN110019906B publication Critical patent/CN110019906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5846 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for displaying information. One embodiment of the method comprises: acquiring a target image of a target object captured by a terminal; extracting a text set from the target image; determining, from the text set, target keywords that match nodes in a target knowledge graph; and sending the target keywords to the terminal, where the terminal displays them on a real-time image of the target object in an augmented reality manner. This embodiment enriches the ways in which information is displayed.

Description

Method and apparatus for displaying information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to the technical field of internet, and particularly relates to a method and a device for displaying information.
Background
With the development of computers and the internet, users can acquire information by using terminals anytime and anywhere.
However, in existing information display approaches, a user either obtains information through the internet or directly looks at a real object; it is difficult to combine internet content with the physical object.
Disclosure of Invention
The embodiment of the application provides a method and a device for displaying information.
In a first aspect, an embodiment of the present application provides a method for displaying information, where the method includes: acquiring a target image of a target object captured by a terminal; extracting a text set from the target image; determining, from the text set, target keywords that match nodes in a target knowledge graph; and sending the target keywords to the terminal, where the terminal displays the target keywords on a real-time image of the target object in an augmented reality manner.
In a second aspect, an embodiment of the present application provides a method for displaying information, where the method includes: capturing a target image of a target object; sending the target image to a server; receiving a target keyword sent by the server, where the target keyword is obtained by the server through the following steps: acquiring the target image; extracting a text set from the target image; determining, from the text set, target keywords that match nodes in a target knowledge graph; and displaying the target keyword on a real-time image of the target object in an augmented reality manner.
In a third aspect, an embodiment of the present application provides an apparatus for displaying information, where the apparatus includes: a first acquiring unit, configured to acquire a target image of a target object captured by the terminal; an extracting unit, configured to extract a text set from the target image; a first determining unit, configured to determine, from the text set, a target keyword that matches a node in a target knowledge graph; and a first sending unit, configured to send the target keyword to the terminal, where the terminal displays the target keyword on a real-time image of the target object in an augmented reality manner.
In a fourth aspect, an embodiment of the present application provides an apparatus for displaying information, where the apparatus includes: an acquisition module, configured to capture a target image of a target object; a sending module, configured to send the target image to a server; a first receiving module, configured to receive a target keyword sent by the server, where the target keyword is obtained by the server through the following steps: acquiring the target image; extracting a text set from the target image; determining, from the text set, target keywords that match nodes in a target knowledge graph; and a first display module, configured to display the target keyword on a real-time image of the target object in an augmented reality manner.
In a fifth aspect, an embodiment of the present application provides a server, where the server includes: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a terminal, where the terminal includes: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method according to the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the program, when executed by a processor, implements the method according to the first aspect.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the method according to the second aspect.
In a ninth aspect, an embodiment of the present application provides a system for displaying information, including the server shown in the fifth aspect and the terminal shown in the second aspect.
The method and apparatus for displaying information acquire a target image of a target object captured by a terminal, extract a text set from the target image, determine, from the text set, target keywords that match nodes in a target knowledge graph, and send the target keywords to the terminal, which displays them on a real-time image of the target object in an augmented reality manner, thereby enriching the ways in which information can be displayed.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for displaying information according to the present application;
FIG. 3A is a schematic diagram of an application scenario according to an embodiment of the application;
FIG. 3B is a schematic diagram of another application scenario in accordance with an embodiment of the present application;
FIG. 3C is a schematic diagram of yet another application scenario in accordance with an embodiment of the present application;
FIG. 3D is a schematic diagram of yet another application scenario in accordance with an embodiment of the present application;
FIG. 4 is a flow chart of yet another embodiment of a method for displaying information according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of an apparatus for displaying information in accordance with the present application;
FIG. 6 is a schematic diagram of an arrangement of yet another embodiment of an apparatus for displaying information according to the present application;
FIG. 7 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for displaying information or the apparatus for displaying information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a library application, an instant messaging tool, an image acquisition application, a search application, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
The terminal devices 101, 102, and 103 may be various electronic devices having a display screen and supporting a video capture function, including but not limited to a smart phone, a tablet computer, an e-book reader, an MP3 player (Moving Picture Experts Group Audio Layer III, motion Picture Experts compression standard Audio Layer 3), an MP4 player (Moving Picture Experts Group Audio Layer IV, motion Picture Experts compression standard Audio Layer 4), a laptop portable computer, a desktop computer, and the like.
The server 105 may be a server providing various services, such as a background server providing support for library-like applications on the terminal devices 101, 102, 103. The background server may analyze and otherwise process data such as the knowledge point extraction request, and feed back a processing result (e.g., the knowledge points extracted from the video) to the terminal device.
It should be noted that the method for displaying information provided by the embodiment corresponding to the flow shown in fig. 2 of the present application is generally executed by the server 105, and accordingly, an apparatus for displaying information corresponding to the flow shown in fig. 2 is generally disposed in the server 105. The method for displaying information provided by the embodiment corresponding to the flow shown in fig. 4 of the present application is generally executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for displaying information corresponding to the flow shown in fig. 4 is generally disposed in the terminal devices 101, 102, 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for displaying information in accordance with the present application is shown. The method for displaying information comprises the following steps:
step 201, acquiring a target image of a target object acquired by a terminal.
In this embodiment, an electronic device (e.g., a server shown in fig. 1) on which the method for displaying information operates may acquire a target image of a target object captured by a terminal.
In this embodiment, the electronic device (e.g., a server) may acquire the target image locally or from another electronic device. For example, the electronic device may receive the target image directly from the terminal. It may also receive the target image from the terminal, store it in local memory, and then read it back from memory. It may also acquire the target image as forwarded from another server or electronic device.
In this embodiment, the terminal may be a terminal that the user uses to capture images.
In the present embodiment, the target object may be any object. As an example, it may be a printed text object, i.e., a physical real-world object such as a book, newspaper, or magazine.
In this embodiment, the target image may be a still image, such as a picture, or a dynamic image, such as a video. Here, when images change continuously at a rate above 24 frames per second, the human eye, by the principle of persistence of vision, can no longer distinguish the individual still pictures; the sequence appears as a smooth, continuous visual effect and is therefore called a video.
In this embodiment, the target image may be a video captured by the user of the target object. When the user sees text on the target object and wants to learn more about it, the terminal can be used to capture an image of the object; the captured video then naturally includes the image regions corresponding to that text.
Step 202, extract a text set from the target image.
In this embodiment, an electronic device (e.g., a server shown in fig. 1) on which the method for displaying information operates may extract a text set from the target image.
In this embodiment, the text set may include one or more pieces of text, in any language; for example, the text may be Chinese or English.
In this embodiment, extracting the text set from the target image may be achieved in various ways.
In some optional implementations of this embodiment, if the target image is a picture, the picture may be recognized by using an Optical Character Recognition (OCR) technique to generate a text set.
In some optional implementations of this embodiment, if the target image is a video, a text set may be extracted from it using a video text extraction technique. Although some vendors provide products that can extract text sets from video, most of these products target text at a specific position in the frame (for example, subtitles at the bottom of the video). As a result, existing off-the-shelf products perform poorly when extracting text from general-purpose video.
In some optional implementations of this embodiment, if the target image is a video, step 202 may be implemented by capturing video frames from the video and recognizing the captured frames with optical character recognition to generate the text set.
It should be noted that the video is split into frames, and the text in each frame is then recognized with OCR. As long as the video contains an image of the text, the text can be extracted no matter where in the frame it appears, which improves the accuracy of text extraction.
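The frame-capture and OCR steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; `ocr` stands in for a real OCR engine (e.g., Tesseract), and the function names are invented for this example.

```python
def sample_frame_indices(total_frames, fps, every_seconds=1.0):
    """Pick one frame index per `every_seconds` of video, so OCR need not run on every frame."""
    step = max(1, int(fps * every_seconds))
    return list(range(0, total_frames, step))

def extract_text_set(frames, ocr):
    """Run the given OCR callable over the captured frames and merge the recognized words."""
    text_set = set()
    for frame in frames:
        text_set.update(ocr(frame).split())
    return text_set
```

For a 4-second clip at 25 fps, `sample_frame_indices(100, 25)` selects frames 0, 25, 50, and 75; whichever frame contains the text, the recognized words end up in the merged text set.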
Step 203, determine, from the text set, target keywords that match nodes in the target knowledge graph.
In this embodiment, an electronic device (e.g., a server shown in fig. 1) on which the method for displaying information operates may determine, from the text set, target keywords that match nodes in the target knowledge graph.
In this embodiment, a knowledge graph (also called a scientific knowledge graph) can be understood as a semantic network formed by connected knowledge points. The network may be made up of nodes and node relationship information indicating the relationships between nodes. In a knowledge graph, nodes may be used to indicate entities. An entity may be described by a number of attributes, and associations between different entities can be established through those attributes.
In this embodiment, a node may include a node identifier and a node content, the node identifier may be used to record a location of the node in the knowledge-graph, and the node content may be used to indicate an entity. The node content may be the same as the name of the entity.
By way of example, please refer to FIG. 3A, which illustrates an exemplary knowledge graph. In the knowledge graph shown in fig. 3A, "Zhang San" in one circle may be node content indicating an entity named Zhang San, and "Li Si" in the other circle may be node content indicating an entity named Li Si. The "brother" label in the figure may indicate a relationship between the entity Zhang San and the entity Li Si, or equivalently a node relationship between the node whose content is Zhang San and the node whose content is Li Si. It is to be understood that a knowledge graph may include a large number of nodes; fig. 3A is a simple illustration and is not intended to limit the knowledge graphs to which the present application applies.
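The node structure just described (a node identifier plus node content, linked by node relationship information) can be sketched as a minimal in-memory graph; the class and field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str   # records the node's location in the knowledge graph
    content: str   # names the entity the node indicates

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)      # node_id -> Node
    relations: list = field(default_factory=list)  # (node_id, relation label, node_id)

    def add_node(self, node_id, content):
        self.nodes[node_id] = Node(node_id, content)

    def add_relation(self, a, label, b):
        self.relations.append((a, label, b))

# The FIG. 3A example: two entities connected by a "brother" relation.
kg = KnowledgeGraph()
kg.add_node("n1", "Zhang San")
kg.add_node("n2", "Li Si")
kg.add_relation("n1", "brother", "n2")
```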
In this embodiment, the target knowledge graph may be a knowledge graph established before step 201. Its nodes may be used to mine keywords from the text set. As an example, the target knowledge graph may be an educational knowledge graph.
Optionally, the matching can be performed between the node contents and the words in the text set.
It should be noted that when determining the target keywords that match nodes of the target knowledge graph, the graph itself may be obtained and matched against directly, or the matching target keywords may be determined through an interface provided by an encapsulated target knowledge graph.
By way of example, suppose the text set includes the words A, B, and C, and the nodes in the target knowledge graph have contents B, D, and E. The only word in the text set that matches a node of the graph is B, so the target keyword is determined to be B.
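In its simplest form, the matching in step 203 amounts to intersecting the text set with the node contents; a sketch of the example above (the function name is illustrative):

```python
def match_target_keywords(text_set, node_contents):
    """Return the words in the text set that match a node of the target knowledge graph."""
    return sorted(set(text_set) & set(node_contents))

# Text set {A, B, C} against node contents {B, D, E}: only B matches.
target_keywords = match_target_keywords({"A", "B", "C"}, {"B", "D", "E"})
```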
In some optional implementations of this embodiment, the text set may first be segmented into words, and the segmentation results with a high degree of match against nodes in the target knowledge graph are then selected as the target keywords.
And step 204, sending the target keywords to the terminal.
In the present embodiment, an electronic device (e.g., a server shown in fig. 1) on which the method for displaying information operates may transmit the target keyword to the terminal.
In this embodiment, the terminal may display the target keyword on a real-time image of the target object in an augmented reality manner.
In some optional implementations of this embodiment, the terminal may upload the video to the server in real time while capturing it. When the server returns the target keyword, the terminal may still be continuously capturing and displaying the real-time image of the target object.
As an example, after receiving the target keyword, the terminal may build a three-dimensional model of the target keyword. It may then determine the keyword's position in a constructed three-dimensional model of the target object, based on the keyword's actual position on the object. Finally, it may superimpose the keyword's three-dimensional model onto the model of the target object, displaying the target keyword in an augmented reality manner.
Fig. 3B shows a real-time image of the page the user is reading. Fig. 3C shows the terminal displaying the target keyword "birdie" on the real-time image of the page in an augmented reality manner.
In some existing scenarios, a user may run into difficulties while reading a book. For example, the current page may contain a large amount of text, making its key points hard to find, or the user may not fully understand the concepts on the page. The user may then need to read the page several times, open a reference book and look up concepts one by one, or type the concepts into a search engine on another device one at a time. These limitations of the prior art make reading inefficient.
With the method of this embodiment, when a user encounters text that they want to understand more deeply, they can use the terminal to capture an image of the object bearing that text. The terminal then sends the image to the server, the server mines the knowledge points in the text with the help of the target knowledge graph, and the knowledge points are displayed by the terminal in an augmented reality manner. In this way, the user can quickly grasp the knowledge points on the current page.
The method provided by this embodiment of the application acquires a target image of a target object captured by a terminal, extracts a text set from the target image, determines, from the text set, target keywords that match nodes in a target knowledge graph, and sends the target keywords to the terminal, which displays them on a real-time image of the target object in an augmented reality manner, thereby enriching the ways in which information can be displayed.
In some optional implementations of the embodiment, the target knowledge-graph further includes node relationship information indicating an association relationship between nodes.
In some optional implementations of this embodiment, the method may further include the following. First, the electronic device (e.g., a server) may determine, from the target knowledge graph and according to the node relationship information, an associated node that has an association relationship with a target node, where a target node is a node matching a target keyword. Then, the electronic device may determine a secondary keyword from the determined associated node and send it to the terminal. Finally, the terminal can display the secondary keyword in an augmented reality manner.
As an example, suppose the target knowledge graph includes node A, node B, and node C, where node A and node B have association relationship a between them and node B and node C have association relationship b. The node relationship information may then indicate both associations a and b; in general, the node relationship information may be a collection of information items each indicating one association relationship.
As an example, suppose the target node indicates a "debtor" entity and the node relationship information indicates an association between the node "debtor" and the node "creditor". The node "creditor" is then an associated node, and "creditor" may be determined as a secondary keyword. The server sends the secondary keyword "creditor" to the terminal, and the terminal displays it in an augmented reality manner.
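The associated-node lookup can be sketched over (node, label, node) triples as node relationship information; the relation label "owes" and the function name are invented for this illustration:

```python
def secondary_keywords(target_node, relations):
    """Collect the contents of nodes that share an association relationship with the target node."""
    associated = set()
    for a, _label, b in relations:
        if a == target_node:
            associated.add(b)
        elif b == target_node:
            associated.add(a)
    return sorted(associated)

# The debtor/creditor example: one association between the two nodes.
relations = [("debtor", "owes", "creditor")]
```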
Alternatively, the step of the electronic device (for example, a server) determining, from the target knowledge graph, an associated node having an association relationship with the target node according to the node relationship information may be triggered in response to a request sent by the terminal, or may be performed autonomously by the server.
It should be noted that the method of this implementation uses the target knowledge graph to find secondary keywords that extend the target keyword, and can therefore provide deeper information to the user.
In some optional implementations of this embodiment, the method shown in this embodiment may further include: first, the electronic device (e.g., a server) may determine whether the target nodes have an association relationship therebetween according to the node relationship information. Then, the electronic device (e.g., the server) may determine, in response to determining that the target nodes have an association relationship therebetween, an association relationship between the target keywords that match the target nodes having an association relationship therebetween. Then, the electronic device (e.g., server) may generate and transmit the instruction information to the terminal. Here, the above-mentioned indication information is used to indicate the association relationship between the target keywords having the association relationship. Finally, the terminal may display the indication information between the displayed target keywords in an augmented reality manner.
As an example, suppose the target nodes are target node A, target node B, and target node C, matching target keyword A, target keyword B, and target keyword C respectively. The electronic device (e.g., a server) may determine, according to the node relationship information, whether any pair among target nodes A, B, and C has an association relationship. If target node A and target node B are found to be associated, an association between target keyword A and target keyword B can be determined, and indication information indicating that association is generated.
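Generating the indication information amounts to checking each pair of target keywords against the node relationship information; a sketch with illustrative names (the relation label "related to" is invented for the example):

```python
from itertools import combinations

def keyword_relations(target_keywords, relations):
    """Return (keyword, label, keyword) indication info for each associated pair of target keywords."""
    linked = {frozenset((a, b)): label for a, label, b in relations}
    indication_info = []
    for x, y in combinations(sorted(target_keywords), 2):
        label = linked.get(frozenset((x, y)))
        if label is not None:
            indication_info.append((x, label, y))
    return indication_info

# Target keywords A, B, C; only A and B are associated in the graph.
info = keyword_relations(["A", "B", "C"], [("A", "related to", "B")])
```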
Optionally, the server may further determine relationship information between the target keyword and the secondary keyword, which the terminal then displays.
It should be noted that, the method provided by this implementation may provide the relationship information between the target keywords in the current video for the user on the basis of the target knowledge graph.
In some scenarios, this implementation can present the user with the relationships among the knowledge points on the page being read, helping the user quickly acquire and grasp the knowledge context.
In some optional implementations of this embodiment, the method shown in this embodiment may further include: first, the electronic device (e.g., server) may obtain the extension information stored in association with the target node and/or the associated node. Then, the electronic device (e.g., a server) may transmit the acquired extension information to the terminal. Finally, the terminal can display the acquired extension information.
Fig. 3D is a schematic diagram of the terminal displaying the extension information. In fig. 3D, "debtor" is the target keyword, and the content other than the title "debtor" may be regarded as the extension information.
Optionally, the extension information may include, but is not limited to, one or more of the following: resource acquisition paths, paraphrases, and related document names.
Alternatively, the resource acquisition path may be an item bank link address, a document link address, or the like. Paraphrasing may be information that explains the target keyword. The related document name may be a name of a document related to the target keyword.
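The extension information stored with a node can be modeled as a simple record; all field names and values here are illustrative, and the link is a placeholder, not a real resource path:

```python
# A hypothetical extension-information record for the "debtor" target keyword.
extension_info = {
    "keyword": "debtor",
    "paraphrase": "A party that owes a debt to another party (the creditor).",
    "resource_paths": ["https://example.com/item-bank/debtor"],  # e.g. an item-bank link address
    "related_documents": ["Introduction to Contract Law"],       # names of related documents
}
```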
It should be noted that this implementation can provide the user with further extended resources. After grasping the target keyword, the user can consolidate and deepen their understanding of it through the various avenues the extended resources provide, such as reading its paraphrase, consulting further documents, or practicing test questions.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for displaying information is shown. The process 400 of the method for displaying information includes the steps of:
step 401, a target image of a target object is acquired.
In this embodiment, an electronic device (e.g., the terminal device shown in fig. 1) on which the method for displaying information operates may capture a target image of a target object.
Step 402, sending the target image to a server.
In the present embodiment, an electronic device (e.g., a terminal device shown in fig. 1) on which the method for displaying information operates may transmit the above-described target image to a server.
Step 403, receiving the target keyword sent by the server.
In this embodiment, an electronic device (e.g., a terminal device shown in fig. 1) on which the method for displaying information operates may receive the target keyword transmitted by the server described above.
In this embodiment, the target keyword is obtained by the server through the following steps: acquiring the target image; extracting a character set from the target image; and determining the target keywords matched with the nodes in the target knowledge graph from the character set.
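The server-side steps just listed can be sketched as follows. The OCR step is stubbed out (it stands in for recognizing characters in the target image), and all function and node names are illustrative assumptions rather than the patent's actual implementation.

```python
def extract_character_set(recognized_text):
    """Stand-in for OCR output: split already-recognized text into words."""
    return recognized_text.split()

def match_target_keywords(words, graph_nodes):
    """Keep only the words that match a node in the target knowledge
    graph, preserving order of appearance and dropping duplicates."""
    matched = []
    for w in words:
        if w in graph_nodes and w not in matched:
            matched.append(w)
    return matched

graph_nodes = {"creditor", "debtor", "contract"}
words = extract_character_set("the creditor may demand performance from the debtor")
print(match_target_keywords(words, graph_nodes))  # ['creditor', 'debtor']
```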
Step 404, displaying the target keyword on the real-time image of the target object in an augmented reality mode.
In this embodiment, an electronic device (for example, the terminal device shown in fig. 1) on which the method for displaying information is executed may display the target keyword on the real-time image of the target object in an augmented reality manner.
In some optional implementations of this embodiment, the target knowledge-graph further includes node relationship information indicating an association relationship between nodes; and the above method further comprises: receiving a secondary keyword sent by the server, wherein the secondary keyword is determined by the server through the following steps: according to the node relation information, determining an associated node having an associated relation with a target node from the target knowledge graph, wherein the target node is a node matched with the target keyword; determining a secondary keyword according to the determined associated node; and displaying the secondary keywords in an augmented reality mode.
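A minimal sketch of the secondary-keyword step above: the node relationship information is modeled as an adjacency mapping, and the names of nodes associated with any target node become secondary keywords. The relation data are assumed for illustration.

```python
# Assumed node relationship information, encoded as an adjacency mapping.
node_relations = {
    "creditor": {"debtor", "claim"},
    "debtor": {"creditor", "obligation"},
}

def secondary_keywords(target_keywords):
    """Collect the names of associated nodes for all target nodes,
    excluding the target keywords themselves."""
    related = set()
    for kw in target_keywords:
        related |= node_relations.get(kw, set())
    return sorted(related - set(target_keywords))

print(secondary_keywords(["creditor"]))  # ['claim', 'debtor']
```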
In some optional implementation manners of this embodiment, the number of the target keywords is at least two; and the above method further comprises: receiving indication information sent by the server, wherein the indication information is used for indicating the association relationship between target keywords having an association relationship, and the indication information is obtained by the server through the following steps: determining whether the target nodes have an association relationship or not according to the node relation information; in response to determining that the target nodes have an association relationship, determining an association relationship between target keywords matched with the target nodes having the association relationship; generating the indication information; and displaying the indication information among the displayed target keywords in an augmented reality mode.
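The indication-information steps above can be sketched as a pairwise check over the target keywords: for each pair, the node relationship information is consulted and, if the matched nodes are related, a record is emitted for the terminal to render between the two displayed keywords. The relation label and keyword names are assumptions.

```python
from itertools import combinations

# Assumed node relationship information between matched nodes.
relation_labels = {("creditor", "debtor"): "holds a claim against"}

def build_indication_info(target_keywords):
    """For every pair of target keywords, look up the node relationship
    information and emit one record per related pair."""
    info = []
    for a, b in combinations(target_keywords, 2):
        label = relation_labels.get((a, b)) or relation_labels.get((b, a))
        if label is not None:
            info.append((a, label, b))
    return info

print(build_indication_info(["creditor", "debtor", "contract"]))
# [('creditor', 'holds a claim against', 'debtor')]
```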
In some optional implementation manners of this embodiment, the nodes of the target knowledge graph store extension information in association; and the above method further comprises: receiving the extension information sent by the server, wherein the received extension information is obtained by the server through the following step: acquiring the extension information stored in association with the target node and/or the associated node.
In some optional implementations of this embodiment, the method further includes: detecting a first trigger operation aiming at the displayed target keyword and/or the secondary keyword; and displaying the received extension information in response to the detection of the first trigger operation.
Optionally, the first trigger operation may be a predefined operation, which may include, but is not limited to, a click operation, a slide operation, and the like.
As an example, the target keyword is "creditor" and the secondary keyword is "debtor". The user can perform a first trigger operation on the displayed target keyword "creditor", and the terminal can then display a paraphrase of "creditor".
It should be noted that the logic for displaying the extension information may also be as follows: the terminal detects a first trigger operation on the displayed target keyword and/or the secondary keyword; in response to detecting the first trigger operation, the terminal sends an extension information acquisition request to the server; the server then obtains the extension information stored in association with the target node and/or the associated node; and the terminal receives the extension information sent by the server and displays it.
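This alternative, on-demand logic can be sketched as follows, with a local function standing in for the network request to the server. The store contents and function names are assumptions for illustration.

```python
# Stand-in for the server's store of extension information.
SERVER_EXTENSION_STORE = {
    "creditor": "A party entitled to demand performance from a debtor.",
}

def server_get_extension_info(keyword):
    """Stand-in for the server handling an extension information
    acquisition request over the network."""
    return SERVER_EXTENSION_STORE.get(keyword)

def on_first_trigger(keyword):
    """Terminal side: request extension info only after the first
    trigger operation on a displayed keyword is detected."""
    info = server_get_extension_info(keyword)
    if info is None:
        return f"{keyword}: no extension information available"
    return f"{keyword}: {info}"

print(on_first_trigger("creditor"))
```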
In some optional implementations of the embodiment, the target knowledge graph is an educational knowledge graph, and the extension information stored in association with the nodes of the target knowledge graph includes at least one of: resource acquisition paths, paraphrases, and related document names.
It should be noted that the educational knowledge graph may be a pre-established knowledge graph associated with education. Compared with a general-purpose graph, the educational knowledge graph focuses more on content in professional fields and emphasizes the construction of a knowledge system and concept definitions. Therefore, it can help the user learn quickly.
In some optional implementations of this embodiment, the acquiring a target image of a target object includes: detecting a second trigger operation aiming at a target control in a target application, wherein the target application is a library application; and collecting the video in response to the detection of the second trigger operation.
Alternatively, the library application may be an application for querying various professional materials, provided by a knowledge provider that has collected a large amount of professional material.
Alternatively, the user may use the library application when more specialized knowledge is needed. Placing the function of triggering video capture, and the subsequent steps, in the library application allows a user who needs professional knowledge to acquire it conveniently and quickly through the knowledge graph.
In the method provided by the above embodiment of the present application, a target image of a target object is acquired; sending the target image to a server; receiving a target keyword sent by the server, wherein the target keyword is obtained by the server through the following steps: acquiring the target image; extracting a character set from the target image; determining target keywords matched with nodes in a target knowledge graph from the character set; and displaying the target keywords on the real-time image of the target object in an augmented reality mode, thereby enriching the mode of displaying information.
It should be noted that details of implementation and technical effects of the embodiment corresponding to fig. 4 may refer to the description in the embodiment corresponding to fig. 2, and are not described herein again.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for displaying information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for displaying information of the present embodiment includes: a first acquisition unit 501, an extraction unit 502, a first determination unit 503, and a first transmission unit 504. The first acquisition unit is used for acquiring a target image of a target object acquired by the terminal; an extracting unit for extracting a character set from the target image; a first determining unit, configured to determine, from the text set, a target keyword that matches a node in a target knowledge graph; and a first sending unit, configured to send the target keyword to the terminal, where the terminal displays the target keyword on a real-time image of the target object in an augmented reality manner.
In this embodiment, specific processing of the first obtaining unit 501, the extracting unit 502, the first determining unit 503 and the first sending unit 504 and technical effects thereof may refer to related descriptions of step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the target knowledge-graph further includes node relationship information indicating an association relationship between nodes; and the above apparatus further comprises: a second determining unit (not shown) configured to determine, from the target knowledge graph, an associated node having an associated relationship with a target node according to the node relationship information, where the target node is a node matching the target keyword; a second transmitting unit (not shown) for: and determining and sending a secondary keyword to the terminal according to the determined associated node, wherein the secondary keyword is displayed by the terminal in an augmented reality mode.
In some optional implementation manners of this embodiment, the number of the target keywords is at least two; and the above apparatus further comprises: a third determination unit (not shown) for: determining whether the target nodes have an association relation or not according to the node relation information; a fourth determining unit (not shown) for determining an association relationship between the target keywords matching the target nodes having the association relationship in response to determining that the target nodes have the association relationship therebetween; a third transmitting unit (not shown) for: and generating and transmitting instruction information to the terminal, the instruction information being used for indicating the association relationship between the target keywords having the association relationship, wherein the terminal displays the instruction information among the displayed target keywords in an augmented reality manner.
In some optional implementation manners of this embodiment, the nodes of the target knowledge graph store extension information in association; and the above apparatus further comprises: a second acquisition unit (not shown) for acquiring extension information stored in association with the target node and/or the associated node; and a fourth transmitting unit (not shown) for transmitting the acquired extension information to the terminal, wherein the terminal displays the acquired extension information.
In some optional implementations of the embodiment, the target knowledge graph is an educational knowledge graph, and the extension information stored in association with the nodes of the target knowledge graph includes at least one of: resource acquisition paths, paraphrases, and related document names.
In some optional implementations of this embodiment, the target image is a video; and the extraction unit is further configured to: intercepting a video frame in the video; and identifying the intercepted video frame by using an optical character identification technology to generate the character set.
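The extraction unit's two steps above, intercepting video frames and recognizing them, can be sketched as follows. The OCR engine is stubbed out (frames are plain strings here), and the sampling rate and data are assumptions for illustration.

```python
def sample_frames(video_frames, every_n=2):
    """Intercept (sample) every n-th frame from the video."""
    return video_frames[::every_n]

def ocr(frame):
    """Stand-in for an optical character recognition engine."""
    return frame.split()

def build_character_set(video_frames):
    """Run OCR over the sampled frames and merge the recognized words."""
    characters = set()
    for frame in sample_frames(video_frames):
        characters.update(ocr(frame))
    return characters

video = ["creditor and debtor", "blank", "debtor owes an obligation", "blank"]
print(sorted(build_character_set(video)))
# ['an', 'and', 'creditor', 'debtor', 'obligation', 'owes']
```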
It should be noted that, for details of implementation and technical effects of each unit in the apparatus for displaying information provided in this embodiment, reference may be made to descriptions of other embodiments in this application, and details are not described herein again.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for displaying information, which corresponds to the method embodiment shown in fig. 4, and which is specifically applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for displaying information according to the embodiment includes: the system comprises an acquisition module 601, a sending module 602, a first receiving module 603 and a first display module 604. The system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a target image of a target object; the sending module is used for sending the target image to a server; a first receiving module, configured to receive a target keyword sent by the server, where the target keyword is obtained by the server through the following steps: acquiring the target image; extracting a character set from the target image; determining target keywords matched with nodes in a target knowledge graph from the character set; and the first display module is used for displaying the target keyword on the real-time image of the target object in an augmented reality mode.
In this embodiment, the detailed processing of the acquisition module 601, the sending module 602, the first receiving module 603, and the first displaying module 604 and the technical effects thereof can refer to the related descriptions of step 401, step 402, step 403, and step 404 in the corresponding embodiment of fig. 4, respectively, and are not described herein again.
In some optional implementations of this embodiment, the target knowledge-graph further includes node relationship information indicating an association relationship between nodes; and the above apparatus further comprises: a second receiving module (not shown) for: receiving a secondary keyword sent by the server, wherein the secondary keyword is determined by the server through the following steps: according to the node relation information, determining an associated node having an associated relation with a target node from the target knowledge graph, wherein the target node is a node matched with the target keyword; determining a secondary keyword according to the determined associated node; and a second display module (not shown) for displaying the secondary keyword in an augmented reality manner.
In some optional implementation manners of this embodiment, the number of the target keywords is at least two; and the above apparatus further comprises: a third receiving module (not shown) configured to receive indication information sent by the server, where the indication information is used to indicate an association relationship between target keywords having an association relationship, and where the indication information is obtained by the server through the following steps: determining whether the target nodes have an association relationship or not according to the node relation information; in response to determining that the target nodes have an association relationship, determining an association relationship between target keywords matched with the target nodes having the association relationship; generating the indication information; and a third display module (not shown) for displaying the indication information among the displayed target keywords in an augmented reality manner.
In some optional implementation manners of this embodiment, the nodes of the target knowledge graph store extension information in association; and the above apparatus further comprises: a fourth receiving module (not shown) configured to receive the extension information sent by the server, wherein the received extension information is obtained by the server through the following step: acquiring the extension information stored in association with the target node and/or the associated node.
In some optional implementations of this embodiment, the apparatus further includes: a detection module (not shown) for detecting a first trigger operation for the displayed target keyword and/or the secondary keyword; and a fourth display module (not shown) for displaying the received extension information in response to the detection of the first trigger operation.
In some optional implementations of the embodiment, the target knowledge graph is an educational knowledge graph, and the extension information stored in association with the nodes of the target knowledge graph includes at least one of: resource acquisition paths, paraphrases, and related document names.
In some optional implementation manners of this embodiment, the acquisition module is further configured to: detecting a second trigger operation aiming at a target control in a target application, wherein the target application is a library application; and collecting the video in response to the detection of the second trigger operation.
It should be noted that, for details of implementation and technical effects of each unit in the apparatus for displaying information provided in this embodiment, reference may be made to descriptions of other embodiments in this application, and details are not described herein again.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing a terminal device or server of an embodiment of the present application. The terminal device or the server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the Central Processing Unit (CPU) 701, performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first acquisition unit, an extraction unit, a first determination unit, and a first transmission unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the extraction unit may also be described as a "unit that extracts a text set from the above-described target image".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a target image of a target object acquired by a terminal; extracting a character set from the target image; determining target keywords matched with nodes in a target knowledge graph from the character set; and sending the target keyword to the terminal, wherein the terminal displays the target keyword on a real-time image of the target object in an augmented reality mode.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises an acquisition module, a sending module, a first receiving module and a first display module. The names of these modules do not in some cases constitute a limitation on the modules themselves, and for example, the sending module may also be described as a "module that sends the above-described target image to the server".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a target image of a target object; sending the target image to a server; receiving a target keyword sent by the server, wherein the target keyword is obtained by the server through the following steps: acquiring the target image; extracting a character set from the target image; determining target keywords matched with nodes in a target knowledge graph from the character set; and displaying the target keyword on the real-time image of the target object in an augmented reality mode.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (16)

1. A method for displaying information, comprising:
acquiring a target image of a target object acquired by a terminal;
extracting a character set from the target image;
determining target keywords matched with nodes in a target knowledge graph from the character set, wherein the target knowledge graph further comprises node relation information used for indicating association relations among the nodes, the target knowledge graph is an education knowledge graph emphasizing construction of a knowledge system and concept definition, and the number of the target keywords is at least two;
determining an associated node having an associated relation with a target node from the target knowledge graph according to the node relation information;
determining and sending a secondary keyword to the terminal according to the determined associated node;
determining whether an incidence relation exists between target nodes according to the node relation information, wherein the target nodes are nodes matched with the target keywords;
in response to determining that the target nodes have an association relation, determining an association relation between target keywords matched with the target nodes having the association relation;
and generating indication information for indicating the association relation between target keywords having the association relation, and displaying the target keywords and the indication information on a real-time image of the target object so that a user acquires and grasps a knowledge context in the character set, wherein the terminal displays the target keywords, the indication information and the secondary keywords in an augmented reality manner.
2. The method of claim 1, wherein node associations of the target knowledge-graph store extended information; and
the method further comprises the following steps:
acquiring extension information stored in association with a target node and/or an associated node;
and sending the acquired extended information to the terminal, wherein the terminal displays the acquired extended information.
3. The method of any of claims 1-2, wherein the expanded information stored by the node associations of the target knowledge-graph includes at least one of: resource acquisition paths, paraphrases, and related document names.
4. The method of claim 3, wherein the target image is a video; and
the extracting of the character set from the target image comprises:
intercepting a video frame in the video;
and identifying the intercepted video frame by utilizing an optical character identification technology to generate the character set.
5. A method for displaying information, comprising:
acquiring a target image of a target object;
sending the target image to a server;
receiving at least two target keywords, indication information and secondary keywords sent by the server, wherein the indication information is used for indicating an association relation between the target keywords having the association relation, and the target keywords, the indication information and the secondary keywords are obtained by the server through the following steps: acquiring the target image; extracting a character set from the target image; determining target keywords matched with nodes in a target knowledge graph from the character set; according to the node relation information, determining an associated node having an associated relation with a target node from the target knowledge graph; determining a secondary keyword according to the determined associated node; determining whether the target nodes have an association relation or not according to the node relation information; in response to determining that the target nodes have an association relation, determining an association relation between target keywords matched with the target nodes having the association relation; generating indication information, wherein the target knowledge graph further comprises node relation information used for indicating the association relation between nodes, the target knowledge graph is an education knowledge graph emphasizing a construction knowledge system and concept definitions, and the target nodes are nodes matched with the target keywords;
and displaying the target keyword and the indication information on the real-time image of the target object so that a user can acquire and master the knowledge context in the character set, wherein the terminal displays the target keyword, the indication information and the secondary keyword in an augmented reality manner.
6. The method of claim 5, wherein node associations of the target knowledge-graph store extended information; and
the method further comprises the following steps:
receiving the extended information sent by the server, wherein the received extended information is obtained by the server through the following steps: and acquiring the extension information stored in association with the target node and/or the associated node.
7. The method of claim 6, wherein the method further comprises:
detecting a first trigger operation aiming at the displayed target keyword and/or the secondary keyword;
and displaying the received extension information in response to detecting the first trigger operation.
8. The method of any of claims 5-7, wherein the expanded information stored by the node associations of the target knowledge-graph comprises at least one of: resource acquisition paths, paraphrases, and related document names.
9. The method of claim 8, wherein said acquiring a target image of a target object comprises:
detecting a second trigger operation aiming at a target control in a target application, wherein the target application is a library application;
in response to detecting the second trigger operation, capturing a video.
10. An apparatus for displaying information, comprising:
a first acquisition unit configured to acquire a target image of a target object acquired by a terminal;
an extraction unit configured to extract a character set from the target image;
a first determination unit configured to determine, from the character set, target keywords that match nodes in a target knowledge graph, wherein the target knowledge graph further comprises node relation information used for indicating the association relation between nodes, the target knowledge graph is an educational knowledge graph emphasizing the construction of a knowledge system and concept definitions, and there are at least two target keywords;
a second determination unit configured to determine, according to the node relation information, an associated node having an association relation with a target node from the target knowledge graph;
a second sending unit configured to determine secondary keywords according to the determined associated node and send the secondary keywords to the terminal;
a third determination unit configured to determine whether the target nodes have an association relation according to the node relation information, wherein the target nodes are nodes matched with the target keywords;
a fourth determination unit configured to determine, in response to determining that the target nodes have the association relation, the association relation between the target keywords matched with the target nodes having the association relation;
a display unit configured to generate indication information used for indicating the association relation between the target keywords having the association relation, and to display the target keywords and the indication information on a real-time image of the target object so that a user acquires and grasps the knowledge context in the character set, wherein the terminal displays the target keywords, the indication information and the secondary keywords in an augmented reality manner.
11. An apparatus for displaying information, comprising:
an acquisition module configured to acquire a target image of a target object;
a sending module configured to send the target image to a server;
a first receiving module configured to receive at least two target keywords sent by the server, wherein the target keywords are obtained by the server through the following steps: acquiring the target image; extracting a character set from the target image; determining target keywords matched with nodes in a target knowledge graph from the character set, wherein the target knowledge graph further comprises node relation information used for indicating the association relation between nodes, the target knowledge graph is an educational knowledge graph emphasizing the construction of a knowledge system and concept definitions, and the target nodes are nodes matched with the target keywords;
a second receiving module configured to receive secondary keywords sent by the server, wherein the secondary keywords are determined by the server through the following steps: determining, according to the node relation information, an associated node having an association relation with a target node from the target knowledge graph; determining the secondary keywords according to the determined associated node;
a third receiving module configured to receive indication information sent by the server, wherein the indication information is used for indicating the association relation between the target keywords having the association relation, and the indication information is obtained by the server through the following steps: determining whether the target nodes have an association relation according to the node relation information; in response to determining that the target nodes have the association relation, determining the association relation between the target keywords matched with the target nodes having the association relation; and generating the indication information;
a first display module configured to display the target keywords on a real-time image of the target object in an augmented reality manner;
a second display module configured to display the secondary keywords in an augmented reality manner;
and a third display module configured to display the indication information on the real-time image of the target object in an augmented reality manner, so that a user acquires and grasps the knowledge context in the character set.
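The terminal-side modules recited above can be wired together in a minimal sketch, with a stub standing in for the server of the earlier claims and plain-text lines standing in for the augmented-reality overlay. All class and method names here are illustrative assumptions:

```python
class StubServer:
    """Illustrative stand-in for the server: returns fixed target keywords,
    secondary keywords and indication information for any image."""
    def process_image(self, image):
        return ({"triangle", "angle"},
                {"pythagorean theorem"},
                {("angle", "triangle")})

class Terminal:
    def __init__(self, server):
        self.server = server  # destination used by the sending module

    def run(self, target_image):
        # acquisition + sending modules: send the target image to the server;
        # first/second/third receiving modules: receive the three results.
        targets, secondary, indication = self.server.process_image(target_image)
        # first/second/third display modules: overlay the results on the
        # real-time image (plain-text stand-in for the AR rendering).
        overlay = [f"[keyword] {k}" for k in sorted(targets)]
        overlay += [f"[secondary] {k}" for k in sorted(secondary)]
        overlay += [f"[link] {a} -- {b}" for a, b in sorted(indication)]
        return overlay

print(Terminal(StubServer()).run(b"camera frame"))
```

Splitting the terminal into send/receive/display modules, as the claim does, keeps the keyword matching and graph traversal entirely on the server; the terminal only renders what it receives.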
12. A server, comprising:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-4.
13. A terminal, comprising:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 5-9.
14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-4.
15. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 5-9.
16. A system for displaying information, comprising: a server according to claim 12 and a terminal according to claim 13.
CN201711176106.7A 2017-11-22 2017-11-22 Method and apparatus for displaying information Active CN110019906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711176106.7A CN110019906B (en) 2017-11-22 2017-11-22 Method and apparatus for displaying information

Publications (2)

Publication Number Publication Date
CN110019906A CN110019906A (en) 2019-07-16
CN110019906B true CN110019906B (en) 2022-07-08

Family

ID=67186487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711176106.7A Active CN110019906B (en) 2017-11-22 2017-11-22 Method and apparatus for displaying information

Country Status (1)

Country Link
CN (1) CN110019906B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457488B (en) * 2019-08-10 2020-11-20 海南大学 Content transmission modeling and processing optimization method based on data map, information map and knowledge map
CN111401347B (en) * 2020-06-05 2020-11-10 支付宝(杭州)信息技术有限公司 Information positioning method and device based on picture
CN114527899B (en) * 2020-10-30 2024-05-24 北京中地泓科环境科技有限公司 Method for displaying environment information based on drawing
CN113596562B (en) * 2021-08-06 2023-03-28 北京字节跳动网络技术有限公司 Video processing method, apparatus, device and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103050025A (en) * 2012-12-20 2013-04-17 广东欧珀移动通信有限公司 Mobile terminal learning method and learning system thereof
US8682879B2 (en) * 2010-04-16 2014-03-25 Bizmodeline Co., Ltd. Marker search system for augmented reality service
CN103929653A (en) * 2014-04-30 2014-07-16 成都理想境界科技有限公司 Enhanced real video generator and player, generating method of generator and playing method of player
CN105631051A (en) * 2016-02-29 2016-06-01 华南理工大学 Character recognition based mobile augmented reality reading method and reading system thereof
CN106650727A (en) * 2016-12-08 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Information display method and AR (augmented reality) device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227735A (en) * 2016-07-11 2016-12-14 苏州天梯卓越传媒有限公司 A kind of word cloud Topic Selection for Publishing Industry and system
CN106886594B (en) * 2017-02-21 2020-06-02 北京百度网讯科技有限公司 Method and device for displaying information
CN107273079B (en) * 2017-05-18 2020-06-02 网易有道信息技术(杭州)有限公司 Associated information display method, associated information map processing method, associated information display device, associated information map processing device, associated information map display medium, associated information map processing device and associated information map processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant