CN112015973A - Relation reasoning method and terminal for heterogeneous network - Google Patents

Relation reasoning method and terminal for heterogeneous network

Info

Publication number
CN112015973A
Authority
CN
China
Prior art keywords
inference
network
rule
network layer
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910472285.1A
Other languages
Chinese (zh)
Other versions
CN112015973B (en)
Inventor
张阳
熊云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910472285.1A priority Critical patent/CN112015973B/en
Publication of CN112015973A publication Critical patent/CN112015973A/en
Application granted granted Critical
Publication of CN112015973B publication Critical patent/CN112015973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9535 - Search customisation based on user profiles and personalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 - Computing arrangements using knowledge-based models
    • G06N 5/04 - Inference or reasoning models
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a relation reasoning method and a terminal for a heterogeneous network. The method comprises: receiving query information input by a user; and reasoning over the query information in the heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result. The heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining. Embodiments of the invention do not need to establish multiple network topologies for different types of data, thereby reducing the difficulty of reasoning.

Description

Relation reasoning method and terminal for heterogeneous network
Technical Field
The invention relates to the technical field of communication, in particular to a relation reasoning method and a terminal for a heterogeneous network.
Background
With the widespread adoption of the internet, users' online and offline behavior data grows day by day, with petabyte-scale data generated every day. In public-security scenarios, screening massive data for suspicious information and reasoning over it has become both a technical and a business challenge. When data types are classified as people, events, places and objects, a complex heterogeneous network topology is generated every day. To understand user intent, analyze user behavior, and analyze the associations between a user and other entities, the common relationship inference method in the prior art is relationship inference based on a homogeneous network. However, for nodes of different (heterogeneous) types, multiple network topologies need to be constructed, so the inference difficulty is large.
Disclosure of Invention
The embodiment of the invention provides a relation inference method and a terminal for a heterogeneous network, aiming to solve the problem of high inference difficulty.
In a first aspect, an embodiment of the present invention provides a relationship inference method for a heterogeneous network, including:
receiving query information input by a user;
reasoning the query information in the heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
In a second aspect, an embodiment of the present invention further provides a terminal, including:
the receiving module is used for receiving query information input by a user;
the reasoning module is used for reasoning the query information in the heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
In a third aspect, an embodiment of the present invention further provides a terminal, which includes a processor, a memory, and a computer program that is stored in the memory and is executable on the processor, and when the computer program is executed by the processor, the steps of the relationship inference method for a heterogeneous network are implemented.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above-mentioned relationship inference method for heterogeneous networks.
In the embodiment of the invention, different types of data are abstracted into network nodes and relations to form a heterogeneous network, and at least one of an event reasoning strategy, a rule reasoning strategy and a conditional reasoning strategy is used to query the heterogeneous network according to the query information input by the user, so as to obtain a corresponding reasoning result and thereby realize relation reasoning over the heterogeneous network. Multiple network topologies do not need to be established for different types of data, so the difficulty of reasoning is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a relationship inference method for a heterogeneous network according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a Rete algorithm in the relationship inference method for a heterogeneous network according to the embodiment of the present invention;
fig. 3 is an architecture diagram of a rule inference policy of a relationship inference method for a heterogeneous network according to an embodiment of the present invention;
fig. 4 is an exemplary diagram of a conditional inference engine in a relationship inference method for a heterogeneous network according to an embodiment of the present invention;
fig. 5 is a block diagram of a terminal according to an embodiment of the present invention;
fig. 6 is a block diagram of a terminal according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a relationship inference method for a heterogeneous network according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
step 101, receiving query information input by a user;
102, reasoning the query information in a heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
In the embodiment of the invention, data can be organized and managed in a graph database (onto which the large-scale heterogeneous network is mapped). People, events, places and objects are taken as network nodes, and the basic network layer is established with person-person, person-event, person-place and person-object relations as network relations; a second, high-level relation network layer with probability weights is formed from relations generated by mining, such as a suspected group relationship. For example, if user A watches movie D in mall C, then user A, mall C and movie D may be set as network nodes in the basic network layer, and the hierarchical relations of the network are formed by the relations between user A and the other network nodes. In addition, if user A and user B take the same flight at the same time, the probability that user A and user B are travelling together is determined in the high-level relation network layer to be a first preset percentage. The determination of the relations and of the probability weights may be made according to different strategies, which are not further described herein.
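By way of non-limiting illustration, the following minimal sketch builds the two-layer heterogeneous network just described using the networkx library; the node names, relation labels and the 0.8 probability weight are assumptions of the example, not values taken from the embodiment.

```python
# Minimal sketch of a two-layer heterogeneous network (illustrative only).
import networkx as nx

g = nx.MultiDiGraph()

# Basic network layer: people, events, places and objects as nodes,
# person-person / person-event / person-place / person-object as relations.
g.add_node("user_A", kind="person")
g.add_node("user_B", kind="person")
g.add_node("mall_C", kind="place")
g.add_node("movie_D", kind="event")
g.add_edge("user_A", "mall_C", layer="basic", relation="visited")
g.add_edge("user_A", "movie_D", layer="basic", relation="watched")

# High-level relation network layer: mined relations carry a probability weight,
# e.g. users A and B took the same flight, so they may be travelling together.
g.add_edge("user_A", "user_B", layer="high_level",
           relation="suspected_companion", probability=0.8)

# Query only the probability-weighted high-level layer.
high_level = [(u, v, d) for u, v, d in g.edges(data=True)
              if d["layer"] == "high_level"]
print(high_level)
```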
The event inference policy analyzes specific facts. For example, in an optional embodiment, both the input condition and the data are regarded as event triples (Subject-Predicate-Object, SPO), so that the inference process is converted into matching the input SPO against the basic network layer. Specifically, the query information input by the user is the input SPO; semantic analysis is first performed on the query information to obtain an SPO-structured query statement that the system can recognize, and the query is then executed to obtain the corresponding result.
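The following minimal sketch illustrates event inference as SPO matching; the toy whitespace-based "semantic analysis" and the example triples are assumptions of this sketch, not the actual parser of the embodiment.

```python
# Illustrative SPO matching against a tiny basic network layer.
from typing import Optional, Tuple

BASE_LAYER = {
    ("user_A", "wife", "user_F"),
    ("user_F", "father", "user_G"),
}

def parse_query(text: str) -> Optional[Tuple[str, str, Optional[str]]]:
    # e.g. "user_A wife ?" -> ("user_A", "wife", None); a placeholder parser.
    parts = text.split()
    if len(parts) != 3:
        return None
    s, p, o = parts
    return (s, p, None if o == "?" else o)

def match_spo(spo):
    s, p, o = spo
    return [t for t in BASE_LAYER
            if t[0] == s and t[1] == p and (o is None or t[2] == o)]

print(match_spo(parse_query("user_A wife ?")))  # -> [("user_A", "wife", "user_F")]
```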
The rule reasoning strategy screens the basic network layer for data matching the rule corresponding to the query information.
The conditional inference strategy is based on Bayesian inference and performs probabilistic inference over the high-level relation network layer.
In the embodiment of the invention, different types of data are abstracted into network nodes and relations to form a heterogeneous network, and at least one of an event reasoning strategy, a rule reasoning strategy and a conditional reasoning strategy is used to query the heterogeneous network according to the query information input by the user, so as to obtain a corresponding reasoning result and thereby realize relation reasoning over the heterogeneous network. Multiple network topologies do not need to be established for different types of data, so the difficulty of reasoning is reduced.
It should be noted that the preset inference policy may include some or all of an event inference policy, a rule inference policy, and a conditional inference policy, each of which yields its own result. Specifically, which of the event inference policy, the rule inference policy and the conditional inference policy to use may be determined according to the query information input by the user. For example, a query about a specific fact (e.g., the father of user A's wife) can be answered through the event inference policy. A query about possible facts (e.g., the people that both user A and user B know) can yield three inference results according to the event inference strategy, the rule inference strategy and the conditional inference strategy, and the three inference results together form the target inference result.
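As a hedged illustration of this strategy selection, the sketch below dispatches a query to one or all three engines; the "specific_fact" flag and the engine callables are assumptions of the example rather than a definition from the embodiment.

```python
# Illustrative dispatcher: choose inference strategies based on the query.
from typing import Callable, Dict

def infer(query: Dict,
          event_engine: Callable[[Dict], object],
          rule_engine: Callable[[Dict], object],
          conditional_engine: Callable[[Dict], object]) -> Dict[str, object]:
    # A query about one specific fact (e.g. "the father of user A's wife")
    # only needs the event inference policy.
    if query.get("specific_fact"):
        return {"event": event_engine(query)}
    # A query about possible facts combines all three strategies; the three
    # results together form the target inference result.
    return {
        "event": event_engine(query),
        "rule": rule_engine(query),
        "conditional": conditional_engine(query),
    }
```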
In an optional embodiment, when the preset inference policy includes the event inference policy, the step 102 includes:
step 1021, performing semantic analysis on the query information to obtain a query statement composed of a Subject-Predicate-Object (SPO) triple;
step 1022, matching in the basic network layer according to the query statement to obtain a matching result;
wherein the target inference result comprises the matching result.
In this embodiment, the event inference policy mainly converts an event (the query information) into triple form and traverses the basic network layer through an event inference engine to obtain the final inference result. The event inference engine is a component embedded in an application program that separates business decisions from the application code: instead of hand-writing many complex if-else judgments, business decisions are written using a predefined semantic module. The engine receives the data input by the user, interprets the business rules, and makes business decisions according to those rules.
The event reasoning engine performs much of its optimization based on the Rete algorithm, providing an efficient expert-system implementation; Rete is a fast forward-chaining rule matching algorithm. Viewing the system as a black box, as shown in fig. 2: the input on the left is the addition and deletion of rules, and the input on the right is the addition and deletion of facts. A rule consists of a left-hand side and a right-hand side: the left-hand side consists of condition (rule or function) units, and the right-hand side consists of action units. When a fact satisfies the conditions on the left-hand side, the rule fires and triggers the actions on the right-hand side.
It should be understood that, in the Rete algorithm, the structure of the Rete network may follow the related art. For example, the Rete network includes an alpha part and a beta part. The alpha part performs constant tests on working memory elements (WMEs), and the output of alpha nodes is stored in alpha memories (AMs); for the condition (<x> on <y>), the alpha memory holds the WMEs whose attribute field equals "on". The beta part mainly contains join nodes and beta memories. Join nodes perform consistency checks of variable bindings between conditions. Beta memories hold partial instantiations of a production (combinations of WMEs that satisfy some, but not all, of the production's conditions); these partial instantiations are called tokens. The alpha network performs all tests involving a single WME (greater than, equal to, less than, variable, constant), while the beta network performs tests involving two or more WMEs.
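A heavily simplified sketch of the alpha/beta split is shown below; it only illustrates the constant test and one join for the condition pair (<x> on <y>), (<y> on <z>), and is not a full Rete implementation. The working memory elements are assumptions of the example.

```python
# Simplified alpha/beta illustration (not a full Rete network).
WMES = [
    ("block_A", "on", "block_B"),
    ("block_B", "on", "table"),
    ("ball_C", "colour", "red"),
]

# Alpha memory: constant test "attribute == on".
alpha_on = [w for w in WMES if w[1] == "on"]

# Beta join: tokens binding <x>, <y> from the first condition are joined with
# WMEs whose subject matches <y> (variable-binding consistency check).
tokens = []
for (x, _, y) in alpha_on:
    for (y2, _, z) in alpha_on:
        if y == y2:
            tokens.append({"x": x, "y": y, "z": z})

print(tokens)  # -> [{'x': 'block_A', 'y': 'block_B', 'z': 'table'}]
```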
In another optional embodiment, when the preset inference policy includes the rule inference policy, the step 102 includes:
step 1023, identifying the query information to obtain a first fact, where the first fact includes an entity corresponding to the network node, relationship information of the entity, and attribute information of the entity;
step 1024, screening the first facts according to attribute constraint conditions to obtain second facts meeting the attribute constraint conditions in the first facts;
step 1025, based on the basic network layer, transmitting the first fact in a rule network to obtain a rule inference result;
and the target inference result comprises the rule inference result, and the rule network is established according to a target rule matched with the query information in a preset rule base.
For example, if the query information relates to user E, the first facts may include singer E, actor E and writer E. If the attribute constraint condition is "singer", actor E and writer E can be filtered out according to the attribute constraint condition. The attribute constraint condition may be determined from the query information; for example, when querying for user E performing together with singer A, the attribute constraint condition is clearly "singer". In addition, the attribute constraint condition may further include other attribute constraint information, which is not further limited herein.
Further, based on the foregoing embodiment, in this embodiment, in the process of transferring the first fact in a rule network, the method further includes:
and performing attribute constraint on intermediate facts generated in the process of transmitting the first fact in the regular network according to the attribute constraint conditions.
It should be understood that, in the embodiment of the present invention, transmitting the first fact through the rule network generates intermediate facts, and an intermediate fact may include an entity corresponding to a network node, relationship information of the entity, and attribute information of the entity. Whenever a new intermediate fact is added, it can be constrained according to the attribute constraint conditions: intermediate facts that do not satisfy the constraints are filtered out, and the remaining intermediate facts continue to be transmitted through the rule network until the final facts are obtained, where the final facts are the rule inference result.
In the embodiment of the invention, the facts transmitted into the network are dynamically constrained, which reduces fact reasoning time and improves reasoning efficiency.
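The following sketch illustrates the attribute-constrained propagation described above, reusing the singer E / actor E / writer E example; the single "performed with" rule and the fact representation are assumptions made for illustration only.

```python
# Illustrative attribute-constrained fact propagation through a trivial rule network.
from typing import Callable, Dict, List

Fact = Dict[str, str]

first_facts: List[Fact] = [
    {"entity": "E", "attribute": "singer"},
    {"entity": "E", "attribute": "actor"},
    {"entity": "E", "attribute": "writer"},
]

def constraint(fact: Fact) -> bool:
    # Attribute constraint condition derived from the query, e.g. "singer".
    return fact.get("attribute") == "singer"

def rule_performed_with(fact: Fact) -> List[Fact]:
    # One illustrative rule: singer E yields the intermediate fact
    # "E performed with singer A".
    return [{"entity": fact["entity"], "attribute": "singer",
             "relation": "performed_with", "object": "A"}]

def propagate(facts: List[Fact],
              rules: List[Callable[[Fact], List[Fact]]]) -> List[Fact]:
    frontier = [f for f in facts if constraint(f)]          # screen first facts
    final: List[Fact] = []
    for rule in rules:                                      # pass through the rule network
        produced = [g for f in frontier for g in rule(f)]
        frontier = [g for g in produced if constraint(g)]   # constrain intermediate facts
        final.extend(frontier)
    return final

print(propagate(first_facts, [rule_performed_with]))
```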
The specific implementation of the rule inference policy can be shown in fig. 3, and the following describes in detail each flow module in fig. 3:
During rule parsing, the target rules in the rule base that match the query information are read in one by one and parsed according to the rule expression format, so that the target rules are converted into the data format required by the rule service system (RuleBase); the interface of the rule service system is then called to add the rules, so that the rule service system can construct the rule network.
During validity checking, the first facts and the intermediate facts generated during rule execution are checked against the attribute constraint conditions.
The core module of the rule inference strategy is a rule inference engine that performs efficient matching of facts against rules. The implementation may use any of several rule matching algorithms and mainly comprises two stages: rule editing and runtime execution. Rule editing generates a rule network from the rules in the rule base, and runtime execution transfers the added facts through the network.
The Query module transmits facts to the rule service system to trigger rule derivation.
The rule network has a plurality of rule instances, and a management module (for example, a distributed scheduled-task management system based on a MongoDB database, which can be implemented with Agenda, a Node.js library) can determine the execution order of the rule instances in the activated state according to a certain strategy and execute them. The management module may also perform conflict resolution; the specific conflict resolution scheme may refer to the related art and is not described again herein.
The garbage collection module cleans up intermediate inference data and memory resources in batches, because newly added data can generate a large amount of intermediate inference data. Specifically, the recovery can be performed periodically.
Further, in another embodiment, when the preset inference policy includes the conditional inference policy, the step 102 includes:
a Bayesian conditional inference engine is used to infer over the basic network layer and the high-level relation network layer to obtain a conditional inference result corresponding to the query information;
wherein the target inference result includes the conditional inference result, and as shown in fig. 4, the conditional inference engine includes at least one of a multi-tree propagation inference engine, a clique-tree propagation inference engine, a method engine based on combinatorial optimization, a method engine based on search, and a Monte Carlo algorithm engine.
In the embodiment of the invention, the multi-tree propagation inference engine, the clique-tree propagation inference engine and the method engine based on combinatorial optimization perform exact inference, i.e., they compute the exact value of the marginal or conditional distribution of the target variable; however, the computational complexity of these algorithms grows exponentially with the size of the largest clique, so they are only suitable for small-scale Bayesian networks.
The search-based method engine and the Monte Carlo algorithm engine perform approximate reasoning. When the network is large, approximate reasoning is usually adopted, since approximate inference algorithms can obtain an approximate solution to the original problem with low time complexity.
In particular, the specific implementations of the multi-tree propagation inference engine, the clique-tree propagation inference engine, the method engine based on combinatorial optimization, the search-based method engine and the Monte Carlo algorithm engine may refer to the related art and are not described in detail herein.
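As an illustration of approximate conditional inference, the sketch below estimates a conditional probability by Monte Carlo rejection sampling over a two-node network; the network structure and all probability values are assumptions of the example, not parameters from the embodiment.

```python
# Illustrative Monte Carlo (rejection sampling) estimate of P(companion | same flight).
import random

P_COMPANION = 0.05                       # prior P(A and B travel together)
P_SAME_FLIGHT = {True: 0.9, False: 0.1}  # P(same flight | companion)

def sample():
    companion = random.random() < P_COMPANION
    same_flight = random.random() < P_SAME_FLIGHT[companion]
    return companion, same_flight

def estimate_p_companion_given_same_flight(n: int = 100_000) -> float:
    accepted = kept = 0
    for _ in range(n):
        companion, same_flight = sample()
        if same_flight:                  # reject samples not matching the evidence
            accepted += 1
            kept += companion
    return kept / accepted if accepted else float("nan")

print(estimate_p_companion_given_same_flight())  # roughly 0.32 with these numbers
```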
It should be noted that, various optional implementations described in the embodiments of the present invention may be implemented in combination with each other or implemented separately, and the embodiments of the present invention are not limited thereto.
Referring to fig. 5, fig. 5 is a structural diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 5, a terminal 500 includes:
a receiving module 501, configured to receive query information input by a user;
the inference module 502 is configured to infer the query information in the heterogeneous network based on a preset inference policy to obtain a target inference result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
Optionally, when the preset inference policy includes the event inference policy, the inference module 502 includes:
the analysis unit is used for performing semantic analysis on the query information to obtain a query statement composed of a Subject-Predicate-Object (SPO) triple;
the matching unit is used for matching in the basic network layer according to the query statement to obtain a matching result;
wherein the target inference result comprises the matching result.
Optionally, when the preset inference policy includes the rule inference policy, the inference module 502 includes:
the identification unit is used for identifying the query information to obtain a first fact, wherein the first fact comprises an entity corresponding to the network node, the relationship information of the entity and the attribute information of the entity;
the control unit is used for screening the first facts according to attribute constraint conditions to obtain second facts meeting the attribute constraint conditions in the first facts;
the inference unit is used for transmitting the first fact in a rule network based on the basic network layer to obtain a rule inference result;
and the target inference result comprises the rule inference result, and the rule network is established according to a target rule matched with the query information in a preset rule base.
Optionally, the control unit is further configured to, during the process of transferring the first fact in the rule network, perform attribute constraint on an intermediate fact generated during the process of transferring the first fact in the rule network according to the attribute constraint condition.
Optionally, when the preset inference policy includes the conditional inference policy, the inference module 502 is configured to: use a Bayesian conditional inference engine to infer over the basic network layer and the high-level relation network layer to obtain a conditional inference result corresponding to the query information;
wherein the target inference result comprises the conditional inference result, and the conditional inference engine comprises at least one of a multi-tree propagation inference engine, a clique-tree propagation inference engine, a combinatorial optimization-based method engine, a search-based method engine, and a Monte Carlo algorithm engine.
The terminal provided by the embodiment of the present invention can implement each process implemented by the terminal in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Fig. 6 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention.
The terminal 600 includes but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the terminal configuration shown in fig. 6 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to receive query information input by a user;
reasoning the query information in the heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
Optionally, when the preset inference policy includes the event inference policy, the processor 610 is configured to: perform semantic analysis on the query information to obtain a query statement composed of a Subject-Predicate-Object (SPO) triple; match in the basic network layer according to the query statement to obtain a matching result; wherein the target inference result comprises the matching result.
Optionally, when the preset inference policy includes the rule inference policy, the processor 610 is configured to:
identifying the query information to obtain a first fact, wherein the first fact comprises an entity corresponding to the network node, relationship information of the entity and attribute information of the entity;
screening the first facts according to attribute constraint conditions to obtain second facts meeting the attribute constraint conditions in the first facts;
transmitting the first fact in a rule network based on the basic network layer to obtain a rule reasoning result;
and the target inference result comprises the rule inference result, and the rule network is established according to a target rule matched with the query information in a preset rule base.
Optionally, the processor 610 is further configured to: and in the process of transmitting the first fact in the regular network, performing attribute constraint on an intermediate fact generated in the process of transmitting the first fact in the regular network according to the attribute constraint condition.
Optionally, when the preset inference policy includes the conditional inference policy, the processor 610 is configured to: use a Bayesian conditional inference engine to infer over the basic network layer and the high-level relation network layer to obtain a conditional inference result corresponding to the query information; wherein the target inference result comprises the conditional inference result, and the conditional inference engine comprises at least one of a multi-tree propagation inference engine, a clique-tree propagation inference engine, a combinatorial optimization-based method engine, a search-based method engine, and a Monte Carlo algorithm engine.
In the embodiment of the invention, different types of data are abstracted into network nodes and relations to form a heterogeneous network, and at least one of an event reasoning strategy, a rule reasoning strategy and a conditional reasoning strategy is used to query the heterogeneous network according to the query information input by the user, so as to obtain a corresponding reasoning result and thereby realize relation reasoning over the heterogeneous network. Multiple network topologies do not need to be established for different types of data, so the difficulty of reasoning is reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and send signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 602, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 can also provide audio output related to a specific function performed by the terminal 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601.
The terminal 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the terminal 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 6071 using a finger, a stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components to realize the input and output functions of the terminal, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to realize the input and output functions of the terminal, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 600 or may be used to transmit data between the terminal 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data and a phone book) created according to the use of the mobile phone, and the like. Further, the memory 609 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 610 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby performing overall monitoring of the terminal. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program, when executed by the processor 610, implements each process of the foregoing heterogeneous network relationship inference method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned heterogeneous network relationship inference method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A relationship inference method for a heterogeneous network, comprising:
receiving query information input by a user;
reasoning the query information in the heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
2. The method according to claim 1, wherein when the preset inference policy includes the event inference policy, the inferring the query information in the heterogeneous network based on the preset inference policy, and obtaining a target inference result includes:
performing semantic analysis on the query information to obtain a query statement composed of a Subject-Predicate-Object (SPO) triple;
matching in the basic network layer according to the query statement to obtain a matching result;
wherein the target inference result comprises the matching result.
3. The method according to claim 1, wherein when the preset inference policy includes the rule inference policy, the inferring the query information in the heterogeneous network based on the preset inference policy, and obtaining a target inference result includes:
identifying the query information to obtain a first fact, wherein the first fact comprises an entity corresponding to the network node, relationship information of the entity and attribute information of the entity;
screening the first facts according to attribute constraint conditions to obtain second facts meeting the attribute constraint conditions in the first facts;
transmitting the first fact in a rule network based on the basic network layer to obtain a rule reasoning result;
and the target inference result comprises the rule inference result, and the rule network is established according to a target rule matched with the query information in a preset rule base.
4. The method of claim 3, wherein in communicating the first fact in a regular network, the method further comprises:
and performing attribute constraint on intermediate facts generated in the process of transmitting the first fact in the regular network according to the attribute constraint conditions.
5. The method according to claim 1, wherein when the preset inference policy includes the conditional inference policy, the inferring the query information in the heterogeneous network based on the preset inference policy, and obtaining a target inference result includes:
a Bayesian conditional inference engine is used to infer over the basic network layer and the high-level relation network layer to obtain a conditional inference result corresponding to the query information;
wherein the target inference result comprises the conditional inference result, and the conditional inference engine comprises at least one of a multi-tree propagation inference engine, a clique-tree propagation inference engine, a combinatorial optimization-based method engine, a search-based method engine, and a Monte Carlo algorithm engine.
6. A terminal, comprising:
the receiving module is used for receiving query information input by a user;
the reasoning module is used for reasoning the query information in the heterogeneous network based on a preset reasoning strategy to obtain a target reasoning result;
the heterogeneous network comprises a basic network layer and a high-level relation network layer, wherein the basic network layer takes people, events, places and objects as network nodes and is established with person-person, person-event, person-place and person-object relations as network relations; the high-level relation network layer is a probability-weighted network layer established from relations generated by mining.
7. The terminal according to claim 6, wherein when the preset inference policy comprises the event inference policy, the inference module comprises:
the analysis unit is used for performing semantic analysis on the query information to obtain a query statement composed of a Subject-Predicate-Object (SPO) triple;
the matching unit is used for matching in the basic network layer according to the query statement to obtain a matching result;
wherein the target inference result comprises the matching result.
8. The terminal according to claim 6, wherein when the preset inference policy comprises the rule inference policy, the inference module comprises:
the identification unit is used for identifying the query information to obtain a first fact, wherein the first fact comprises an entity corresponding to the network node, the relationship information of the entity and the attribute information of the entity;
the control unit is used for screening the first facts according to attribute constraint conditions to obtain second facts meeting the attribute constraint conditions in the first facts;
the inference unit is used for transmitting the first fact in a rule network based on the basic network layer to obtain a rule inference result;
and the target inference result comprises the rule inference result, and the rule network is established according to a target rule matched with the query information in a preset rule base.
9. The terminal according to claim 8, wherein the control unit is further configured to, during the transferring of the first fact in the regular network, perform attribute constraint on an intermediate fact generated during the transferring of the first fact in the regular network according to the attribute constraint condition.
10. The terminal of claim 6, wherein when the preset inference policy comprises the conditional inference policy, the inference module is configured to: use a Bayesian conditional inference engine to infer over the basic network layer and the high-level relation network layer to obtain a conditional inference result corresponding to the query information;
wherein the target inference result comprises the conditional inference result, and the conditional inference engine comprises at least one of a multi-tree propagation inference engine, a clique-tree propagation inference engine, a combinatorial optimization-based method engine, a search-based method engine, and a Monte Carlo algorithm engine.
11. A terminal, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the method of relational inference of heterogeneous networks according to any of claims 1-5.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for relational inference of heterogeneous networks according to any one of claims 1 to 5.
CN201910472285.1A 2019-05-31 2019-05-31 Relationship reasoning method and terminal of heterogeneous network Active CN112015973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910472285.1A CN112015973B (en) 2019-05-31 2019-05-31 Relationship reasoning method and terminal of heterogeneous network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910472285.1A CN112015973B (en) 2019-05-31 2019-05-31 Relationship reasoning method and terminal of heterogeneous network

Publications (2)

Publication Number Publication Date
CN112015973A true CN112015973A (en) 2020-12-01
CN112015973B CN112015973B (en) 2023-08-01

Family

ID=73506268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910472285.1A Active CN112015973B (en) 2019-05-31 2019-05-31 Relationship reasoning method and terminal of heterogeneous network

Country Status (1)

Country Link
CN (1) CN112015973B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590782A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Training method, reasoning method and device of reasoning model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070323A1 (en) * 2007-09-12 2009-03-12 Nishith Parikh Inference of query relationships
US20100241644A1 (en) * 2009-03-19 2010-09-23 Microsoft Corporation Graph queries of information in relational database
CN101930437A (en) * 2009-06-19 2010-12-29 日电(中国)有限公司 Method and equipment for reasoning inconsistent and uncertain ontology associated with specific query
CN103942614A (en) * 2014-04-09 2014-07-23 清华大学 Method and system for predicting heterogeneous network linking relation
CN107092516A (en) * 2017-03-29 2017-08-25 东南大学 A kind of inference method for combining body and default program
CN109710737A (en) * 2018-12-21 2019-05-03 神思电子技术股份有限公司 A kind of intelligent inference method based on structuralized query

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070323A1 (en) * 2007-09-12 2009-03-12 Nishith Parikh Inference of query relationships
US20100241644A1 (en) * 2009-03-19 2010-09-23 Microsoft Corporation Graph queries of information in relational database
CN101930437A (en) * 2009-06-19 2010-12-29 日电(中国)有限公司 Method and equipment for reasoning inconsistent and uncertain ontology associated with specific query
CN103942614A (en) * 2014-04-09 2014-07-23 清华大学 Method and system for predicting heterogeneous network linking relation
CN107092516A (en) * 2017-03-29 2017-08-25 东南大学 A kind of inference method for combining body and default program
CN109710737A (en) * 2018-12-21 2019-05-03 神思电子技术股份有限公司 A kind of intelligent inference method based on structuralized query

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOANNA GOLINSKA-PILAREK et al.: "Relational Reasoning in Formal Concept Analysis", 2007 IEEE INTERNATIONAL FUZZY SYSTEMS CONFERENCE *
MA Leilei; LI Hongwei; LIANG Rupeng; LI Li: "Ontology-assisted qualitative spatial relation reasoning mechanism", Application Research of Computers, no. 01 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590782A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Training method, reasoning method and device of reasoning model
CN113590782B (en) * 2021-07-28 2024-02-09 北京百度网讯科技有限公司 Training method of reasoning model, reasoning method and device

Also Published As

Publication number Publication date
CN112015973B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN104967679B (en) Information recommendation system, method and device
CN111125523B (en) Searching method, searching device, terminal equipment and storage medium
CN111368290A (en) Data anomaly detection method and device and terminal equipment
CN109857494B (en) Message prompting method and terminal equipment
CN110995810B (en) Object identification method based on artificial intelligence and related device
CN108984066B (en) Application icon display method and mobile terminal
CN112001741A (en) Method and device for constructing multitask processing model, electronic equipment and storage medium
CN111753520B (en) Risk prediction method and device, electronic equipment and storage medium
CN106797336A (en) The method and apparatus of history chat record displaying
CN111125307A (en) Chat record query method and electronic equipment
CN115565236A (en) Face recognition attack processing method, device, equipment and storage medium
CN112464831B (en) Video classification method, training method of video classification model and related equipment
CN110378798B (en) Heterogeneous social network construction method, group recommendation method, device and equipment
CN112015973B (en) Relationship reasoning method and terminal of heterogeneous network
CN111050223A (en) Bullet screen information processing method and electronic equipment
CN111753047B (en) Text processing method and device
CN115240250A (en) Model training method and device, computer equipment and readable storage medium
CN110928443B (en) Touch position detection method and electronic equipment
CN111131605B (en) Message management method, electronic device, and computer-readable storage medium
CN111913942B (en) Data quality detection method and device
CN113112011B (en) Data prediction method and device
CN110113485B (en) Information processing method and mobile terminal
CN113518152A (en) Telephone number identification method and system and electronic equipment
CN116912352B (en) Picture generation method and device, electronic equipment and storage medium
CN109614483B (en) Information classification method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant