CN114817584B - Information processing method, computer-readable storage medium, and electronic device


Info

Publication number
CN114817584B
CN114817584B (application CN202210745675.3A)
Authority
CN
China
Prior art keywords
target
information
target operation
nodes
operation steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210745675.3A
Other languages
Chinese (zh)
Other versions
CN114817584A (en)
Inventor
裘虬
满远斌
董保华
王海滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210745675.3A
Publication of CN114817584A
Application granted
Publication of CN114817584B
Legal status: Active

Classifications

    • G06F16/435 Filtering based on additional data, e.g. user or group profiles (under G06F16/00 Information retrieval; G06F16/40 retrieval of multimedia data)
    • G06F16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/489 Retrieval characterised by using metadata, using time information
    • G06F40/279 Recognition of textual entities (under G06F40/00 Handling natural language data; G06F40/20 Natural language analysis)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an information processing method, a computer-readable storage medium, and an electronic device. The method comprises: acquiring target operation information and target multimedia information of a target object; extracting the target operation information to obtain a plurality of target operation steps and a target extraction order of those steps; and generating a target workflow based on the target object, the target operation steps, the target extraction order, and the target multimedia information, wherein the target workflow comprises a plurality of serially connected nodes corresponding to the target operation steps. This addresses the technical problem that related-art question-answering systems struggle to give effective answers to operational (how-to) questions.

Description

Information processing method, computer-readable storage medium, and electronic device
Technical Field
The present application relates to the field of information processing, and in particular, to an information processing method, a computer-readable storage medium, and an electronic device.
Background
At present, when a user asks a question in a relevant scenario, the question is generally described in text or speech and then submitted to a model for querying, and the model returns the query result.
In view of the above problem, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present application provide an information processing method, a computer-readable storage medium, and an electronic device, so as to solve at least the technical problem that, in related-art question answering, it is difficult to give an effective answer to an operational question.
According to an aspect of an embodiment of the present application, there is provided an information processing method including: acquiring target operation information and target multimedia information of a target object; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
According to another aspect of the embodiments of the present application, there is also provided an information processing method, including: acquiring farming operation information and farming multimedia information of target crops; extracting the farm work operation information to obtain a plurality of farm work operation steps of the farm work operation information and a target extraction sequence of the plurality of farm work operation steps; and generating a target operation flow based on the target crops, the plurality of farm work operation steps, the target extraction sequence and the farm work multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of farm work operation steps.
According to another aspect of the embodiments of the present application, there is also provided an information processing method, including: acquiring target operation information and target multimedia information of a target building; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target building, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
According to another aspect of the embodiments of the present application, there is also provided an information processing method, including: the cloud server acquires target operation information and target multimedia information of a target object; the cloud server extracts the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; the cloud server generates a target operation flow based on a target object, a plurality of target operation steps, a target extraction sequence and target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of target operation steps.
According to another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium, which includes a stored program, wherein when the program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute the method in any one of the above embodiments.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including: a memory and a processor for executing a program stored in the memory, wherein the program when executed performs the method of any of the above embodiments.
In the embodiments of the present application, target operation information and target multimedia information of a target object are first acquired; the target operation information is extracted to obtain a plurality of target operation steps and their target extraction order; and a target workflow is generated based on the target object, the target operation steps, the target extraction order, and the target multimedia information, where the target workflow comprises a plurality of serially connected nodes corresponding to the target operation steps. The operation steps can thus be presented more clearly through the target workflow, which is convenient for the user to review. Notably, because the target multimedia information is incorporated when generating the target workflow, clearer answers can be obtained through it, making the answers easier for the user to understand and solving the technical problem that related-art question answering struggles to give effective answers to operational questions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal (or mobile device) for implementing an information processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an information processing method according to embodiment 1 of the present application;
FIG. 3 is a schematic diagram of an alternative question-answering system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative question-answering system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an alternative SOP operation according to an embodiment of the present application;
fig. 6 is a flowchart of an information processing method according to embodiment 2 of the present application;
fig. 7 is a flowchart of an information processing method according to embodiment 3 of the present application;
fig. 8 is a flowchart of an information processing method according to embodiment 4 of the present application;
FIG. 9 is a schematic view of an information processing apparatus according to embodiment 5 of the present application;
fig. 10 is a schematic view of an information processing apparatus according to embodiment 6 of the present application;
fig. 11 is a schematic view of an information processing apparatus according to embodiment 7 of the present application;
fig. 12 is a schematic view of an information processing apparatus according to embodiment 8 of the present application;
fig. 13 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the technical solutions, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:
A knowledge graph is a semantic network formed of entities and relations; it stores knowledge as points (nodes) and edges.
Knowledge question answering is question answering grounded in industry or domain background knowledge; it returns the corresponding domain/industry knowledge in response to a user's natural-language question.
Farming guidance: a concrete, specialized form of knowledge question answering, whose main aim is to provide the expected agricultural guidance in response to the user's various ways of asking questions.
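The knowledge-graph definition above can be illustrated with a minimal sketch; the entity and relation names below are hypothetical examples for a farming scenario, not taken from the patent:

```python
# A minimal knowledge graph: entities as nodes, relations as labeled edges.
# All entity/relation names here are illustrative, not from the patent.
graph = {
    ("tomato", "susceptible_to"): ["late_blight"],
    ("late_blight", "treated_by"): ["remove_infected_leaves", "apply_fungicide"],
}

def query(head, relation):
    """Return the tail entities for a (head, relation) pair, or [] if absent."""
    return graph.get((head, relation), [])

print(query("tomato", "susceptible_to"))   # ['late_blight']
print(query("late_blight", "treated_by"))  # ['remove_infected_leaves', 'apply_fungicide']
```

A real system would back this lookup with a graph database, but the point-and-edge storage model is the same.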
Example 1
According to an embodiment of the present application, an information processing method embodiment is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in a different order.
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the information processing method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors (shown as 102a, 102b, …, 102n; these may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, it may further include: a display, an input/output (I/O) interface, a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of variable resistance termination paths connected to the interface).
The memory 104 can be used for storing software programs and modules of application software, such as program instructions/data storage devices corresponding to the information processing method in the embodiment of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the information processing method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located from the processor, which may be connected to the computer terminal 10 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted that in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should also be noted that fig. 1 is only one example, intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
Under the operating environment, the application provides an information processing method as shown in fig. 2. Fig. 2 is a flowchart of an information processing method according to embodiment 1 of the present application. The method comprises the following steps:
step S202, target operation information and target multimedia information of the target object are obtained.
The target object can be crops, plots, greenhouses, forests and the like in an agriculture and forestry scene, the target object can be buildings, road networks and the like in an urban planning scene, and the target object can also be water areas, barrages, culture areas and the like in a water conservancy scene.
The target operation information may be farm operation information, building construction information, and the like.
The target operation steps may be operation steps in the farm work operation information, and the target operation steps may be operation steps in the building construction information.
The target extraction order described above may be an operation order of a plurality of target operation steps.
The target operation information can be obtained from the current knowledge base, material base or expert question and answer process.
And step S204, extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps.
In an optional embodiment, the target operation steps may be extracted according to a flow of a plurality of target operation steps in the target operation information to obtain an extraction order, and the target job flow may be constructed according to the target object, the plurality of target operation steps, the target extraction order, and the target multimedia information.
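As a rough illustration of step S204, the sketch below pulls numbered steps out of a block of operation text and records their extraction order. The numbered-list format and the regular expression are assumptions made for the example; the patent itself describes semantic analysis rather than a fixed text format:

```python
import re

def extract_steps(operation_text):
    """Extract numbered operation steps and their extraction order.

    Assumes steps are written as '1. ...', '2. ...'; a production system
    would use semantic analysis as described in the embodiment.
    """
    steps = re.findall(r"\d+\.\s*([^\n]+)", operation_text)
    order = list(range(len(steps)))  # the target extraction order
    return steps, order

text = "1. Prepare the seedbed\n2. Sow the seeds\n3. Water thoroughly"
steps, order = extract_steps(text)
print(steps)  # ['Prepare the seedbed', 'Sow the seeds', 'Water thoroughly']
print(order)  # [0, 1, 2]
```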
Step S206, generating a target operation flow based on the target object, the plurality of target operation steps, the target extraction sequence and the target multimedia information.
The target operation process comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
The above-described series relationship of the plurality of nodes connected in series corresponds to the operation flow of the plurality of target operation steps. The plurality of nodes may be exposed through interaction with a user.
The target Operation flow may be a Standard Operation Procedure (SOP).
In an alternative embodiment, the target workflow may be triggered by querying the operation steps of the target object, and the user may trigger to display a plurality of nodes in the target workflow in an interactive manner, so that the plurality of operation steps are displayed in sequence through the plurality of nodes. Optionally, after the operation step corresponding to the node is displayed, the user may trigger the next node in the target work flow in an interactive manner.
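A target workflow of serially connected nodes, stepped through one node per user trigger, might be sketched as a simple linked structure. The class and field names are hypothetical, chosen for the example:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SOPNode:
    step: str                                        # the target operation step
    media: List[str] = field(default_factory=list)   # associated multimedia
    next: Optional["SOPNode"] = None                 # serial link to the next node

def walk(head: SOPNode):
    """Yield one node per user trigger (modeled here as one per iteration)."""
    node = head
    while node is not None:
        yield node
        node = node.next

# Chain three nodes in series and display their steps in order.
n3 = SOPNode("Water thoroughly")
n2 = SOPNode("Sow the seeds", next=n3)
n1 = SOPNode("Prepare the seedbed", media=["sowing_demo.mp4"], next=n2)
print([n.step for n in walk(n1)])
```

In an interactive UI, each call to the generator would be driven by the user tapping "next", so the steps are revealed one node at a time.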
In an alternative embodiment, multiple versions of the target workflow may be constructed based on the target object, the plurality of target operation steps, the target extraction order, and the target multimedia information.
In another optional embodiment, after the target workflow is generated, the target workflow may be fed back to the user, the user may adjust the target workflow to obtain a feedback result, the target workflow may be modified according to the feedback result to obtain a modified target workflow, and the modified target workflow is saved. The user can adjust the nodes in the target operation flow and can also adjust the multimedia information corresponding to the nodes.
Different versions of the target workflow can be used to answer different users' questions. For children, the child version of the target workflow can be used; for adults, the adult version; and for groups of different professions, correspondingly different versions can be used.
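Serving a different workflow version to each user group, as described above, amounts to a lookup keyed on the user profile. The group names and version identifiers below are illustrative:

```python
# Hypothetical mapping from user group to workflow version.
WORKFLOW_VERSIONS = {
    "child": "sop_v1_child",
    "adult": "sop_v1_adult",
    "agronomist": "sop_v1_expert",
}

def select_workflow(user_group, default="sop_v1_adult"):
    """Pick the workflow version matching the user's group, with a fallback."""
    return WORKFLOW_VERSIONS.get(user_group, default)

print(select_workflow("child"))    # sop_v1_child
print(select_workflow("tourist"))  # falls back to sop_v1_adult
```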
Taking a farming scenario as an example: the farming steps can be extracted from farming-instruction cases through semantic analysis and related techniques, with the farming operation standard of each step fixed; existing standard operation pictures/videos are associated with the corresponding operation steps through text matching, image-text matching, and other semantic-analysis/deep-learning techniques; the SOP is then chained together according to the extracted step order; and after review and verification by an expert, the target workflow is generated.
Through the above steps, target operation information and target multimedia information of a target object are first acquired; the target operation information is extracted to obtain a plurality of target operation steps and their target extraction order; and a target workflow is generated based on the target object, the target operation steps, the target extraction order, and the target multimedia information, where the target workflow comprises a plurality of serially connected nodes corresponding to the target operation steps. The operation steps can thus be presented more clearly through the target workflow, which is convenient for the user to review. Notably, the target multimedia information is incorporated when generating the target workflow, so clearer answers can be obtained through it; this makes the answers easier for the user to understand and solves the technical problem that related-art question answering struggles to give effective answers to operational questions.
In the above embodiments of the present application, generating a target workflow based on a target object, a plurality of target operation steps, a target extraction sequence, and target multimedia information includes: associating the target operation steps with the target multimedia information to generate a plurality of nodes; connecting a plurality of nodes in series based on the target extraction sequence to obtain a series result; a target workflow is generated based on the target object and the concatenation result.
In an optional embodiment, a plurality of target operation steps and target multimedia information may be matched, so that when a plurality of target operation steps are displayed for a user, the user can know specific operation steps more clearly by viewing the target multimedia information corresponding to the target operation steps, thereby improving efficiency of guiding the user.
In another optional embodiment, the plurality of target operation steps may be matched and associated with the target multimedia information to obtain a plurality of association results, from which the corresponding nodes are generated; the nodes are then connected in series according to the extraction order of the target operation steps to obtain a concatenation result, and an SOP, i.e., the target workflow, is obtained from the target object and the concatenation result. When the target object is subsequently queried, the SOP node corresponding to it can be found and the target operation step displayed through that node, and the user can also see the target multimedia information corresponding to that step.
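The association-then-serialization procedure above can be sketched end to end. The `match_media` function is a crude stand-in for the text-matching and image-text-matching techniques the patent names, and all names and sample data are hypothetical:

```python
def match_media(step, media_pool):
    """Stand-in for semantic text/image-text matching: a simple keyword
    overlap between the step text and the media file name."""
    return [m for m in media_pool if any(w in m for w in step.lower().split())]

def build_sop(target_object, steps, order, media_pool):
    # 1. Associate each target operation step with multimedia -> nodes.
    nodes = [{"step": steps[i], "media": match_media(steps[i], media_pool)}
             for i in order]
    # 2. The list order realizes the serial connection (concatenation result).
    # 3. Generate the workflow from the target object plus the node chain.
    return {"object": target_object, "nodes": nodes}

sop = build_sop(
    "tomato",
    ["Sow the seeds", "Water thoroughly"],
    [0, 1],
    ["sow_demo.mp4", "water_guide.jpg"],
)
print(sop["nodes"][0]["media"])  # ['sow_demo.mp4']
print(sop["nodes"][1]["media"])  # ['water_guide.jpg']
```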
In the above embodiment of the present application, associating a plurality of target operation steps with target multimedia information to generate a plurality of nodes, includes: identifying the target multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the target multimedia information; and associating the target operation steps with the target multimedia information based on the preset priority and the identification result to generate a plurality of nodes, wherein the preset priority is used for representing the display priority of the target multimedia information of different categories when the nodes are output.
The preset priority may be set as desired, or according to actual usage requirements; for example, it may specify that the display priority of text is higher than that of pictures and that of pictures higher than that of video, or some other order. The setting of the preset priority is not limited here.
In an optional embodiment, multimodal recognition can be performed on the target multimedia information to obtain a recognition result, from which the category of the target multimedia information can be determined. The plurality of target operation steps can then be associated with the target multimedia information according to the preset priority to obtain a plurality of nodes. When the operation step corresponding to a node is displayed, the multimedia information for that step is displayed according to the preset priority, which is convenient for the user to view.
In another optional embodiment, in the process of displaying the nodes, the nodes may be displayed in the form of cards, where the cards may have text descriptions of the operation steps corresponding to the nodes, and target multimedia information corresponding to the operation steps may be displayed in the cards according to a preset priority.
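Displaying a node's multimedia according to a preset category priority, as in the text-before-pictures-before-video example given earlier, reduces to a sort by category rank. The category names and rank values are illustrative:

```python
# Preset display priority: lower rank is shown first (illustrative values).
PRESET_PRIORITY = {"text": 0, "picture": 1, "video": 2}

def order_for_display(media_items):
    """media_items: (category, payload) pairs from multimodal recognition;
    unknown categories sort last."""
    return sorted(media_items, key=lambda item: PRESET_PRIORITY.get(item[0], 99))

card = order_for_display([
    ("video", "prune_demo.mp4"),
    ("text", "Cut just above the second leaf node."),
    ("picture", "prune_angle.jpg"),
])
print([category for category, _ in card])  # ['text', 'picture', 'video']
```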
In the above embodiment of the present application, the method further includes: acquiring first remote sensing data and first meteorological data of a target object, wherein the first meteorological data are meteorological data of an area where the target object is located; identifying the first remote sensing data and the first meteorological data to obtain a first identification result, wherein the first identification result is used for indicating whether the target object is in a preset state or not; inquiring the first identification result to generate target guide information, wherein the target guide information is used for guiding the first user to execute operation corresponding to a preset state; target guidance information is output based on the target workflow.
The first remote sensing data may be a remote sensing image or a remote sensing image of the target object. The application takes crops as an example for explanation, and the first remote sensing data can be remote sensing images or remote sensing images of the area where the crops are located.
The first weather data may be weather data of an area where the target object is located, where the first weather data may be current weather data of the area where the target object is located, or may be weather data in a current time period and a future time period of the area where the target object is located.
In an optional embodiment, the first remote sensing data and the first meteorological data of the target object may be acquired periodically or periodically, so as to determine whether the target object is abnormal according to the first remote sensing data and the first meteorological data of the target object.
In another optional embodiment, the first meteorological data may be obtained periodically or at regular intervals, and when the first meteorological data is abnormal, the first remote sensing data of the target object may be acquired, so that when the weather in the target object's area is abnormal, the first remote sensing data can be used to determine whether the target object has been affected by the meteorological factor and become abnormal. For example, on rainy days, first remote sensing data of crops can be acquired to check whether the crops have been affected by the rain.
In yet another optional embodiment, the first remote sensing data may be acquired periodically or aperiodically, and when the first remote sensing data shows that the target object is abnormal, the first meteorological data of the area where the target object is located may be acquired. The first meteorological data can then be used to judge whether the abnormality will worsen or resolve over a future time period. For example, when the first remote sensing data shows poor growth of the target object, the first meteorological data of the area can be obtained to judge whether the poor growth is caused by the weather, and whether human intervention is needed can be judged from the meteorological data for a future period.
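The acquisition-and-identification flow in the embodiments above can be sketched as follows. This is a minimal illustration only; the function names, data fields, and thresholds are assumptions for the sketch, not the patent's actual interfaces.

```python
# Hypothetical sketch of the monitoring described above: both data sources are
# acquired and identified together, yielding the first identification result
# that indicates whether the target object is in a preset state.

def check_target_object(get_remote_sensing, get_meteorological, identify):
    """Acquire both data sources and return the first identification result."""
    remote = get_remote_sensing()        # first remote sensing data
    weather = get_meteorological()       # first meteorological data
    # identify() indicates whether the target object is in a preset state
    return identify(remote, weather)

# Example with stubbed sensors: crops look unhealthy and it has been raining.
result = check_target_object(
    get_remote_sensing=lambda: {"ndvi": 0.31},        # low vegetation index
    get_meteorological=lambda: {"rain_mm_24h": 120},  # heavy rainfall
    identify=lambda r, w: (
        {"preset_state": "flooding"}
        if w["rain_mm_24h"] > 100 and r["ndvi"] < 0.4
        else {"preset_state": None}
    ),
)
print(result["preset_state"])  # flooding
```

Either data source could equally serve as the trigger for acquiring the other, as the two periodic-acquisition variants above describe.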
The first recognition result is used for indicating whether the target object is in a preset state or not.
The preset state represents an abnormal state, and the preset state can be a poor growth condition, a drought condition, flooding, pest and disease damage and the like by taking crops as an example. Taking a building as an example, the preset state may be a collapse of the building, a tilt of the building, or the like.
The preset state can also represent a state requiring manual or machine intervention operation, and taking crops as an example, the preset state can be a state in which the crops need pollination, a state in which the crops need pruning, a state in which the crops are mature and need harvesting, and the like. Taking a building as an example, the preset state may be building construction.
In an optional embodiment, the first remote sensing data and the first meteorological data may be identified to determine whether the target object is in a preset state. If it is, the preset state may be queried using a knowledge graph model to obtain guidance information for dealing with that state, so that the user can respond to the preset state of the target object according to the guidance information.
Taking an agricultural scene as an example, the first remote sensing data of the crops and the first meteorological data of the area where the crops are located can be identified in combination with the current time information to determine whether the crops are abnormal, for example, whether the crops show poor growth, drought, flooding, pests and diseases and the like at the corresponding time. If the crops are abnormal, the abnormal condition can be queried using a knowledge graph model to obtain guidance information for dealing with it, so that the user can respond to the abnormal condition of the crops according to the guidance information. The first remote sensing data of the crops and the first meteorological data of the area can also be identified to determine whether the crops can be harvested; if so, the harvesting method can be queried using the knowledge graph model and target guidance information output, so that the farmer can harvest the crops according to the harvesting method in the target guidance information.
In another optional embodiment, the first remote sensing data and the first meteorological data may be identified in combination with the current time information to determine whether the target object is in a preset state, where the preset state may be determined by the current time information.
Optionally, a correspondence between time and preset state may be set according to the state the target object should present in different periods, so that the preset state corresponding to the current time information can be obtained. If the current state of the target object matches that preset state, the first recognition result may be queried using the knowledge graph model to obtain guidance information for the preset state, guiding the first user to perform the corresponding operation.
For example, if the target object reaches a mature state during the harvesting period, the target object may be queried about the harvesting manner through the knowledge graph model, so as to guide the farmer to harvest the target object through the harvesting manner.
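The time-to-preset-state correspondence above can be sketched as a simple lookup. The period names and state labels below are illustrative assumptions, not values from the patent.

```python
# Illustrative mapping from calendar period to the state the crop "should"
# present, as described above. Names and values are assumptions for the sketch.
PRESET_STATE_BY_PERIOD = {
    "flowering": "needs_pollination",
    "harvest":   "ready_for_harvest",
}

def preset_state_for(period):
    return PRESET_STATE_BY_PERIOD.get(period)

def should_trigger_guidance(period, current_state):
    # Trigger the knowledge-graph query only when the observed state matches
    # the preset state expected for the current period.
    return current_state == preset_state_for(period)

print(should_trigger_guidance("harvest", "ready_for_harvest"))  # True
```

Only when the observed state matches the expectation for the period would the knowledge graph model be queried for guidance, as in the harvesting example above.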
The target guidance information may be displayed in the form of operation steps, with the operation steps presented as a list so that the user can view them conveniently.
The target guidance information can also be displayed in the modes of voice broadcasting, pictures, characters, image-text combination, videos and the like.
In an alternative embodiment, the target guidance information may be generated by querying the first recognition result through a knowledge graph model.
The knowledge graph model may be built by first constructing the concept schema layer of the knowledge graph from existing knowledge, and then completing the construction of the knowledge graph by extracting knowledge through few-shot learning.
The first user may be a user associated with the target object, wherein the first user may be a user for managing the target object. In an agricultural scenario, the first user may be a farmer; in a construction scenario, the first user may be a construction worker or a construction manager.
In an optional embodiment, when the target object is in the preset state, the preset state of the target object may be queried using the knowledge graph model, so that the model generates guidance information for responding to that state. The first user can then perform the corresponding operation on the target object according to the target guidance information, managing the target object more effectively.
Taking as an example that crops suffer from pests and diseases or grow poorly, the abnormal condition of crops is explained as follows. First remote sensing data and first meteorological data of the crops are acquired, where the first remote sensing data is a current remote sensing image of the crops and the first meteorological data is meteorological data of the area where the crops are located. The first remote sensing data and the first meteorological data are identified to judge whether the crops suffer from pests and diseases or grow poorly. If such a condition occurs, the knowledge graph model is used to query a solution for the pests and diseases or the poor growth, target guidance information is generated, and the target guidance information is output according to the target operation flow. The target guidance information guides the farmer, by displaying operation steps, in how to deal with the pests and diseases or the poor growth of the crops.
Taking as an example that the preset state is that crops need pollination or harvesting: first remote sensing data and first meteorological data of the crops are acquired, where the first remote sensing data is a current remote sensing image of the crops and the first meteorological data is meteorological data of the area where the crops are located. The first remote sensing data and the first meteorological data are identified according to the current period to judge whether the crops can be pollinated or harvested. If so, the knowledge graph model is used to query the pollination method or harvesting method of the crops, target guidance information is generated, and the target guidance information is output according to the target operation flow. The target guidance information guides the farmer, by displaying operation steps, to pollinate or harvest the crops.
Taking a building scene as an example: when the preset state is that a building collapses or catches fire, first remote sensing data and first meteorological data of the building are acquired and identified to judge whether the building has collapsed in rainy weather or caught fire in dry weather. If such a condition occurs, the knowledge graph model is used to query a solution for the collapse or fire, target guidance information is generated, and the target guidance information is output according to the target operation flow. The target guidance information may display the telephone numbers of relevant personnel to call for handling the situation, or may guide the user to escape or perform self-rescue.
In the above embodiment of the present application, querying the first identification result to generate target guidance information includes: generating target early warning information based on the first recognition result; outputting the target early warning information; receiving first feedback information, where the first feedback information is obtained by the first user acquiring information about the target object according to the target early warning information; and generating the target guidance information based on the first feedback information and the target early warning information.
The target early warning information may be that the early warning target object is currently in a preset state, and is used for reminding a user to perform corresponding operation. The target early warning information can be pictures, videos, characters, voice and the like.
The first feedback information may be obtained by the first user photographing the target object, recording its sound, recording a video of it, or providing a relevant description of it.
In an optional embodiment, the target early warning information may be sent to the terminal corresponding to the first user as text, voice, image or similar information, so that the first user can acquire information about the target object upon receiving the warning. The target early warning information and the acquired content are then queried together, which enriches the query input and yields more accurate guidance information.
In another optional embodiment, when the target object is in the preset state, target early warning information may be generated according to that state and output, so that the first user can take corresponding protection or early warning measures. Knowing from the warning content that the early-warned object is the target object, the user can go to the area where the target object is located, acquire information about it, and obtain first feedback information. The knowledge graph model can then derive more accurate guidance information from the target early warning information and the first feedback information, enabling the user to perform the operation corresponding to the preset state.
Taking an agricultural scene as an example: when the first recognition result is that the crops are in a pollination state, pollination early warning information can be generated and output to the farmer's client. After seeing it, the farmer knows the target object currently needs pollination. If the farmer does not know how to pollinate the crops in their current growth condition, the farmer can photograph the crops; first feedback information is generated from the photographs, so that the knowledge graph model can query based on the pollination early warning information and the detailed crop pictures, and output the steps guiding the farmer to perform the pollination operation, thereby guiding the first user more accurately.
The system has the functions of farm-work pushing and guided problem discovery. Inexperienced farmers may find it hard even to ask practically useful questions, so the ability to actively initiate interaction is needed to really help them solve problems. The scheme initiates interaction mainly in two ways: active farm-work pushing and guided problem discovery. Active farm-work pushing sends timely information to the farmer based on farm-work time points, meteorological data and remote sensing data; if the farm work requires the farmer's feedback before it is considered complete, the timed push terminates only on that feedback. For guided problem discovery, the scheme can discover abnormal crop states from remote sensing data and the like; if growth does not meet expectations, it sends assignment information asking the farmer to photograph the crops, determines the problem from the photos, and guides the corresponding farming operations.
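The warning, feedback and guidance steps above can be sketched as a small pipeline. The callbacks stand in for the terminal and the knowledge graph model; every name and return value here is an illustrative assumption.

```python
# Minimal sketch of the active-interaction flow: emit an early warning, collect
# the farmer's photo as first feedback, then query guidance from both inputs.

def active_guidance(recognition_result, collect_feedback, query_graph):
    warning = f"warning: target object in state '{recognition_result}'"
    feedback = collect_feedback(warning)       # e.g. photo taken on site
    return query_graph(warning=warning, feedback=feedback)

guidance = active_guidance(
    "pest_damage",
    collect_feedback=lambda w: {"photo": "leaf_closeup.jpg"},
    query_graph=lambda warning, feedback: [
        "Step 1: identify pest from photo",
        "Step 2: apply recommended pesticide",
    ],
)
print(guidance[0])  # Step 1: identify pest from photo
```

The point of the design, per the text above, is that the warning plus the collected feedback gives the graph query richer input than the warning alone.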
In the above embodiment of the present application, querying the first feedback information and the target early warning information by using the knowledge graph model and outputting the target guidance information includes: receiving second feedback information, where the second feedback information is a question about the target object raised according to the target early warning information; and outputting the target guidance information based on the first feedback information, the second feedback information and the target early warning information.
The second feedback information may be a question the first user raises about the target early warning information after seeing it. Taking a crop pollination scene as an example, when the target early warning information warns that crops need pollination, the user may ask, based on that information, how to pollinate the crops and what operations need to be performed after pollination.
In an optional embodiment, after the first feedback information is received, the first user may raise related questions about the preset state of the target object, and second feedback information is generated from those questions. The knowledge graph model can then query the first feedback information, the second feedback information and the target early warning information together and, combined with the questions fed back by the user, obtain target guidance information that better meets the first user's needs, providing convenience for the first user.
In an optional embodiment, the target early warning information may be output after being generated according to the first recognition result, and the first user may simply raise related questions about it, so that the knowledge graph model can query target guidance information that better meets the first user's current needs.
In the above embodiment of the present application, the method further includes: acquiring second remote sensing data and second meteorological data of the target object by using the knowledge graph model, wherein the second remote sensing data is used for representing the remote sensing data of the target object acquired at the preset moment, and the second meteorological data is used for representing the meteorological data of the region where the target object is located acquired at the preset moment; determining a target time period and target operation based on the preset time, the second remote sensing data and the second meteorological data; generating target task information based on the target time period and the target operation, wherein the target task information is used for guiding a first user to execute the target operation on the target object in the target time period; and outputting the target task information.
The preset time can be preset by a user, and in an agricultural scene, the preset time can be the time corresponding to each time period when crops need to be subjected to agricultural operation. In a building scene, the preset time may be a time corresponding to each node of the construction period.
The target task information can be displayed in the form of text, images, events, videos, voice and the like.
In an optional embodiment, the knowledge graph model may be used to acquire the second remote sensing data and second meteorological data of the target object at the preset time, and the target operation to be performed on the target object and the target time period may be determined from the preset time, the second remote sensing data and the second meteorological data. Target task information is then generated from the target time period and the target operation, so that the first user can perform the target operation on the target object within the target time period according to the target task information, which may be output to the first user's terminal. The target task information may be output at a selected moment within the target time period according to the operation requirements of the target operation, so that the first user can perform the target operation on the target object directly upon receiving it.
Taking an agricultural scene as an example: the second remote sensing data and second meteorological data of the crops can be acquired during the period when the crops are to be harvested (that is, the preset time). The target time period in which the crops can be harvested is judged from these data, and the operation required for harvesting, that is, the target operation, is queried using the knowledge graph model. Target task information is generated from the target time period and the target operation and output, so that the first user is notified via the target task information to harvest the crops within the time period in which they can be harvested.
In the above embodiment of the present application, outputting the target task information includes: acquiring a plurality of third remote sensing data and a plurality of third meteorological data of the target object in a target time period, where the target time period includes a plurality of moments and the plurality of third remote sensing data and the plurality of third meteorological data correspond to the plurality of moments; identifying the plurality of third remote sensing data and the plurality of third meteorological data to obtain a second identification result, where the second identification result indicates whether the plurality of third remote sensing data and the plurality of third meteorological data meet a preset condition; determining a target time in the target time period based on the second identification result, where the target time is a moment corresponding to third remote sensing data and third meteorological data that meet the preset condition; and outputting the target task information based on the target time.
The plurality of third remote sensing data and the plurality of third meteorological data may be remote sensing data and meteorological data corresponding to a plurality of moments of the target time period.
The above-described target time may be a time suitable for a target operation on a target object. Taking an agricultural scene as an example, the target time may be a time corresponding to a period when the weather is clear or comfortable.
In an optional embodiment, the third remote sensing data and third meteorological data at multiple moments may be identified within the target time period to obtain the second identification result. If the third remote sensing data and third meteorological data at the current moment are identified as meeting the preset condition, the current moment may be determined as the target time and the target task information output at that moment, so that the user can perform the target operation on the target object then.
Taking an agricultural scene as an example: the plurality of third remote sensing data and third meteorological data within the target time period in which the crops need to be harvested can be acquired and identified, and a relatively clear day found on which to remind the farmer to harvest. The target task information can be output to the user on that clear day so the farmer can harvest the crops according to it.
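Selecting the target time within the target time period can be sketched as a scan over per-day meteorological data. The dates, fields and the no-rain condition are illustrative assumptions for the sketch.

```python
# Sketch of target-time selection: scan the per-day meteorological data in the
# harvest window and push the task on the first day meeting the preset
# condition (here: no rain).

def pick_target_time(days):
    for day, weather in days:
        if weather["rain_mm"] == 0:   # preset condition: clear weather
            return day
    return None                       # no suitable day yet; keep waiting

window = [("06-01", {"rain_mm": 12}),
          ("06-02", {"rain_mm": 3}),
          ("06-03", {"rain_mm": 0})]
print(pick_target_time(window))  # 06-03
```

In the harvesting example above, the task information would be pushed on the returned day rather than at the start of the window.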
In the above embodiment of the present application, the method further includes: acquiring first question information of a second user; identifying the first question information to obtain a third identification result, wherein the third identification result is used for representing the type of the first question information; performing intention recognition on the first question information based on the third recognition result to obtain a target query text; the target query text is queried to generate target guide information, wherein the target guide information is also used for guiding a second user to execute the operation corresponding to the first question information; target guidance information is output based on the target workflow.
The second user may be a user who needs to ask a question.
The first question information may be a question asked by the user, wherein the first question information may be presented in the form of a picture, a video, a text, or the like. The first question information may be sent to a chat box, and the target guidance information may be fed back to the chat box.
The target query text described above may be structured query text.
In an optional embodiment, the first question information of the second user may be acquired and identified to determine its category, and intention identification is then performed on the first question information according to that category to identify its query intention. A structured target query text is constructed from the identified intention and queried using the knowledge graph model to obtain target guidance information corresponding to the first question information, so that the second user can perform the operation corresponding to the first question information according to the target guidance information, which may be output through the target operation flow.
In another optional embodiment, when the first question information is a picture, intention recognition can be performed on the picture. If a flower is shown in the picture, the recognized intention may be how to pollinate the flower; a target query text is constructed from "flower" and "pollination" and queried using the knowledge graph model, the guidance information for pollinating the flower is output, and the target guidance information is displayed through the target operation flow.
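The category-then-intention-then-query construction above can be sketched as follows. The rule-based branches stand in for the learned classification and recognition models; every name, label and field is an assumption for the sketch.

```python
# Sketch of turning first question information into a structured target query
# text: determine the category (third recognition result), extract an intent,
# then build the structured query the knowledge graph model would receive.

def build_target_query(question):
    category = "image" if question.get("image") else "text"
    if category == "image":
        entity = question["image_label"]   # e.g. object detected in the picture
        intent = "how_to_pollinate"        # assumed intent for a flower photo
    else:
        entity, intent = question["entity"], question["intent"]
    return {"entity": entity, "intent": intent}  # structured target query text

query = build_target_query({"image": "flower.jpg", "image_label": "flower"})
print(query)  # {'entity': 'flower', 'intent': 'how_to_pollinate'}
```

The returned structure corresponds to the "flower" plus "pollination" query text in the picture example above.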
At present, plant encyclopedia websites only support queries by Chinese name or alias and cannot form direct question answering. They provide answers to simple questions, but for complex questions such as "other species of the same genus as the morning glory" they only return tabulated search results. In addition, existing KBQA systems in the agricultural field solve specific problems within a particular scenario and lack the input of knowledge from a macroscopic perspective. To address this, the knowledge graph model here is built by constructing the concept schema layer of the knowledge graph based on expert knowledge and then extracting knowledge through few-shot learning. Meanwhile, the results of artificial-intelligence analysis of remote sensing data and meteorological data are fused with existing knowledge and incorporated into the knowledge graph model.
On this basis, common question scenarios such as how to prevent pests and diseases, what farm work should be done today, and what pesticide is needed in the young-fruit period can be collected from the real demands of farmer users. An intention classification model is built through few-shot learning to identify the scenario a user's question belongs to; for each scenario, an entity recognition model can be built on small samples to identify the entities and relations in the question, and the answer is fed back to the user based on the knowledge graph model. For example, if the user asks what farm work should be done today, the intention is a query, the entities are "Start of Spring" and "farm work", and the relation is "do". All matching triples are obtained by querying the graph; reasoning shows that fertilization is needed at the Start of Spring, and the specific fertilization method and the required agricultural machinery and materials are obtained from the attributes or relations of the fertilization entity. Finally this information is fused into an answer returned to the user.
In the above embodiment of the present application, outputting target guidance information based on a target workflow includes: displaying each target operation step in the target guidance information according to the target operation flow; receiving target feedback information, wherein the target feedback information is obtained by confirming the target information in the target operation step; and displaying the target multimedia information corresponding to the target feedback information.
The target information may be link information, which is used to jump to a target multimedia information page corresponding to the target operation step.
The target feedback information may be touch information.
In an optional embodiment, a plurality of target operation steps may be displayed according to the target operation flow. When viewing them, the user can click the link information corresponding to a target operation step to generate touch information, and the multimedia information corresponding to that step is called up according to the touch information, which is convenient for the user to view.
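The step-to-multimedia behaviour above can be sketched as follows. The step texts and media file names are illustrative assumptions.

```python
# Sketch of the SOP card behaviour: each target operation step carries link
# information; "touching" a link returns the multimedia attached to that step.
steps = [
    {"text": "Step 1: prepare tools", "media": "tools.jpg"},
    {"text": "Step 2: pollinate by hand", "media": "pollinate.mp4"},
]

def on_touch(step_index):
    # target feedback information (touch) -> target multimedia information
    return steps[step_index]["media"]

print(on_touch(1))  # pollinate.mp4
```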
Fig. 3 is a schematic diagram of a question answering system according to an embodiment of the present application. The system comprises a knowledge data access module, a knowledge graph storage module, a multi-modal question-answer intention recognition module, a knowledge reasoning module and a farm SOP module. The application layer in Fig. 3 comprises a farming instruction/early warning module and a question-and-answer robot; the service layer comprises a knowledge graph service and a question-and-answer service based on the knowledge graph; the model layer comprises an entity recognition model, an intention classification model and a text matching model; the algorithm layer comprises a text classification algorithm (BERT-TextCNN), a sequence labeling algorithm (BERT-LSTM-CRF), a sentence encoder (Sentence Embedding), a bag-of-words model / term frequency-inverse document frequency index (BOW/TF-IDF), a language model (BERT) and an object storage service (CoSNET); and the data layer comprises raw text corpora, remote sensing images, knowledge triples and remote sensing imagery, backed by an enterprise graph data storage engine (graphd8/Neo4j).
The knowledge data access module mainly supports importing various knowledge-related data, such as remote sensing images, agricultural documents and structured tables. The knowledge graph storage module mainly stores the existing graph information, including but not limited to text, videos, pictures and voice. Most importantly, exploiting the nature of the graph, the farm-work SOP flow is stored in the graph decomposed into nodes and relations, and a node is returned directly when an SOP is queried, which greatly reduces the maintenance cost and query overhead of SOPs.
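Storing an SOP as nodes linked by "next" relations, so that one node lookup recovers the whole flow, can be sketched as follows. A dict stands in for the graph engine; every identifier is an illustrative assumption.

```python
# Sketch of an SOP decomposed into graph nodes and relations: an entry node
# points at the first step, and each step node carries a "next" relation.
NODES = {
    "sop:pollination": {"first": "s1"},
    "s1": {"text": "inspect flowers", "next": "s2"},
    "s2": {"text": "transfer pollen", "next": None},
}

def load_sop(sop_id):
    """Walk the 'next' relations from the SOP entry node to rebuild the flow."""
    steps, cur = [], NODES[sop_id]["first"]
    while cur is not None:
        node = NODES[cur]
        steps.append(node["text"])
        cur = node["next"]
    return steps

print(load_sop("sop:pollination"))  # ['inspect flowers', 'transfer pollen']
```

In the graph engine the same walk would be a path query from the SOP node, which is why returning "a node" suffices to serve the whole flow.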
The multi-modal question-answer intention recognition module comprises three parts: modal identification, which classifies the request submitted by the user and judges whether it is an image, text or voice; multi-modal normalization, which applies the corresponding processing to the input request according to the result of modal identification and unifies it into text; and intention classification, which classifies the intention of the normalized text and returns the classification result. The knowledge reasoning module mainly provides the reasoning functions for knowledge question answering and farm-work guidance, and can retrieve and reason out related answers from the graph according to the user's input.
The farm SOP module generates a standard-operation SOP card from the result of the reasoning module, attaching picture and video guidance to each step to provide all-round planting guidance for farmers; SOP generation involves several sub-modules.
Fig. 4 is a schematic diagram of a question-answering system according to an embodiment of the present application. As shown in fig. 4, it mainly comprises a passive KBQA module and an active interaction module.
The KBQA module allows the user to input in three modalities: text, image and voice. After multi-modal processing, it performs intention identification and entity identification, constructs a query statement, finds the answer in the graph, and returns it to the user.
The active interaction module has two functions: early warning with guided problem discovery, and farm-work pushing. The early warning and guided problem discovery function confirms conditions such as crop growth and pests and diseases from time information, meteorological information, remote sensing information and the like. If a condition is abnormal, the user can be reminded and guided to carry out more detailed information acquisition, such as on-site photographing and describing the crop state; after feedback, a solution is retrieved from the graph in combination with the warning to guide the user to carry out the corresponding farm work, such as fertilizing to promote growth or preventing pests and diseases. The farm-work pushing function confirms the farm work the user needs in the current period from time information, meteorological information, remote sensing information and the like. For example, if the farm-work SOP in the database indicates that fertilization is required in late March in clear weather, the scheme can select a certain clear day in late March to remind the user in advance of the need to fertilize. The reminder can be sent in the form of a task requiring the user to confirm completion; otherwise a suitable time is selected to push it again.
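The push-until-confirmed behaviour above can be sketched as a loop over suitable days. The callbacks stand in for the terminal and the farmer's confirmation; all dates and messages are illustrative assumptions.

```python
# Sketch of the farm-work push loop: push the task on a suitable day and stop
# only once the farmer confirms completion, otherwise re-push at the next
# suitable time.

def push_until_confirmed(suitable_days, push, confirmed):
    for day in suitable_days:
        push(day, "fertilization required")
        if confirmed(day):
            return day            # timed push terminates on confirmation
    return None

sent = []
day = push_until_confirmed(
    ["03-20", "03-24", "03-28"],
    push=lambda d, msg: sent.append(d),
    confirmed=lambda d: d == "03-24",   # farmer confirms on the second push
)
print(day, sent)  # 03-24 ['03-20', '03-24']
```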
Fig. 5 is a schematic diagram of an SOP interaction in which a user asks how to perform pollination, according to an embodiment of the application. The question-answering robot replies "Step 1: xxx"; the user sends "next step" and the robot replies with an image; the user sends "next step" again and the robot replies with a video.
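The step-by-step dialog of Fig. 5 amounts to walking a cursor along the serially connected SOP nodes. The following sketch is illustrative only (the class name and node shape are assumptions, not from the application):

```python
# Minimal sketch of the Fig. 5 dialog: each "next step" from the user
# advances to the next SOP node and returns that node's content, preferring
# attached media (image/video) over plain text when available.
class SOPSession:
    def __init__(self, nodes):
        self.nodes = nodes   # list of dicts: {"text": ..., "media": ...}
        self.index = -1      # before the first step

    def next_step(self):
        self.index += 1
        if self.index >= len(self.nodes):
            return None      # SOP finished
        node = self.nodes[self.index]
        return node.get("media") or node["text"]
```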
The present application includes the following two innovations:
1. Remote sensing data and meteorological data are stored in the knowledge graph as prior knowledge, for use in daily inference or SOP retrieval. Meanwhile, the growth of the relevant crops is monitored based on real-time remote sensing data, and the crop growth stage is inferred from the intelligent analysis results; the robot then pushes or issues warnings about the relevant crops and SOP operations. This turns passive response into active outreach and guidance for the farmer's daily crop management.
2. Prior knowledge from remote sensing images is incorporated into the traditional knowledge graph system, and, with the aid of input from human experts, traditional farming steps are decomposed into operable SOP flow nodes. Because of its flow-like structure, an SOP naturally fits the framework of graph storage; the pre-generated SOP nodes are written into the graph and then returned to the user through a service or via the KBQA robot.
Through multi-modal recognition technology, the method and the device can also support voice and picture input on the user side, extending traditional KBQA question answering beyond its text-only input form. Meanwhile, after reasoning over and retrieving the data, the returned result is decomposed into the corresponding SOP guidance, supplemented with pictures and videos, to guide farmers through the farming operations.
It should be noted that for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that acts or modules referred to are not necessarily required for this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
Example 2
There is also provided, in accordance with an embodiment of the present application, an embodiment of an information processing method. The flowcharts in the figures show steps that may be performed in a computer system, such as by a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
Fig. 6 is a flowchart of an information processing method according to embodiment 2 of the present application, and as shown in fig. 6, the method may include the steps of:
step S602, farm work operation information and farm work multimedia information of the target crop are obtained.
Step S604, extracting the farm work operation information to obtain a plurality of farm work operation steps of the farm work operation information and a target extraction order of the plurality of farm work operation steps.
Step S606, generating a target operation flow based on the target crop, the plurality of farming operation steps, the target extraction sequence and the farming multimedia information.
The target operation process comprises a plurality of nodes connected in series, and the nodes correspond to the farm work operation steps.
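Steps S602 to S606 can be sketched as follows. This is an illustrative sketch only: the function name, the semicolon-delimited step format, and the node dictionary shape are assumptions, not the application's implementation.

```python
# Illustrative sketch of S602-S606: split the farming operation text into
# steps in extraction order, attach matching multimedia to each step as a
# node, and chain the nodes in series via a "next" pointer.
def generate_workflow(crop, operation_text, media_by_step):
    steps = [s.strip() for s in operation_text.split(";") if s.strip()]
    nodes = []
    for order, step in enumerate(steps):          # target extraction order
        nodes.append({
            "order": order,
            "step": step,
            "media": media_by_step.get(step),     # picture/video guidance
            "next": order + 1 if order + 1 < len(steps) else None,
        })
    return {"target": crop, "nodes": nodes}       # serially connected flow
```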
It should be noted that the preferred embodiments described in the foregoing examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 3
There is also provided, in accordance with an embodiment of the present application, an embodiment of an information processing method. The flowcharts in the figures show steps that may be performed in a computer system, such as by a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
Fig. 7 is a flowchart of an information processing method according to embodiment 3 of the present application, and as shown in fig. 7, the method may include the steps of:
step S702, obtaining the target operation information and the target multimedia information of the target building.
Step S704, extracting the target operation information, to obtain a plurality of target operation steps of the target operation information and a target extraction order of the plurality of target operation steps.
Step S706, generating a target operation flow based on the target building, the plurality of target operation steps, the target extraction sequence and the target multimedia information.
The target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of target operation steps.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 4
There is also provided in accordance with an embodiment of the present application an embodiment of an information processing method, with the understanding that the flow charts in the accompanying figures show steps that may be implemented in a computer system such as a set of computer-executable instructions, and that although a logical order is shown in the flow charts, in some cases, the steps shown or described may be performed in a different order than presented herein.
Fig. 8 is a flowchart of an information processing method according to embodiment 4 of the present application, and as shown in fig. 8, the method may include the steps of:
step S802, the cloud server obtains target operation information and target multimedia information of the target object.
Step S804, the cloud server extracts the target operation information, and obtains a plurality of target operation steps of the target operation information and a target extraction order of the plurality of target operation steps.
In step S806, the cloud server generates a target workflow based on the target object, the plurality of target operation steps, the target extraction order, and the target multimedia information.
The target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of target operation steps.
It should be noted that the preferred embodiments described in the foregoing examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 5
According to an embodiment of the present application, there is also provided an information processing apparatus for implementing the information processing method described above, and fig. 9 is a schematic diagram of an information processing apparatus according to embodiment 5 of the present application, and as shown in fig. 9, the apparatus 900 includes: an acquisition module 902, an extraction module 904, and a generation module 906.
The acquisition module is used for acquiring target operation information and target multimedia information of a target object; the extraction module is used for extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; the generation module is used for generating a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
It should be noted here that the obtaining module 902, the extracting module 904, and the generating module 906 correspond to steps S202 to S206 in embodiment 1; the three modules and their corresponding steps share the same implementation examples and application scenarios, but are not limited to the disclosure of embodiment 1 above. It should also be noted that these modules, as part of the apparatus, may run in the computer terminal 10 provided in embodiment 1.
In the above embodiments of the present application, the generating module includes an association unit, a concatenation unit, and a generation unit.
The association unit is used for associating the plurality of target operation steps with the target multimedia information to generate the plurality of nodes; the concatenation unit is used for connecting the plurality of nodes in series based on the target extraction order to obtain a concatenation result; the generation unit is used for generating the target workflow based on the target object and the concatenation result.
In the above embodiments of the present application, the association unit includes an identification subunit and an association subunit.
The identification subunit is used for identifying the target multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the target multimedia information; the association subunit is used for associating the target operation steps with the target multimedia information based on a preset priority and the identification result to generate the plurality of nodes, wherein the preset priority indicates the display priority of different categories of target multimedia information when the nodes are output.
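The priority-based association can be illustrated with a small sketch. Everything here is assumed for illustration: the priority table, the extension-based classifier (a stand-in for the real multimedia recognition step), and the node shape.

```python
# Hedged sketch of the association subunit: classify each piece of multimedia,
# then attach the items to a step ordered by a preset display priority per
# category, so the node's media are shown in that order when output.
PRESET_PRIORITY = {"text": 0, "image": 1, "video": 2}   # assumed ordering

def classify(media_name):
    # stand-in for the real recognition step: infer category from extension
    if media_name.endswith((".png", ".jpg")):
        return "image"
    if media_name.endswith(".mp4"):
        return "video"
    return "text"

def build_node(step, media_items):
    attached = sorted(media_items, key=lambda m: PRESET_PRIORITY[classify(m)])
    return {"step": step, "media": attached}   # media in display-priority order
```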
In the above embodiment of the present application, the apparatus further includes: the device comprises an identification module, a query module and an output module.
The acquisition module is used for acquiring first remote sensing data and first meteorological data of a target object, wherein the first meteorological data are meteorological data of an area where the target object is located; the identification module is used for identifying the first remote sensing data and the first meteorological data to obtain a first identification result, wherein the first identification result is used for indicating whether the target object is in a preset state or not; the query module is used for querying the first identification result and generating target guide information, wherein the target guide information is used for guiding the first user to execute operation corresponding to a preset state; the output module is used for outputting target guidance information based on the target operation process.
In the above embodiment of the present application, the query module includes a generating unit, a first output unit, and a first receiving unit.
The generating unit is used for generating target early warning information based on the first recognition result; the first output unit is used for outputting target early warning information; the first receiving unit is used for receiving first feedback information, wherein the first feedback information is obtained by a first user through information acquisition on a target object according to target early warning information; the generating unit is further used for generating target guide information based on the first feedback information and the target early warning information.
In the above embodiment of the present application, the generating unit includes a receiving subunit and a generating subunit.
The receiving subunit is configured to receive second feedback information, where the second feedback information is a problem associated with the target object, which is provided according to the target early warning information; the generating subunit is configured to generate target guidance information based on the first feedback information, the second feedback information, and the target early warning information.
In the above embodiment of the present application, the apparatus further includes: and determining a module.
The acquisition module is further used for acquiring second remote sensing data and second meteorological data of the target object, wherein the second remote sensing data are used for representing the remote sensing data of the target object acquired at the preset moment, and the second meteorological data are used for representing the meteorological data of the area where the target object is located acquired at the preset moment; the determining module is further used for determining a target time period and target operation based on the preset time, the second remote sensing data and the second meteorological data; the generating module is further used for generating target task information based on the target time period and the target operation, wherein the target task information is used for guiding the first user to execute the target operation on the target object in the target time period; the output module is also used for outputting the target task information.
In the above embodiments of the present application, the output module includes an acquisition unit, an identification unit, a determination unit, and an output unit.
The acquisition unit is used for acquiring a plurality of third remote sensing data and a plurality of third meteorological data of a target object in a target time period, wherein the target time period comprises a plurality of moments, and the plurality of third remote sensing data and the plurality of third meteorological data correspond to the plurality of moments; the identification unit is used for identifying the plurality of third remote sensing data and the plurality of third meteorological data to obtain a second identification result, wherein the second identification result is used for indicating whether the plurality of third remote sensing data and the third meteorological data meet the preset condition or not; the determining unit is used for determining a target moment in a target time period based on the second recognition result, wherein the target moment is a moment corresponding to third remote sensing data and third meteorological data which meet preset conditions in the target time period; the output unit is used for outputting the target task information based on the target time.
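The determination unit's selection of a target moment can be sketched as a simple scan over the period. This is illustrative only; the function name, the parallel-list data shapes, and the condition callback are assumptions.

```python
# Illustrative sketch of the determination unit: within the target time
# period, find the first moment whose remote sensing and meteorological
# readings both satisfy the preset condition; the task is output then.
def find_target_moment(moments, remote_data, weather_data, condition):
    """Assumed shapes: three parallel lists indexed by moment, plus a
    condition(remote, weather) -> bool predicate."""
    for i, moment in enumerate(moments):
        if condition(remote_data[i], weather_data[i]):
            return moment
    return None  # no moment in the period satisfies the preset condition
```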
In the above embodiment of the present application, the obtaining module is further configured to obtain first question information of a second user; the identification module is further used for identifying the first question information to obtain a third identification result, wherein the third identification result is used for representing the type of the first question information; the identification module is also used for carrying out intention identification on the first question information based on the third identification result to obtain a target query text; the query module is further used for querying the target query text to generate target guidance information, wherein the target guidance information is further used for guiding a second user to execute the operation corresponding to the first question information; the output module is further configured to output the target guidance information based on the target workflow.
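The question-answering path (type recognition, then intent recognition producing a query text, then graph lookup) can be caricatured as follows. This is a deliberately crude stand-in, not the application's pipeline: the keyword-based "intent recognition" and the dict-as-graph are illustrative assumptions only.

```python
# Minimal sketch of the question-answering path: detect the input type,
# derive a query text from the question (a stand-in for real intent and
# entity recognition), and look the guidance up in a graph-like store.
def answer_question(question, graph):
    kind = "text" if isinstance(question, str) else "other"  # type recognition
    if kind != "text":
        return None      # image/voice would first pass multi-modal processing
    # crude intent/entity stand-in: use the last keyword as the query text
    query = question.rstrip("?").split()[-1].lower()
    return graph.get(query)   # guidance retrieved from the graph
```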
In the above embodiment of the present application, the output module includes: display element, second receiving element.
The display unit is used for displaying each target operation step in the target guide information according to the target operation flow; the second receiving unit is used for receiving target feedback information, wherein the target feedback information is obtained by confirming the target information in the target operation step; the display unit is also used for displaying the target multimedia information corresponding to the target feedback information.
It should be noted that the preferred embodiments described in the foregoing examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 6
According to an embodiment of the present application, there is also provided an information processing apparatus for implementing the above-described information processing method, and fig. 10 is a schematic diagram of an information processing apparatus according to embodiment 6 of the present application, as shown in fig. 10, the apparatus 1000 including: an acquisition module 1002, an extraction module 1004, and a generation module 1006.
The acquisition module is used for acquiring farming operation information and farming multimedia information of a target crop; the extraction module is used for extracting the farming operation information to obtain a plurality of farming operation steps of the farming operation information and a target extraction order of the plurality of farming operation steps; the generation module is used for generating a target workflow based on the target crop, the plurality of farming operation steps, the target extraction order, and the farming multimedia information, wherein the target workflow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of farming operation steps.
It should be noted here that the obtaining module 1002, the extracting module 1004, and the generating module 1006 correspond to steps S602 to S606 in embodiment 2, and the implementation examples and application scenarios of the three modules and the corresponding steps are the same, but are not limited to the disclosure in embodiment 1, and it should be noted that the modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of a tool.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 7
According to an embodiment of the present application, there is also provided an information processing apparatus for implementing the above-described information processing method, and fig. 11 is a schematic diagram of an information processing apparatus according to embodiment 7 of the present application, as shown in fig. 11, the apparatus 1100 includes: an obtaining module 1102, an extracting module 1104 and a generating module 1106.
The acquisition module is used for acquiring target operation information and target multimedia information of a target building; the extraction module is used for extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; the generation module is used for generating a target operation flow based on the target building, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
Here, it should be noted that the obtaining module 1102, the extracting module 1104, and the generating module 1106 correspond to steps S702 to S706 in embodiment 3, and the implementation examples and application scenarios of the three modules and the corresponding steps are the same, but are not limited to the disclosure in embodiment 1, and it should be noted that the modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of a tool.
It should be noted that the preferred embodiments described in the foregoing examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 8
According to an embodiment of the present application, there is also provided an information processing apparatus for implementing the above-described information processing method, and fig. 12 is a schematic diagram of an information processing apparatus according to embodiment 8 of the present application, as shown in fig. 12, the apparatus including: an obtaining module 1202, an extracting module 1204, and a generating module 1206.
The acquisition module is used for acquiring target operation information and target multimedia information of a target object through the cloud server; the extraction module is used for extracting the target operation information through the cloud server to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; the generation module is used for generating a target operation flow based on a target object, a plurality of target operation steps, a target extraction sequence and target multimedia information through the cloud server, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of target operation steps.
Here, it should be noted that the obtaining module 1202, the extracting module 1204, and the generating module 1206 correspond to steps S802 to S806 in embodiment 4, and the implementation examples and application scenarios of the three modules and the corresponding steps are the same, but are not limited to the disclosure in embodiment 1, and it should be noted that the modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of a tool.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 9
The embodiment of the application can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the information processing method: acquiring target operation information and target multimedia information of a target object; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
Optionally, fig. 13 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 13, the computer terminal A may include: one or more processors (only one is shown) and a memory.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the information processing method and apparatus in the embodiments of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the information processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, and these remote memories may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring target operation information and target multimedia information of a target object; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
Optionally, the processor may further execute the program code of the following steps: associating the target operation steps with the target multimedia information to generate a plurality of nodes; connecting a plurality of nodes in series based on the target extraction sequence to obtain a series result; a target workflow is generated based on the target object and the concatenation result.
Optionally, the processor may further execute the program code of the following steps: identifying the target multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the target multimedia information; and associating the target operation steps with the target multimedia information based on the preset priority and the identification result to generate a plurality of nodes, wherein the preset priority is used for representing the display priority of the target multimedia information of different categories when the nodes are output.
Optionally, the processor may further execute the program code of the following steps: acquiring first remote sensing data and first meteorological data of a target object, wherein the first meteorological data are meteorological data of an area where the target object is located; identifying the first remote sensing data and the first meteorological data to obtain a first identification result, wherein the first identification result is used for indicating whether the target object is in a preset state or not; inquiring the first identification result to generate target guide information, wherein the target guide information is used for guiding the first user to execute operation corresponding to a preset state; and outputting target guidance information based on the target operation flow.
Optionally, the processor may further execute the program code of the following steps: generating target early warning information based on the first recognition result; outputting target early warning information; receiving first feedback information, wherein the first feedback information is obtained by a first user through information acquisition on a target object according to target early warning information; and generating target guide information based on the first feedback information and the target early warning information.
Optionally, the processor may further execute the program code of the following steps: receiving second feedback information, wherein the second feedback information is a problem related to the target object and is provided according to the target early warning information; and generating target guide information based on the first feedback information, the second feedback information and the target early warning information.
Optionally, the processor may further execute the program code of the following steps: acquiring second remote sensing data and second meteorological data of the target object, wherein the second remote sensing data is used for representing the remote sensing data of the target object acquired at the preset moment, and the second meteorological data is used for representing the meteorological data of the area where the target object is located acquired at the preset moment; determining a target time period and target operation based on the preset time, the second remote sensing data and the second meteorological data; generating target task information based on the target time period and the target operation, wherein the target task information is used for guiding a first user to execute the target operation on the target object in the target time period; and outputting the target task information.
Optionally, the processor may further execute the program code of the following steps: acquiring a plurality of third remote sensing data and a plurality of third meteorological data of a target object in a target time period, wherein the target time period comprises a plurality of moments, and the plurality of third remote sensing data and the plurality of third meteorological data correspond to the plurality of moments; identifying the plurality of third remote sensing data and the plurality of third meteorological data to obtain a second identification result, wherein the second identification result is used for indicating whether the plurality of third remote sensing data and the plurality of third meteorological data meet preset conditions or not; determining a target time in a target time period based on the second identification result, wherein the target time is a time corresponding to third remote sensing data and third meteorological data which meet preset conditions in the target time period; target task information is output based on the target time.
Optionally, the processor may further execute the program code of the following steps: acquiring first question information of a second user; identifying the first question information to obtain a third identification result, wherein the third identification result is used for representing the type of the first question information; performing intention recognition on the first question information based on the third recognition result to obtain a target query text; inquiring the target inquiry text to generate target guidance information, wherein the target guidance information is also used for guiding a second user to execute the operation corresponding to the first question information; target guidance information is output based on the target workflow.
Optionally, the processor may further execute the program code of the following steps: displaying each target operation step in the target guidance information according to the target operation flow; receiving target feedback information, wherein the target feedback information is obtained by confirming the target information in the target operation step; and displaying the target multimedia information corresponding to the target feedback information.
Through the transmission device, the processor may invoke the information and the application program stored in the memory to execute the following steps: acquiring farming operation information and farming multimedia information of a target crop; extracting the farming operation information to obtain a plurality of farming operation steps of the farming operation information and a target extraction sequence of the farming operation steps; and generating a target operation flow based on the target crop, the plurality of farming operation steps, the target extraction sequence and the farming multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of farming operation steps.
Through the transmission device, the processor may invoke the information and the application program stored in the memory to execute the following steps: the cloud server acquires target operation information and target multimedia information of a target object; the cloud server extracts the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and the cloud server generates a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
Through the transmission device, the processor may invoke the information and the application program stored in the memory to execute the following steps: acquiring target operation information and target multimedia information of a target building; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target building, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
In the embodiment of the present application, the target operation information and the target multimedia information of the target object are first acquired; the target operation information is then extracted to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the plurality of target operation steps; and the target operation flow is generated based on the target object, the plurality of target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of target operation steps, so that the operation steps can be better displayed through the target operation flow and conveniently viewed by the user. It is worth noting that the target multimedia information is incorporated in the process of generating the target operation flow, so that clearer answers can be obtained through the target operation flow and more easily understood by the user, thereby solving the technical problem that question-answering approaches in the related art struggle to give effective answers to operational questions.
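As an illustrative aid only (not part of the application), the node-generation and series-connection steps summarized above can be sketched in Python; the `Node` class, the `build_operation_flow` function, and the example step names are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    step: str                                   # one target operation step
    media: list = field(default_factory=list)   # associated target multimedia items
    next: Optional["Node"] = None               # series connection to the next node

def build_operation_flow(target_object, steps, order, media_by_step):
    """Associate each target operation step with its multimedia to generate
    nodes, connect the nodes in series in the target extraction sequence,
    and return the resulting target operation flow."""
    nodes = [Node(step=s, media=media_by_step.get(s, [])) for s in steps]
    ordered = [nodes[i] for i in order]           # apply the target extraction sequence
    for prev, nxt in zip(ordered, ordered[1:]):   # connect nodes in series
        prev.next = nxt
    return {"object": target_object, "head": ordered[0]}

flow = build_operation_flow(
    "apple tree",                                 # hypothetical target object
    ["prune", "spray", "fertilize"],              # extracted operation steps
    [0, 1, 2],                                    # target extraction sequence
    {"prune": ["prune_demo.mp4"]},                # step -> multimedia association
)
```

Traversing `flow["head"].next` then walks the series-connected nodes in extraction order, which is the structure the interactive display relies on.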
It can be understood by those skilled in the art that the structure shown in Fig. 13 is only illustrative, and the computer terminal may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 13 does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components (e.g., a network interface, a display device, etc.) than shown in Fig. 13, or have a different configuration from that shown in Fig. 13.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Example 10
Embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for executing the information processing method provided in the first embodiment above.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring target operation information and target multimedia information of a target object; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
Optionally, the storage medium is further configured to store program code for performing the following steps: associating the target operation steps with the target multimedia information to generate a plurality of nodes; connecting the plurality of nodes in series based on the target extraction sequence to obtain a series result; and generating a target operation flow based on the target object and the series result.
Optionally, the storage medium is further configured to store program code for performing the following steps: identifying the target multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the target multimedia information; and associating the target operation steps with the target multimedia information based on the preset priority and the identification result to generate a plurality of nodes, wherein the preset priority is used for representing the display priority of the target multimedia information of different categories when the nodes are output.
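The priority-based association described above can be sketched as follows; the category names, the preset priority values, and the filename-extension-based `classify` stand-in for the identification step are assumptions for illustration only:

```python
# Lower value = higher display priority; the categories and values are assumed.
PRESET_PRIORITY = {"video": 0, "image": 1, "text": 2}

def classify(item):
    """Stand-in for identifying a multimedia item's category from its name."""
    ext = item.rsplit(".", 1)[-1].lower()
    return {"mp4": "video", "jpg": "image", "txt": "text"}.get(ext, "text")

def associate(step, media_items):
    """Associate a target operation step with its multimedia, ordered by the
    preset display priority of each item's identified category."""
    recognized = [(classify(m), m) for m in media_items]
    recognized.sort(key=lambda pair: PRESET_PRIORITY[pair[0]])  # stable sort
    return {"step": step, "media": [m for _, m in recognized]}

node = associate("spray pesticide", ["howto.txt", "demo.mp4", "dose.jpg"])
```

When the node is output, its multimedia list is already in display-priority order (video before image before text under the assumed priorities).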
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring first remote sensing data and first meteorological data of a target object, wherein the first meteorological data are meteorological data of an area where the target object is located; identifying the first remote sensing data and the first meteorological data to obtain a first identification result, wherein the first identification result is used for indicating whether the target object is in a preset state or not; inquiring the first identification result to generate target guidance information, wherein the target guidance information is used for guiding the first user to execute operation corresponding to a preset state; and outputting target guidance information based on the target operation flow.
Optionally, the storage medium is further configured to store program code for performing the following steps: generating target early warning information based on the first identification result; outputting the target early warning information; receiving first feedback information, wherein the first feedback information is obtained by the first user through information acquisition on the target object according to the target early warning information; and generating the target guidance information based on the first feedback information and the target early warning information.
Optionally, the storage medium is further configured to store program code for performing the following steps: receiving second feedback information, wherein the second feedback information is a question related to the target object raised according to the target early warning information; and generating the target guidance information based on the first feedback information, the second feedback information and the target early warning information.
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring second remote sensing data and second meteorological data of the target object, wherein the second remote sensing data are used for representing the remote sensing data of the target object acquired at the preset moment, and the second meteorological data are used for representing the meteorological data of the area where the target object is located, acquired at the preset moment; determining a target time period and a target operation based on the preset moment, the second remote sensing data and the second meteorological data; generating target task information based on the target time period and the target operation, wherein the target task information is used for guiding a first user to execute the target operation on the target object in the target time period; and outputting the target task information.
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring a plurality of third remote sensing data and a plurality of third meteorological data of a target object in a target time period, wherein the target time period comprises a plurality of moments, and the plurality of third remote sensing data and the plurality of third meteorological data correspond to the plurality of moments; identifying the plurality of third remote sensing data and the plurality of third meteorological data to obtain a second identification result, wherein the second identification result is used for indicating whether the plurality of third remote sensing data and the plurality of third meteorological data meet preset conditions or not; determining a target time in a target time period based on the second identification result, wherein the target time is a time corresponding to third remote sensing data and third meteorological data which meet preset conditions in the target time period; and outputting target task information based on the target time.
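A minimal sketch of determining the target time within the target time period, assuming a made-up preset condition on a vegetation index and rainfall (the thresholds and field names are illustrative, not from the application):

```python
def meets_condition(remote, weather):
    """Assumed preset condition: vegetation index high enough and no rain."""
    return remote["ndvi"] >= 0.6 and weather["rain_mm"] == 0.0

def pick_target_time(moments, remote_series, weather_series):
    """Return the first moment in the target time period whose third remote
    sensing data and third meteorological data meet the preset condition."""
    for t, r, w in zip(moments, remote_series, weather_series):
        if meets_condition(r, w):   # second identification result per moment
            return t
    return None                     # no moment in the period qualifies

target_time = pick_target_time(
    ["06-01", "06-02", "06-03"],
    [{"ndvi": 0.5}, {"ndvi": 0.7}, {"ndvi": 0.8}],
    [{"rain_mm": 0.0}, {"rain_mm": 0.0}, {"rain_mm": 3.2}],
)
```

The target task information would then be output for the returned moment; here the second moment is the first one satisfying both parts of the assumed condition.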
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring first question information of a second user; identifying the first question information to obtain a third identification result, wherein the third identification result is used for representing the type of the first question information; performing intention recognition on the first question information based on the third identification result to obtain a target query text; querying the target query text to generate target guidance information, wherein the target guidance information is also used for guiding the second user to execute the operation corresponding to the first question information; and outputting the target guidance information based on the target operation flow.
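The question-handling pipeline above (type identification, intention recognition, query-text generation, guidance lookup) can be illustrated with a toy sketch; the keyword rules and the `GUIDANCE_DB` contents are invented for demonstration and do not reflect the actual recognition models:

```python
GUIDANCE_DB = {"prune an apple tree": "Cut dead branches first, then thin."}

def classify_question(question):
    """Third identification result: a crude type for the question information."""
    return "operational" if "how" in question.lower() else "factual"

def to_query_text(question, q_type):
    """Intention recognition reduced here to simple keyword stripping."""
    text = question.lower().rstrip("?")
    if q_type == "operational":
        text = text.replace("how do i ", "")
    return text

def answer(question):
    q_type = classify_question(question)
    query = to_query_text(question, q_type)       # target query text
    return GUIDANCE_DB.get(query, "No guidance found.")

guidance = answer("How do I prune an apple tree?")
```

In the described system the lookup result would be rendered as target guidance information through the target operation flow rather than returned as a plain string.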
Optionally, the storage medium is further configured to store program code for performing the following steps: displaying each target operation step in the target guidance information according to the target operation flow; receiving target feedback information, wherein the target feedback information is obtained by confirming the target information in the target operation step; and displaying the target multimedia information corresponding to the target feedback information.
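The interactive node-by-node display described above can be sketched as follows, assuming a boolean confirmation per step stands in for the user's target feedback information:

```python
def walk_flow(steps, confirmations):
    """Display the target operation steps node by node; the next node is
    triggered only when the feedback for the current step confirms it."""
    displayed = []
    for step, confirmed in zip(steps, confirmations):
        displayed.append(step)   # display the current node's operation step
        if not confirmed:        # user feedback did not trigger the next node
            break
    return displayed

shown = walk_flow(["prune", "spray", "fertilize"], [True, False, True])
```

Here the walk stops after the second step because its feedback is negative, mirroring how the next node in the target operation flow is only triggered interactively.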
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: obtaining farming operation information and farming multimedia information of a target crop; extracting the farming operation information to obtain a plurality of farming operation steps of the farming operation information and a target extraction sequence of the farming operation steps; and generating a target operation flow based on the target crop, the plurality of farming operation steps, the target extraction sequence and the farming multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of farming operation steps.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the cloud server acquires target operation information and target multimedia information of a target object; the cloud server extracts the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; the cloud server generates a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring target operation information and target multimedia information of a target building; extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps; and generating a target operation flow based on the target building, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the nodes correspond to the target operation steps.
In the embodiment of the present application, the target operation information and the target multimedia information of the target object are first acquired; the target operation information is then extracted to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the plurality of target operation steps; and the target operation flow is generated based on the target object, the plurality of target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, and the plurality of nodes correspond to the plurality of target operation steps, so that the operation steps can be better displayed through the target operation flow and conveniently viewed by the user. It is worth noting that the target multimedia information is incorporated in the process of generating the target operation flow, so that clearer answers can be obtained through the target operation flow and more easily understood by the user, thereby solving the technical problem that question-answering approaches in the related art struggle to give effective answers to operational questions.
The serial numbers of the above embodiments of the present application are merely for description and do not imply that one embodiment is preferred over another.
In the embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be implemented in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also fall within the protection scope of the present application.

Claims (12)

1. An information processing method, characterized by comprising:
acquiring target operation information and target multimedia information of a target object;
extracting the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps;
generating a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, the nodes correspond to the target operation steps, the target operation flow is triggered by inquiring the target operation steps of the target object, the nodes are used for displaying through interaction with a user, and after the target operation step corresponding to one node is displayed, the user triggers the next node in the target operation flow in an interactive mode;
wherein generating the target operation flow based on the target object, the plurality of target operation steps, the target extraction sequence and the target multimedia information comprises:
associating the target operation steps with the target multimedia information to generate a plurality of nodes;
connecting the plurality of nodes in series based on the target extraction sequence to obtain a series result;
generating the target operation flow based on the target object and the series result;
wherein, associating the target operation steps with the target multimedia information to generate a plurality of nodes comprises:
identifying the target multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the target multimedia information;
and associating the target operation steps with the target multimedia information based on preset priorities and the identification result to generate the nodes, wherein the preset priorities are used for representing the display priorities of the target multimedia information of different categories when the nodes are output.
2. The method of claim 1, further comprising:
acquiring first remote sensing data and first meteorological data of the target object, wherein the first meteorological data are meteorological data of an area where the target object is located;
identifying the first remote sensing data and the first meteorological data to obtain a first identification result, wherein the first identification result is used for indicating whether the target object is in a preset state or not;
querying the first identification result to generate target guidance information, wherein the target guidance information is used for guiding a first user to execute an operation corresponding to the preset state;
outputting the target guidance information based on the target operation flow.
3. The method of claim 2, wherein querying the first identification result to generate target guidance information comprises:
generating target early warning information based on the first identification result;
outputting the target early warning information;
receiving first feedback information, wherein the first feedback information is obtained by the first user performing information acquisition on the target object according to the target early warning information;
and generating the target guide information based on the first feedback information and the target early warning information.
4. The method of claim 3, wherein generating the target guidance information based on the first feedback information and the target early warning information comprises:
receiving second feedback information, wherein the second feedback information is a question related to the target object raised according to the target early warning information;
and generating the target guidance information based on the first feedback information, the second feedback information and the target early warning information.
5. The method of claim 2, further comprising:
acquiring second remote sensing data and second meteorological data of the target object, wherein the second remote sensing data are used for representing the remote sensing data of the target object acquired at a preset moment, and the second meteorological data are used for representing the meteorological data of an area where the target object is located acquired at the preset moment;
determining a target time period and a target operation based on the preset moment, the second remote sensing data and the second meteorological data;
generating target task information based on the target time period and the target operation, wherein the target task information is used for guiding the first user to execute the target operation on the target object in the target time period;
and outputting the target task information.
6. The method of claim 5, wherein outputting the target task information comprises:
acquiring a plurality of third remote sensing data and a plurality of third meteorological data of the target object in the target time period, wherein the target time period comprises a plurality of moments, and the plurality of third remote sensing data and the plurality of third meteorological data correspond to the plurality of moments;
identifying the plurality of third remote sensing data and the plurality of third meteorological data to obtain a second identification result, wherein the second identification result is used for indicating whether the plurality of third remote sensing data and the plurality of third meteorological data meet a preset condition or not;
determining a target time in the target time period based on the second identification result, wherein the target time is a time corresponding to the third remote sensing data and the third meteorological data which meet the preset condition in the target time period;
and outputting the target task information based on the target time.
7. The method of claim 2, further comprising:
acquiring first question information of a second user;
identifying the first question information to obtain a third identification result, wherein the third identification result is used for representing the type of the first question information;
performing intention recognition on the first question information based on the third identification result to obtain a target query text;
querying the target query text to generate the target guidance information, wherein the target guidance information is further used for guiding the second user to execute an operation corresponding to the first question information;
outputting the target guidance information based on the target operation flow.
8. The method of claim 2 or 7, wherein outputting the target guidance information based on the target operation flow comprises:
displaying each target operation step in the target guidance information according to the target operation flow;
receiving target feedback information, wherein the target feedback information is obtained by confirming the target information in the target operation step;
and displaying the target multimedia information corresponding to the target feedback information.
9. An information processing method characterized by comprising:
obtaining farming operation information and farming multimedia information of a target crop;
extracting the farming operation information to obtain a plurality of farming operation steps of the farming operation information and a target extraction sequence of the farming operation steps;
generating a target operation flow based on the target crop, the plurality of farming operation steps, the target extraction sequence and the farming multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, the plurality of nodes correspond to the plurality of farming operation steps, the target operation flow is triggered by inquiring the farming operation steps of the target crop, the plurality of nodes are used for displaying through interaction with a user, and after the farming operation step corresponding to one node is displayed, the user triggers the next node in the target operation flow in an interactive mode;
wherein generating the target operation flow based on the target crop, the plurality of farming operation steps, the target extraction sequence and the farming multimedia information comprises:
associating the plurality of farming operation steps with the farming multimedia information to generate a plurality of nodes;
connecting the plurality of nodes in series based on the target extraction sequence to obtain a series result;
generating the target operation flow based on the target crop and the series result;
wherein, associating the plurality of farming operation steps with the farming multimedia information to generate a plurality of nodes, comprising:
identifying the farming multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the farming multimedia information;
and associating the plurality of farming operation steps with the farming multimedia information based on preset priorities and the identification result to generate a plurality of nodes, wherein the preset priorities are used for representing the display priorities of different types of farming multimedia information when the nodes are output.
10. An information processing method, characterized by comprising:
the cloud server acquires target operation information and target multimedia information of a target object;
the cloud server extracts the target operation information to obtain a plurality of target operation steps of the target operation information and a target extraction sequence of the target operation steps;
the cloud server generates a target operation flow based on the target object, the target operation steps, the target extraction sequence and the target multimedia information, wherein the target operation flow comprises a plurality of nodes connected in series, the nodes correspond to the target operation steps, the target operation flow is triggered by inquiring the target operation steps of the target object, the nodes are used for displaying through interaction with a user, and after the target operation step corresponding to one node is displayed, the user triggers the next node in the target operation flow in an interactive mode;
wherein the cloud server generating the target operation flow based on the target object, the plurality of target operation steps, the target extraction sequence and the target multimedia information comprises:
the cloud server associates the target operation steps with the target multimedia information to generate a plurality of nodes;
the cloud server connects the plurality of nodes in series based on the target extraction sequence to obtain a series result;
the cloud server generates the target operation flow based on the target object and the series result;
wherein the cloud server associating the target operation steps with the target multimedia information to generate a plurality of nodes comprises:
the cloud server identifies the target multimedia information to obtain an identification result, wherein the identification result is used for representing the category of the target multimedia information;
and the cloud server associates the target operation steps with the target multimedia information based on preset priorities and the identification result to generate the nodes, wherein the preset priorities are used for representing the display priorities of the target multimedia information of different categories when the nodes are output.
11. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any one of claims 1 to 10.
12. An electronic device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program when executed performs the method of any one of claims 1 to 10.
CN202210745675.3A 2022-06-29 2022-06-29 Information processing method, computer-readable storage medium, and electronic device Active CN114817584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210745675.3A CN114817584B (en) 2022-06-29 2022-06-29 Information processing method, computer-readable storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210745675.3A CN114817584B (en) 2022-06-29 2022-06-29 Information processing method, computer-readable storage medium, and electronic device

Publications (2)

Publication Number Publication Date
CN114817584A CN114817584A (en) 2022-07-29
CN114817584B true CN114817584B (en) 2022-11-15

Family

ID=82522727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210745675.3A Active CN114817584B (en) 2022-06-29 2022-06-29 Information processing method, computer-readable storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN114817584B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609487A (en) * 2009-08-06 2009-12-23 北京农学院 Logical and the using method of a kind of farming based on PDA
CN109600622A (en) * 2018-08-31 2019-04-09 北京微播视界科技有限公司 Audio/video information processing method, device and electronic equipment
CN111159435A (en) * 2019-12-27 2020-05-15 北大方正集团有限公司 Multimedia resource processing method, system, terminal and computer readable storage medium
CN112115282A (en) * 2020-09-17 2020-12-22 北京达佳互联信息技术有限公司 Question answering method, device, equipment and storage medium based on search
CN112788330A (en) * 2020-12-25 2021-05-11 深圳市元征科技股份有限公司 Diagnostic video generation method and device, terminal equipment and storage medium
CN114331753A (en) * 2022-03-04 2022-04-12 阿里巴巴达摩院(杭州)科技有限公司 Intelligent farm work method and device and control equipment
CN114359745A (en) * 2021-12-10 2022-04-15 阿里巴巴(中国)有限公司 Information processing method, storage medium, and electronic device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832434B (en) * 2017-11-15 2022-05-06 百度在线网络技术(北京)有限公司 Method and device for generating multimedia play list based on voice interaction
JPWO2020235085A1 (en) * 2019-05-23 2021-12-23 日本電信電話株式会社 Operation log visualization device, operation log visualization method and operation log visualization program
CN113762048A (en) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 Product installation guiding method and device, electronic equipment and storage medium
CN113255614A (en) * 2021-07-06 2021-08-13 杭州实在智能科技有限公司 RPA flow automatic generation method and system based on video analysis
CN113949697B (en) * 2021-09-24 2023-05-09 北京达佳互联信息技术有限公司 Data distribution method, device, electronic equipment and storage medium
CN114064112A (en) * 2021-11-24 2022-02-18 建信金融科技有限责任公司 Business process configuration method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114817584A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
US6064943A (en) Computer network for collecting and analyzing agronomic data
US20180342020A1 (en) System, method and apparatus for management of agricultural resource
Turner et al. Oil palm cultivation and management.
CN110377743B (en) Text labeling method and device
CN106682658A (en) App for home cultivation of Chinese herbaceous peony based on big data and image recognition
CN115656167A (en) Plant diagnosis method, plant diagnosis device and computer-readable storage medium
Haroni et al. Application of artificial neural networks for predicting the yield and GHG emissions of sugarcane production.
CN114817584B (en) Information processing method, computer-readable storage medium, and electronic device
CN111563759B (en) Identification and analysis system for agricultural product traceability process based on AI technology
CN112328771A (en) Service information output method, device, server and storage medium
Suebsombut et al. Chatbot application to support smart agriculture in Thailand
Yi et al. Framework for integrated ecosystem management in the Hindu Kush Himalaya (for pilot testing within transboundary landscapes).
CN115630967A (en) Intelligent tracing method and device for agricultural products, electronic equipment and storage medium
Hall et al. Voices for development: the Tanzanian National Radio Study Campaign.
CN112750291A (en) Farmland intelligent monitoring system and method based on multi-network fusion
Cardozier Growing cotton.
Hoedjes Public participation in environmental research
CN205405153U (en) Intelligent agriculture planting system
CN214756414U (en) Farmland intelligent monitoring system based on multi-network integration
CN113987039A (en) Digital agricultural cloud platform
CN106682656A (en) App for home cultivation of Hippeastrum rutilum based on big data and image recognition
CN113826598A (en) Unmanned aerial vehicle pesticide spreading method and device based on neural network
Liu et al. Real-time pixel-wise classification of agricultural images based on depth-wise separable convolution.
CN113656481A (en) Intelligent ecological garden intelligence library service system, method and application equipment
Puri et al. Smart-farming assistance for agricultural crops in various seasons using web-enabled information service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant