CN113077237A - Course arrangement method and system for self-adaptive hybrid algorithm - Google Patents


Info

Publication number
CN113077237A
CN113077237A (application CN202110409674.7A)
Authority
CN
China
Prior art keywords: course; graph; neural network; node; teaching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110409674.7A
Other languages
Chinese (zh)
Other versions
CN113077237B (en)
Inventor
经建忠
李小红
徐光波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kaiyuan Software Technology Co ltd
Original Assignee
Guangzhou Kaiyuan Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kaiyuan Software Technology Co ltd
Priority application: CN202110409674.7A
Publication of CN113077237A
Application granted
Publication of CN113077237B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management
    • G06Q10/10 — Office automation; Time management
    • G06Q10/109 — Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q10/1093 — Calendar-based scheduling for persons or groups
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 — Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 — Services
    • G06Q50/20 — Education
    • G06Q50/205 — Education administration or guidance

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a course arrangement system based on an adaptive hybrid algorithm, comprising: a data storage module for storing raw data in the system; an initial node generation module for generating a plurality of structured course nodes from the raw data acquired from the data storage module; and a subgraph generation module for generating subgraphs. The information from the initial node generation module and the subgraph generation module is input to a trained graph neural network, which outputs one or more optimal course timetables ranked by recommendation score. Because the course scheduling problem is solved with a graph neural network, the system adapts to changing requirements and has strong generality: when a scheduling variable changes, only the basic network nodes used during scheduling need to be changed, and the scheduling algorithm itself needs no adjustment. An optimal timetable can be obtained with low time complexity, avoiding the traditional genetic algorithm's problems of non-convergence and entrapment in local optima.

Description

Course arrangement method and system for self-adaptive hybrid algorithm
Technical Field
The invention relates to the technical field of educational administration management, and in particular to a course arrangement method and system based on an adaptive hybrid algorithm.
Background
The course scheduling problem was proved to be NP-complete as early as the 1970s, meaning that the computation time of known exact algorithms grows exponentially with problem size; this result established the theoretical difficulty of the problem. No general algorithm is currently known that solves NP-complete problems efficiently, so solving them in practice relies on approximate methods, including dynamic programming, greedy algorithms, backtracking, and the like.
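To make the exponential blow-up concrete, the backtracking approach mentioned above can be sketched as follows (a hedged illustration; the function name, data layout, and example courses are hypothetical, not part of the invention):

```python
def backtrack_schedule(courses, slots, conflicts, assignment=None):
    """Assign each course a time slot so that no two conflicting courses
    share a slot. Returns one feasible assignment, or None if no
    conflict-free timetable exists. Worst-case running time grows
    exponentially with the number of courses, which is why exact search
    does not scale for real timetabling instances."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(courses):
        return dict(assignment)
    course = courses[len(assignment)]
    for slot in slots:
        # A slot is usable only if no conflicting course already holds it.
        if all(assignment.get(other) != slot for other in conflicts.get(course, ())):
            assignment[course] = slot
            result = backtrack_schedule(courses, slots, conflicts, assignment)
            if result is not None:
                return result
            del assignment[course]  # backtrack and try the next slot
    return None

# Two courses taught by the same teacher must not share a slot.
plan = backtrack_schedule(
    courses=["math", "physics", "english"],
    slots=["mon_1", "mon_2"],
    conflicts={"math": ["physics"], "physics": ["math"]},
)
```

Each level of recursion multiplies the search space by the number of slots, illustrating the exponential worst case that motivates heuristic methods.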
In prior-art course scheduling, the genetic algorithm (GA) has also been used. GA is an optimization technique abstracted from the process of biological evolution, based on natural selection and genetic mechanisms; following the principle of "survival of the fittest", it seeks an optimal or satisfactory solution through fast randomized search. However, GA is prone to premature convergence, so the solution obtained is a local rather than a global optimum; "gene mutation" can prevent a timetable from converging at all; the algorithm is complex to implement; and its time complexity is high when the data size is large. In practice, the course scheduling problem involves many variable factors, and existing scheduling methods generalize poorly: once an influencing variable changes, the underlying scheduling algorithm must be readjusted.
Deep learning has been a hotspot of artificial intelligence since around 2010. Two basic network families are commonly used: convolutional neural networks (CNNs), mainly for image data, and recurrent neural networks (RNNs), mainly for sequential data such as speech and text.
The year 2019 is widely regarded as the breakout year for graph neural networks (GNNs). GNNs have so far been applied to scene graph generation, point cloud classification and segmentation, action recognition, traffic prediction, product recommendation, program verification, program reasoning, social influence prediction, adversarial attack defense, electronic health record modeling, brain networks, event detection, combinatorial optimization, and other fields.
Existing course scheduling methods are difficult to apply in practice; a scheduling method with low time complexity and strong generality, able to adapt itself to a given scenario and quickly reach a globally optimal solution, is urgently needed.
Disclosure of Invention
In order to solve the above problems, the present invention provides a course arrangement system based on an adaptive hybrid algorithm, the system comprising:
the data storage module is used for storing original data in the system;
the initial node generation module, used for generating a plurality of structured course nodes from the raw data acquired from the data storage module; a course node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes;
the subgraph generation module, used for generating a subgraph comprising a plurality of course nodes and the connecting edges among them; a subgraph is a directed graph originating from the same course;
and the information from the initial node generation module and the subgraph generation module is input to a trained graph neural network, which outputs one or more optimal course timetables ranked by recommendation score.
Further, in one embodiment, the data storage module is further configured to store historical schedules, that is, reliable schedules that have matured through actual use; such a schedule must satisfy a plurality of rigid conditions constrained by objective natural conditions.
Further, in one embodiment, the rigid conditions include, but are not limited to: only one course may be arranged in the same real or virtual classroom during the same time period; the same group of students may attend only one course in the same real or virtual classroom during the same time period; and the same teacher may teach only one course during the same real or virtual time period.
Further, in one embodiment, the training comprises:
constructing a training sample set, wherein course arrangement sample nodes are built from the data of the reliable schedules stored in the data storage module; each course arrangement sample node is a target node in a course arrangement network graph, and a target node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes; the course arrangement network graph comprises a plurality of target nodes and the connecting edges among them, and each connecting edge carries its own weight, direction, and number information.
Further, in one embodiment, the graph neural network includes an encoding layer and a decoding layer; the encoding layer is a graph neural network algorithm, or a combination thereof, with encoding parameters, and the decoding layer is a learning algorithm, or a combination thereof, with decoding parameters. For any node and its adjacent nodes in a training-sample subgraph, the encoding layer generates a graph representation vector from the attributes of each node. The decoding parameters are trained over multiple iterations: in each iteration a specified number of targets is drawn from the training samples, and the decoding layer generates a predicted position value for each node from the graph representation vector. The graph neural network is further provided with a target loss function for its optimization, and this optimization process is repeated until the training condition is met.
Correspondingly, the present application also provides a course arrangement method based on the adaptive hybrid algorithm, the method comprising the following steps:
storing original data by using a data storage module;
generating, with an initial node generation module, a plurality of structured course nodes from the raw data acquired from the data storage module; a course node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes;
generating a subgraph with a subgraph generation module, the subgraph comprising a plurality of course nodes and the connecting edges among them; a subgraph is a directed graph originating from the same course;
and inputting the information from the initial node generation module and the subgraph generation module to a trained graph neural network, which outputs one or more optimal course timetables ranked by recommendation score.
Further, in one embodiment, the data storage module is further configured to store historical schedules, that is, reliable schedules that have matured through actual use; such a schedule must satisfy a plurality of rigid conditions constrained by objective natural conditions.
Further, in one embodiment, the rigid conditions include, but are not limited to: only one course may be arranged in the same real or virtual classroom during the same time period; the same group of students may attend only one course in the same real or virtual classroom during the same time period; and the same teacher may teach only one course during the same real or virtual time period.
Further, in one embodiment, the training comprises:
constructing a training sample set, wherein course arrangement sample nodes are built from the data of the reliable schedules stored in the data storage module; each course arrangement sample node is a target node in a course arrangement network graph, and a target node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes; the course arrangement network graph comprises a plurality of target nodes and the connecting edges among them, and each connecting edge carries its own weight, direction, and number information.
Further, in one embodiment, the graph neural network includes an encoding layer and a decoding layer; the encoding layer is a graph neural network algorithm, or a combination thereof, with encoding parameters, and the decoding layer is a graph neural network algorithm, or a combination thereof, with decoding parameters. For any node and its adjacent nodes in a training-sample subgraph, the encoding layer generates a graph representation vector from the attributes of each node. The decoding parameters are trained over multiple iterations: in each iteration a specified number of targets is drawn from the training samples, and the decoding layer generates a predicted position value for each node from the graph representation vector. The graph neural network is further provided with a target loss function for its optimization, and this optimization process is repeated until the training condition is met.
The present invention also proposes a terminal device comprising a memory and a processor, the memory storing a computer program that runs on the processor, the program performing the method of the above steps or any combination of the steps that solves the technical problem addressed by this application.
The present invention also proposes a computer-readable medium storing a computer program that runs on a processor, the program performing the method of the above steps or any combination of the steps that solves the technical problem addressed by this application.
The inventive contribution and beneficial effects of the application are as follows: this application is the first to apply a graph neural network to the course scheduling problem. The system adapts itself to changing requirements and has very strong generality: when a scheduling variable changes, only the basic network nodes used during scheduling need to be changed, while the scheduling algorithm itself needs no adjustment. An optimal scheduling result can be obtained with relatively low time complexity, avoiding the traditional genetic algorithm's problems of non-convergence and entrapment in local optima.
Drawings
FIG. 1 shows the basic flow diagram of the present invention.
Detailed Description
As shown in FIG. 1, the basic flow chart of the present invention corresponds to the flow of the course arrangement system.
The invention provides a course arrangement system based on an adaptive hybrid algorithm, comprising:
the data storage module is used for storing original data in the system;
the initial node generation module, used for generating a plurality of structured course nodes from the raw data acquired from the data storage module; a course node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes;
the subgraph generation module, used for generating a subgraph comprising a plurality of course nodes and the connecting edges among them; a subgraph is a directed graph originating from the same course;
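The course-node structure and subgraph described above can be sketched as plain data (a hypothetical illustration; the attribute names are English renderings of the attributes listed in the text, not the patent's own identifiers):

```python
from dataclasses import dataclass, field

@dataclass
class CourseNode:
    """Structured course node with the attributes listed above:
    number, name, teacher, start/end time, prerequisite ("front")
    courses and subsequent ("rear") courses."""
    number: str
    name: str
    teacher: str
    start: str
    end: str
    prerequisites: list = field(default_factory=list)  # front courses
    successors: list = field(default_factory=list)     # rear courses

def build_subgraph(nodes):
    """Build the directed edges of a subgraph: an edge (u, v) means
    course u must finish before course v begins."""
    by_number = {n.number: n for n in nodes}
    edges = []
    for node in nodes:
        for nxt in node.successors:
            if nxt in by_number:
                edges.append((node.number, nxt))
    return edges

c1 = CourseNode("C1", "Calculus I", "Li", "week1", "week8", [], ["C2"])
c2 = CourseNode("C2", "Calculus II", "Li", "week9", "week16", ["C1"], [])
edges = build_subgraph([c1, c2])
```

With this layout, the edges of the subgraph follow the prerequisite chain directly from the node attributes, which is what makes the graph directed.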
and the information from the initial node generation module and the subgraph generation module is input to a trained graph neural network, which outputs one or more optimal course timetables ranked by recommendation score.
Further, in one embodiment, the data storage module is further configured to store historical schedules, that is, reliable schedules that have matured through actual use; such a schedule must satisfy a plurality of rigid conditions constrained by objective natural conditions.
Further, in one embodiment, the rigid conditions include, but are not limited to: only one course may be arranged in the same real or virtual classroom during the same time period; the same group of students may attend only one course in the same real or virtual classroom during the same time period; and the same teacher may teach only one course during the same real or virtual time period.
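The three rigid conditions above are hard constraints that any candidate timetable must pass. A minimal checker, with an illustrative tuple layout that is not taken from the patent, could look like this:

```python
def violated_rigid_conditions(timetable):
    """timetable: list of (slot, room, group, teacher, course) tuples.
    Returns the keys that violate the three rigid conditions: at most
    one course per room-and-slot, per student-group-and-slot, and per
    teacher-and-slot."""
    clashes = []
    seen = {}
    for slot, room, group, teacher, course in timetable:
        for key in (("room", slot, room),
                    ("group", slot, group),
                    ("teacher", slot, teacher)):
            # A clash means the same resource holds two different courses
            # in the same time period.
            if key in seen and seen[key] != course:
                clashes.append(key)
            seen[key] = course
    return clashes

ok = violated_rigid_conditions([
    ("mon_1", "R101", "grp_a", "Li", "math"),
    ("mon_2", "R101", "grp_a", "Li", "english"),   # different slot: fine
])
bad = violated_rigid_conditions([
    ("mon_1", "R101", "grp_a", "Li", "math"),
    ("mon_1", "R102", "grp_a", "Xu", "english"),   # same group, same slot
])
```

In training, such a checker would also serve to confirm that a stored historical schedule really does satisfy every rigid condition before it is used as a reliable sample.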
Further, in one embodiment, the training comprises:
constructing a training sample set, wherein course arrangement sample nodes are built from the data of the reliable schedules stored in the data storage module; each course arrangement sample node is a target node in a course arrangement network graph, and a target node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes; the course arrangement network graph comprises a plurality of target nodes and the connecting edges among them, and each connecting edge carries its own weight, direction, and number information.
Further, in one embodiment, the graph neural network includes an encoding layer and a decoding layer; the encoding layer is a graph neural network algorithm, or a combination thereof, with encoding parameters, and the decoding layer is a learning algorithm, or a combination thereof, with decoding parameters. For any node and its adjacent nodes in a training-sample subgraph, the encoding layer generates a graph representation vector from the attributes of each node. The decoding parameters are trained over multiple iterations: in each iteration a specified number of targets is drawn from the training samples, and the decoding layer generates a predicted position value for each node from the graph representation vector. The graph neural network is further provided with a target loss function for its optimization, and this optimization process is repeated until the training condition is met.
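The encode-decode-optimize loop described above can be caricatured with scalar stand-ins for the coding and decoding layers. Everything here — the function names, the toy arithmetic, and the squared loss — is an illustrative assumption, not the patented network:

```python
import random

def encode(node_attr, neighbor_attrs, w_enc):
    """Toy 'graph representation vector' for one node: its own attribute
    plus the mean of its neighbours' attributes, scaled by the encoding
    parameter."""
    mean_nb = sum(neighbor_attrs) / max(len(neighbor_attrs), 1)
    return w_enc * (node_attr + mean_nb)

def decode(rep, w_dec):
    """Predicted position value for the node."""
    return w_dec * rep

def train(samples, epochs=200, lr=0.01, batch=4):
    """samples: list of (node_attr, neighbor_attrs, target_position).
    Each iteration draws a specified number of targets and updates the
    decoding parameter by gradient descent on a squared loss."""
    random.seed(0)
    w_enc, w_dec = 1.0, 0.9
    for _ in range(epochs):
        for attr, nbrs, target in random.sample(samples, k=min(batch, len(samples))):
            rep = encode(attr, nbrs, w_enc)
            pred = decode(rep, w_dec)
            w_dec -= lr * 2 * (pred - target) * rep  # d(loss)/d(w_dec)
    return w_dec

samples = [(1.0, [1.0], 1.0)] * 4   # rep = 2.0, so the ideal w_dec is 0.5
w_dec = train(samples)
```

The repeated gradient step is the "optimization process repeated until the training condition is met"; in the real system the scalar parameters would be weight matrices and the loss the target loss function of the network.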
Optionally, the graph neural network further includes: a first convolution unit, a first pooling layer, a second convolution unit, a second pooling layer, and a fully connected layer. The convolution kernel size of the first convolution unit is 3 × 3, and its activation function is denoted RL1; the convolution kernel of the second convolution unit is 1 × 1, and its activation function is denoted RL2. The fully connected layer outputs one or more course timetables ordered from high to low by score. The pooling method of the first pooling layer is as follows:
$$u_e = w_e\,x_{e-1} + b_e, \qquad x_e = \mathrm{RL}(u_e)$$
where $x_e$ denotes the output of the current layer, $u_e$ the input of the activation function, $\mathrm{RL}$ the activation function, $w_e$ the weight of the current layer, $\phi$ the loss function, $x_{e-1}$ the output of the previous layer, $b_e$ a bias, and $\delta$ a constant;
the activation function $\mathrm{RL} = \mathrm{RL}_2 = \mathrm{RL}_1$ is expressed as:
$$\mathrm{RL}(u) = \begin{cases} u, & u > 0 \\ \delta u, & u \le 0 \end{cases}$$
the loss function φ is as follows:
$$\phi = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{W_{y_i}^{\top} a_i + b_{y_i}}}{\sum_{j} e^{W_{j}^{\top} a_i + b_j}}$$
where $N$ denotes the size of the sample data set and $i$ takes values $1$ to $N$; $y_i$ denotes the label corresponding to sample $x_i$; $W_{y_i}$ denotes the weight vector of sample $x_i$ at its label $y_i$; the bias vector $b$ comprises $b_{y_i}$ and $b_j$, where $b_{y_i}$ denotes the bias of sample $x_i$ at its label $y_i$ and $b_j$ the bias at output node $j$; and $a$ denotes the predicted output of the graph neural network;
and the loss function $\phi$ is used to train the graph neural network and to make the model parameters converge.
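Under one plausible reading of the layers above — RL as a leaky-ReLU-style activation with slope $\delta$, and $\phi$ as a softmax cross-entropy — the forward computation can be sketched as follows (the function names and the value of $\delta$ are assumptions):

```python
import numpy as np

DELTA = 0.1  # the constant delta in the activation; this value is illustrative

def rl(u):
    """Leaky-ReLU-style activation RL: identity for u > 0, slope DELTA
    for u <= 0."""
    return np.where(u > 0, u, DELTA * u)

def layer(x_prev, w, b):
    """One layer: u_e = w_e @ x_{e-1} + b_e, then x_e = RL(u_e)."""
    return rl(w @ x_prev + b)

def softmax_loss(a, W, b, y):
    """phi as a softmax cross-entropy averaged over N samples:
    a is the N x D matrix of network outputs, W the class weight
    matrix, b the class biases, y the integer labels."""
    logits = a @ W.T + b                        # W_j^T a_i + b_j for each class j
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    n = len(y)
    return -np.mean(np.log(probs[np.arange(n), y]))
```

With all-zero weights and two classes, every class is equally likely and the loss equals $\ln 2$, a quick sanity check that the cross-entropy is wired correctly.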
Correspondingly, the present application also provides a course arrangement method based on the adaptive hybrid algorithm, the method comprising the following steps:
storing original data by using a data storage module;
generating, with an initial node generation module, a plurality of structured course nodes from the raw data acquired from the data storage module; a course node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes;
generating a subgraph with a subgraph generation module, the subgraph comprising a plurality of course nodes and the connecting edges among them; a subgraph is a directed graph originating from the same course;
and inputting the information from the initial node generation module and the subgraph generation module to a trained graph neural network, which outputs one or more optimal course timetables ranked by recommendation score.
Further, in one embodiment, the data storage module is further configured to store historical schedules, that is, reliable schedules that have matured through actual use; such a schedule must satisfy a plurality of rigid conditions constrained by objective natural conditions.
Further, in one embodiment, the rigid conditions include, but are not limited to: only one course may be arranged in the same real or virtual classroom during the same time period; the same group of students may attend only one course in the same real or virtual classroom during the same time period; and the same teacher may teach only one course during the same real or virtual time period.
Further, in one embodiment, the training comprises:
constructing a training sample set, wherein course arrangement sample nodes are built from the data of the reliable schedules stored in the data storage module; each course arrangement sample node is a target node in a course arrangement network graph, and a target node has a plurality of attributes, including course number, course name, teaching teacher, teaching start time, teaching end time, prerequisite course, subsequent course, and the like; the prerequisite course is the earlier course on which the current course depends, and the subsequent course is the next course whose teaching begins after the current course finishes; the course arrangement network graph comprises a plurality of target nodes and the connecting edges among them, and each connecting edge carries its own weight, direction, and number information.
Further, in one embodiment, the graph neural network includes an encoding layer and a decoding layer; the encoding layer is a graph neural network algorithm, or a combination thereof, with encoding parameters, and the decoding layer is a graph neural network algorithm, or a combination thereof, with decoding parameters. For any node and its adjacent nodes in a training-sample subgraph, the encoding layer generates a graph representation vector from the attributes of each node. The decoding parameters are trained over multiple iterations: in each iteration a specified number of targets is drawn from the training samples, and the decoding layer generates a predicted position value for each node from the graph representation vector. The graph neural network is further provided with a target loss function for its optimization, and this optimization process is repeated until the training condition is met.
Optionally, the graph neural network further includes: a first convolution unit, a first pooling layer, a second convolution unit, a second pooling layer, and a fully connected layer. The convolution kernel size of the first convolution unit is 3 × 3, and its activation function is denoted RL1; the convolution kernel of the second convolution unit is 1 × 1, and its activation function is denoted RL2. The fully connected layer outputs one or more course timetables ordered from high to low by score. The pooling method of the first pooling layer is as follows:
$$u_e = w_e\,x_{e-1} + b_e, \qquad x_e = \mathrm{RL}(u_e)$$
where $x_e$ denotes the output of the current layer, $u_e$ the input of the activation function, $\mathrm{RL}$ the activation function, $w_e$ the weight of the current layer, $\phi$ the loss function, $x_{e-1}$ the output of the previous layer, $b_e$ a bias, and $\delta$ a constant;
the activation function $\mathrm{RL} = \mathrm{RL}_2 = \mathrm{RL}_1$ is expressed as:
$$\mathrm{RL}(u) = \begin{cases} u, & u > 0 \\ \delta u, & u \le 0 \end{cases}$$
the loss function φ is as follows:
$$\phi = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{W_{y_i}^{\top} a_i + b_{y_i}}}{\sum_{j} e^{W_{j}^{\top} a_i + b_j}}$$
where $N$ denotes the size of the sample data set and $i$ takes values $1$ to $N$; $y_i$ denotes the label corresponding to sample $x_i$; $W_{y_i}$ denotes the weight vector of sample $x_i$ at its label $y_i$; the bias vector $b$ comprises $b_{y_i}$ and $b_j$, where $b_{y_i}$ denotes the bias of sample $x_i$ at its label $y_i$ and $b_j$ the bias at output node $j$; and $a$ denotes the predicted output of the graph neural network;
and the loss function $\phi$ is used to train the graph neural network and to make the model parameters converge.
The present invention also proposes a terminal device comprising a memory and a processor, the memory storing a computer program that runs on the processor, the program performing the method of the above steps or any combination of the steps that solves the technical problem addressed by this application.
The present invention also proposes a computer-readable medium storing a computer program that runs on a processor, the program performing the method of the above steps or any combination of the steps that solves the technical problem addressed by this application.
The embodiments of the present disclosure further provide an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method. The electronic device may be provided as a terminal, a server, or another form of device.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, or semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, it should be noted that the foregoing description is intended to be exemplary, and is not exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A course scheduling system for an adaptive hybrid algorithm, the system comprising:
the data storage module is used for storing original data in the system;
the initial node generation module is used for generating a plurality of structured course nodes from the original data acquired from the data storage module; each course node comprises a plurality of attributes, including a course number, a course name, a teaching teacher, a teaching start time, a teaching end time, a prerequisite course and a subsequent course; the prerequisite course is the preceding course on which the current course depends, and the subsequent course is the next course whose teaching starts after the current course ends;
the subgraph generation module is used for generating a subgraph, the subgraph comprising a plurality of course nodes and the connecting edges among the course nodes, the subgraph being a directed graph derived from the same course;
and the information from the initial node generation module and the subgraph generation module is input into a trained graph neural network, and the graph neural network outputs one or more optimal course schedules arranged in order of recommendation score.
2. The course scheduling system of claim 1, wherein the data storage module is further configured to store a historical schedule, the historical schedule being a reliable schedule that has matured through actual use and that satisfies a plurality of rigid conditions imposed by objective natural constraints.
3. The course scheduling system of claim 2, wherein the rigid conditions include, but are not limited to:
only one course may be scheduled in the same real or virtual classroom during the same time period; the same group of students may attend only one course during the same time period; and the same teacher may teach only one course during the same real or virtual time period.
4. The course scheduling system of claim 1, wherein the training of the graph neural network comprises:
constructing a training sample set, in which course scheduling sample nodes are built from the data in the reliable schedules stored by the data storage module; each course scheduling sample node is a target node in a course scheduling network graph, the target node having a plurality of attributes including a course number, a course name, a teaching teacher, a teaching start time, a teaching end time, a prerequisite course and a subsequent course; the prerequisite course is the preceding course on which the current course depends, and the subsequent course is the next course whose teaching starts after the current course ends; the course scheduling network graph comprises a plurality of target nodes and the connecting edges among the target nodes, each connecting edge carrying its own weight, direction and number information.
5. The course scheduling system of claim 1, wherein the graph neural network comprises an encoding layer and a decoding layer; the encoding layer is a graph neural network algorithm with encoding parameters, or a combination of such algorithms, and the decoding layer is a learning algorithm with decoding parameters, or a combination of such algorithms; for any node and its adjacent nodes in a training sample subgraph, the encoding layer generates a graph representation vector from the attributes of each node; the decoding parameters are trained over multiple iterations, a specified number of targets being drawn from the training samples in each iteration, and the decoding layer generating a predicted position value for each node from the graph representation vector; the graph neural network is further provided with a target loss function for optimizing the network, and the optimization process is repeated until the training condition is met.
6. A method of course scheduling for an adaptive hybrid algorithm, the method comprising:
storing original data by using a data storage module;
using an initial node generation module to generate a plurality of structured course nodes from the original data acquired from the data storage module; each course node comprises a plurality of attributes, including a course number, a course name, a teaching teacher, a teaching start time, a teaching end time, a prerequisite course and a subsequent course; the prerequisite course is the preceding course on which the current course depends, and the subsequent course is the next course whose teaching starts after the current course ends;
using a subgraph generation module to generate a subgraph, the subgraph comprising a plurality of course nodes and the connecting edges among the course nodes, the subgraph being a directed graph derived from the same course;
and inputting the information from the initial node generation module and the subgraph generation module into a trained graph neural network, the graph neural network outputting one or more optimal course schedules arranged in order of recommendation score.
7. The course scheduling method of claim 6, wherein the data storage module is further used for storing a historical schedule, the historical schedule being a reliable schedule that has matured through actual use and that satisfies a plurality of rigid conditions imposed by objective natural constraints.
8. The course scheduling method of claim 7, wherein the rigid conditions include, but are not limited to:
only one course may be scheduled in the same real or virtual classroom during the same time period; the same group of students may attend only one course during the same time period; and the same teacher may teach only one course during the same real or virtual time period.
9. The course scheduling method of claim 6, wherein the training of the graph neural network comprises:
constructing a training sample set, in which course scheduling sample nodes are built from the data in the reliable schedules stored by the data storage module; each course scheduling sample node is a target node in a course scheduling network graph, the target node having a plurality of attributes including a course number, a course name, a teaching teacher, a teaching start time, a teaching end time, a prerequisite course and a subsequent course; the prerequisite course is the preceding course on which the current course depends, and the subsequent course is the next course whose teaching starts after the current course ends; the course scheduling network graph comprises a plurality of target nodes and the connecting edges among the target nodes, each connecting edge carrying its own weight, direction and number information.
10. The course scheduling method of claim 6, wherein the graph neural network comprises an encoding layer and a decoding layer; the encoding layer is a graph neural network algorithm with encoding parameters, or a combination of such algorithms, and the decoding layer is a graph neural network algorithm with decoding parameters, or a combination of such algorithms; for any node and its adjacent nodes in a training sample subgraph, the encoding layer generates a graph representation vector from the attributes of each node; the decoding parameters are trained over multiple iterations, a specified number of targets being drawn from the training samples in each iteration, and the decoding layer generating a predicted position value for each node from the graph representation vector; the graph neural network is further provided with a target loss function for optimizing the network, and the optimization process is repeated until the training condition is met.
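Claims 1 and 6 describe structured course nodes carrying prerequisite/subsequent links that are assembled into a directed subgraph. A minimal sketch of such a structure follows; the field names (`course_no`, `prerequisite`, `subsequent`, and so on) are illustrative assumptions, not terms taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CourseNode:
    """One structured course node; field names are illustrative."""
    course_no: str
    name: str
    teacher: str
    start: int                           # teaching start time slot
    end: int                             # teaching end time slot
    prerequisite: Optional[str] = None   # course_no this course depends on
    subsequent: Optional[str] = None     # course_no taught after this one

def build_subgraph(nodes):
    """Build a directed graph {course_no: [successor course_no, ...]} from
    the prerequisite/subsequent attributes of the given course nodes."""
    graph = {n.course_no: [] for n in nodes}
    for n in nodes:
        if n.prerequisite in graph:
            graph[n.prerequisite].append(n.course_no)  # prerequisite -> course
        if n.subsequent in graph:
            graph[n.course_no].append(n.subsequent)    # course -> successor
    # drop duplicate edges while preserving insertion order
    return {k: list(dict.fromkeys(v)) for k, v in graph.items()}
```

The adjacency-list form makes the "directed graph derived from the same course" easy to traverse when assembling training subgraphs.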
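The rigid conditions of claims 3 and 8 amount to per-time-slot uniqueness constraints on classrooms, student groups and teachers. A sketch of such a check, with assumed dictionary keys (`classroom`, `group`, `teacher`, `slot`) that are not drawn from the patent:

```python
from collections import defaultdict

def violates_rigid_conditions(schedule):
    """schedule: list of dicts with keys 'classroom', 'group', 'teacher',
    'slot' (key names are illustrative assumptions).
    Returns True if any hard constraint is violated."""
    rooms, groups, teachers = defaultdict(set), defaultdict(set), defaultdict(set)
    for entry in schedule:
        slot = entry["slot"]
        if entry["classroom"] in rooms[slot]:   # classroom double-booked
            return True
        if entry["group"] in groups[slot]:      # student group double-booked
            return True
        if entry["teacher"] in teachers[slot]:  # teacher double-booked
            return True
        rooms[slot].add(entry["classroom"])
        groups[slot].add(entry["group"])
        teachers[slot].add(entry["teacher"])
    return False
```

A historical schedule admitted as a training sample would have to pass this kind of check, since the claims require every reliable schedule to satisfy the rigid conditions.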
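Claims 5 and 10 describe an encoding layer that turns node attributes and neighbourhoods into graph representation vectors, and a decoding layer whose parameters are trained iteratively against a target loss to predict a position value for each node. A deliberately minimal, pure-Python stand-in for that pipeline is sketched below: one round of mean message passing as the encoder, a linear decoder, and a squared-error loss. All dimensions, names and the specific aggregation scheme are assumptions for illustration, not the patent's actual network:

```python
import math
import random

random.seed(0)

def encode(features, adj, W):
    """Encoding layer: mean-aggregate each node with its neighbours, then
    apply a linear map plus tanh to get a representation vector per node."""
    n, f = len(features), len(features[0])
    dim = len(W[0])
    reps = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]]
        agg = [(features[i][k] + sum(features[j][k] for j in neigh)) / (1 + len(neigh))
               for k in range(f)]
        reps.append([math.tanh(sum(agg[k] * W[k][d] for k in range(f)))
                     for d in range(dim)])
    return reps

def train(features, adj, targets, dim=8, iters=300, lr=0.1):
    """Iteratively fit the decoding parameters v so that the decoding layer
    h @ v predicts each node's position value, minimising a squared loss."""
    f = len(features[0])
    W = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(f)]  # encoding params (fixed here)
    v = [0.0] * dim                                                     # decoding params (trained)
    n = len(features)
    for _ in range(iters):
        h = encode(features, adj, W)
        pred = [sum(h[i][d] * v[d] for d in range(dim)) for i in range(n)]
        # gradient of the mean squared "target loss" w.r.t. the decoding params
        grad = [2 / n * sum((pred[i] - targets[i]) * h[i][d] for i in range(n))
                for d in range(dim)]
        v = [v[d] - lr * grad[d] for d in range(dim)]
    return W, v

# toy chain C0 -> C1 -> C2 with hypothetical attribute vectors
adj = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
y = [0.0, 1.0, 2.0]   # target "position values" (e.g. timetable slots)
W, v = train(X, adj, y)
```

In the claimed system the encoder would itself be a trainable graph neural network and the loss would be repeatedly optimised until a training condition is met; this sketch only fixes the encoder and fits the decoder, to keep the encode/decode split visible.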
CN202110409674.7A 2021-04-16 2021-04-16 Course arrangement method and system for self-adaptive hybrid algorithm Active CN113077237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110409674.7A CN113077237B (en) 2021-04-16 2021-04-16 Course arrangement method and system for self-adaptive hybrid algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110409674.7A CN113077237B (en) 2021-04-16 2021-04-16 Course arrangement method and system for self-adaptive hybrid algorithm

Publications (2)

Publication Number Publication Date
CN113077237A true CN113077237A (en) 2021-07-06
CN113077237B CN113077237B (en) 2021-08-27

Family

ID=76617748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110409674.7A Active CN113077237B (en) 2021-04-16 2021-04-16 Course arrangement method and system for self-adaptive hybrid algorithm

Country Status (1)

Country Link
CN (1) CN113077237B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060184470A1 (en) * 2004-11-24 2006-08-17 Nanyang Polytechnic Method and system for timetabling using pheromone and hybrid heuristics based cooperating agents
CN109165908A (en) * 2018-07-11 2019-01-08 大连卓云科技有限公司 Course Automatic Arranging System
CN109615571A (en) * 2018-12-17 2019-04-12 北京竞业达数码科技股份有限公司 A kind of Course Arrangement
CN110705709A (en) * 2019-10-14 2020-01-17 支付宝(杭州)信息技术有限公司 Method and device for training neural network model of graph
CN111784127A (en) * 2020-06-15 2020-10-16 北京工业大学 Course arrangement system design method based on genetic algorithm
CN111966076A (en) * 2020-08-11 2020-11-20 广东工业大学 Fault positioning method based on finite-state machine and graph neural network
CN112381365A (en) * 2020-10-30 2021-02-19 广东蕾琪化妆品有限公司 Neural network-based cosmetic production condition monitoring system and control method thereof


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743421A (en) * 2022-04-27 2022-07-12 广东亚外国际文化产业有限公司 Comprehensive evaluation system and method for foreign language learning intelligent teaching
CN114743421B (en) * 2022-04-27 2023-05-16 广东亚外国际文化产业有限公司 Comprehensive assessment system and method for foreign language learning intelligent teaching
CN115271510A (en) * 2022-08-10 2022-11-01 南通大学 Culture scheme teaching plan compiling method based on improved topological sorting

Also Published As

Publication number Publication date
CN113077237B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
KR102422729B1 (en) Learning Data Augmentation Policy
DeVries et al. Dataset augmentation in feature space
US11030414B2 (en) System and methods for performing NLP related tasks using contextualized word representations
CN110443351B (en) Generating natural language descriptions of images
US20210271968A1 (en) Generative neural network systems for generating instruction sequences to control an agent performing a task
RU2666631C2 (en) Training of dnn-student by means of output distribution
US10546507B2 (en) Recommending a set of learning activities based on dynamic learning goal adaptation
US11151324B2 (en) Generating completed responses via primal networks trained with dual networks
US11593672B2 (en) Conversation history within conversational machine reading comprehension
CN110795569B (en) Method, device and equipment for generating vector representation of knowledge graph
Roopaei et al. Deep learning control for complex and large scale cloud systems
US20200167677A1 (en) Generating result explanations for neural networks
Jeon et al. Artificial intelligence for traffic signal control based solely on video images
CN113077237B (en) Course arrangement method and system for self-adaptive hybrid algorithm
CN112541060B (en) End-to-end task type dialogue learning framework and method based on confrontation training
CN111291170B (en) Session recommendation method and related device based on intelligent customer service
EP3563302A1 (en) Processing sequential data using recurrent neural networks
US20230188321A1 (en) Method for training model based on homomorphic encryption, device, and storage medium
US11281867B2 (en) Performing multi-objective tasks via primal networks trained with dual networks
CN111090740B (en) Knowledge graph generation method for dialogue system
CN112686381A (en) Neural network model, method, electronic device, and readable medium
CN111723186A (en) Knowledge graph generation method based on artificial intelligence for dialog system and electronic equipment
JP2023527341A (en) Interpretable imitation learning by discovery of prototype options
CN113360669A (en) Knowledge tracking method based on gated graph convolution time sequence neural network
CN111914882A (en) Method and device for generating support vector machine, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant