CN113610084A - Topic auxiliary method, topic auxiliary device and topic auxiliary system - Google Patents

Topic auxiliary method, topic auxiliary device and topic auxiliary system

Info

Publication number
CN113610084A
CN113610084A (application CN202110972468.7A)
Authority
CN
China
Prior art keywords
problem solving
topic
solving
steps
title
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110972468.7A
Other languages
Chinese (zh)
Inventor
何涛 (He Tao)
石凡 (Shi Fan)
罗欢 (Luo Huan)
陈明权 (Chen Mingquan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dana Technology Inc
Original Assignee
Hangzhou Dana Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dana Technology Inc filed Critical Hangzhou Dana Technology Inc
Priority to CN202110972468.7A priority Critical patent/CN113610084A/en
Publication of CN113610084A publication Critical patent/CN113610084A/en
Priority to PCT/CN2022/111171 priority patent/WO2023024898A1/en
Legal status: Pending (current)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a question assistance method, a question assistance device, and a question assistance system. The question assistance method includes: acquiring a first problem solving process of a user for a question, wherein the first problem solving process comprises one or more first problem solving steps; selecting, from all problem solving processes corresponding to the question, a second problem solving process that is closest to the first problem solving process, wherein the second problem solving process comprises one or more second problem solving steps; comparing each of the one or more first problem solving steps with the corresponding second problem solving step of the one or more second problem solving steps; and generating a correction result for the question according to the comparison result between each first problem solving step and the corresponding second problem solving step.

Description

Topic auxiliary method, topic auxiliary device and topic auxiliary system
Technical Field
The present disclosure relates to the field of teaching assistance, and in particular, to a question assistance method, a question assistance device, and a question assistance system.
Background
In recent years, various teaching assistance techniques have been widely used to improve the efficiency and effect of teaching. For example, a user may learn related knowledge and complete corresponding exercises with the aid of such techniques. However, automatic correction of user assignments is currently imperfect; in particular, it is difficult to correct a problem solving process step by step. Therefore, there is a need for new techniques.
Disclosure of Invention
The purpose of the present disclosure is to provide a question assistance method, a question assistance device, and a question assistance system.
According to a first aspect of the present disclosure, there is provided a title assisting method, including: obtaining a first problem solving process of a user for a problem, wherein the first problem solving process comprises one or more first problem solving steps; selecting a second problem solving process which is closest to the first problem solving process from all problem solving processes corresponding to the problems, wherein the second problem solving process comprises one or more second problem solving steps; comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and generating a correction result of the questions according to the comparison result of the first question solving step and the corresponding second question solving step.
According to a second aspect of the present disclosure, there is provided a title assisting apparatus comprising a memory, a processor and instructions stored on the memory, which when executed by the processor, implement the steps of the title assisting method as described above.
According to a third aspect of the present disclosure, there is provided a topic support system, the topic support system comprising a user terminal and a server, wherein: the user terminal is configured to obtain a topic image of a topic and display a correction result of the topic; at least one of the user terminal and the server is configured to obtain a first problem solving process of a user for a problem according to the problem image, wherein the first problem solving process comprises one or more first problem solving steps; the server is configured to select a second problem solving process closest to the first problem solving process from all problem solving processes corresponding to the topics, wherein the second problem solving process comprises one or more second problem solving steps; comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and generating a correction result of the questions according to the comparison result of the first question solving step and the corresponding second question solving step.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon instructions which, when executed by a processor, implement the steps of the title assisting method as described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising instructions which, when executed by a processor, implement the steps of the title assisting method as described above.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates a flow diagram of a topic support method according to an exemplary embodiment of the present disclosure.
Fig. 2 schematically illustrates a flowchart of step S200 in a title assisting method according to a specific example of the present disclosure.
Fig. 3 schematically illustrates a flowchart of step S210 in the title assisting method according to a specific example of the present disclosure.
FIG. 4 schematically illustrates a flow diagram of some steps in a topic assistance method according to an exemplary embodiment of the present disclosure.
FIG. 5 schematically illustrates a correction result diagram of a topic according to a specific example of the present disclosure.
FIG. 6 schematically illustrates a correction result diagram of a topic according to another specific example of the present disclosure.
Fig. 7 schematically illustrates a block diagram of a title assisting apparatus according to an exemplary embodiment of the present disclosure.
FIG. 8 schematically illustrates a block diagram of a topic assistance system according to an exemplary embodiment of the present disclosure.
Note that in the embodiments described below, the same reference numerals are used in common between different drawings to denote the same portions or portions having the same functions, and a repetitive description thereof will be omitted. In this specification, like reference numerals and letters are used to designate like items, and therefore, once an item is defined in one drawing, further discussion thereof is not required in subsequent drawings.
Detailed Description
Various exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise. In the following description, numerous details are set forth in order to better explain the present disclosure, however it is understood that the present disclosure may be practiced without these details.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
The present disclosure provides a question assistance method that aims to correct a problem solving process, and in particular to correct at least some of the steps in the problem solving process, thereby improving the effect of learning or teaching.
As shown in fig. 1, in an exemplary embodiment of the present disclosure, a title assisting method may include:
step S100, a first problem solving process of the user for the problem is obtained.
Wherein the first problem solving process may comprise one or more first problem solving steps. In some embodiments, the first problem solving process may include only one problem solving step, which may be a final problem solving step corresponding to the answer to the topic or may be an intermediate problem solving step not corresponding to the answer to the topic. In other embodiments, the first problem solving process may include a plurality of first problem solving steps. The plurality of first problem solving steps typically includes at least one intermediate problem solving step that does not correspond to an answer to a question. In addition, the first problem solving process may be a complete problem solving process or may be only a part of the problem solving process according to the problem solving situation of the user.
When the question assistance method is executed, a first problem solving process that has already been converted into a corresponding character set may be obtained directly from another device, and this character set is compared with the second problem solving process determined in the subsequent steps, so that the first problem solving process can be corrected. Alternatively, a question image may be obtained and processed with a region recognition model, a character recognition model, a picture recognition model, and the like, to obtain a character set representing the first problem solving process that can be compared directly with the second problem solving process. It can be understood that a region recognition model, a character recognition model, a picture recognition model, and the like may also be used to obtain the question stem contained in the question image, and all problem solving processes corresponding to the question stem can then be obtained from the question stem.
Specifically, an image acquisition device can be used to capture a question image containing the question stem and the corresponding first problem solving process. The image acquisition device may be provided independently, or may be included in a user terminal such as a smart phone or a tablet computer. The question image may be any form of visual presentation, such as a photograph, a video, and the like.
Then, at least one of the question stem region, the problem solving process region, and the picture region in the question image can be identified using a region recognition model. The region recognition model may include a first neural network model whose input is the question image and whose output is the various types of regions in the question image.
The first neural network model may be pre-trained by any known method using a large number of training samples, in accordance with the input and output described above. For example, it can be trained by the following process: establish a training set of question image samples; label each question image sample with the position of each region it contains; and train the first neural network on the labeled training set to obtain the first neural network model. The first neural network may be any known neural network, such as a deep residual network, a recurrent neural network, or the like.
Training the first neural network may further include: testing the output accuracy of the trained first neural network model on a test set of question image samples; if the output accuracy is lower than a preset first threshold, increasing the number of question image samples in the training set and applying the labeling processing to the newly added samples; and retraining the first neural network on the enlarged training set. The output accuracy of the retrained model is then tested again on the test set, and the process repeats until the output accuracy meets the requirement, that is, until it is not less than the preset first threshold. It will be understood by those skilled in the art that, as needed, one or more samples from the training set can be moved into the test set, or one or more samples from the test set can be moved into the training set.
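To make the iterative training described above concrete, the following is a minimal sketch assuming a generic supervised-learning setup; the callable names (train_fn, accuracy_fn, fetch_more_samples), the threshold value, and the round limit are illustrative placeholders rather than anything specified by the disclosure.

```python
# Minimal sketch of the train / test / enlarge-dataset loop described above.
# The concrete model, training routine, and accuracy metric are supplied by the caller.

def train_until_accurate(model, train_set, test_set,
                         train_fn, accuracy_fn, fetch_more_samples,
                         threshold=0.95, max_rounds=10):
    """Train `model`; while its test accuracy stays below `threshold`,
    enlarge the labeled training set and retrain."""
    for _ in range(max_rounds):
        train_fn(model, train_set)                    # supervised training on labeled question images
        if accuracy_fn(model, test_set) >= threshold: # accuracy not less than the preset first threshold
            break
        train_set = train_set + fetch_more_samples()  # add newly labeled question image samples
    return model
```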
Then, a character recognition model can be used to recognize the first characters in the problem solving process region and the second characters in the question stem region, and the first problem solving process and the question stem are generated from the first characters and the second characters, respectively. In addition, all possible problem solving processes corresponding to the question stem can be obtained from the question stem.
The character recognition model can include a second neural network model whose input is a region of the question image and whose output is the characters in that region. It should be understood that the characters referred to herein may include text, graphic symbols, letters, numbers, mathematical symbols, and the like. Similarly, the second neural network model may be pre-trained by any known method using a large number of training samples, in accordance with the input and output described above. For example, it can be trained by the following process: establish a training set of region samples, where each region sample may be a problem solving process region or a question stem region; label the characters in each region sample; and train the second neural network on the labeled training set to obtain the second neural network model. The second neural network may be any known neural network. In addition, similar to the description of the first neural network above, training the second neural network may further include verifying the output accuracy of the second neural network model on a region sample test set and, if the accuracy does not meet the requirement, increasing the number of samples in the training set and retraining.
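The two recognition stages can be chained as sketched below; the Region type and the predict() interfaces are hypothetical stand-ins for whatever region recognition and character recognition models are actually deployed, and are not APIs defined by the disclosure.

```python
# Illustrative two-stage pipeline: region recognition, then character recognition.

from dataclasses import dataclass

@dataclass
class Region:
    kind: str    # "stem", "solution", or "picture"
    bbox: tuple  # (x0, y0, x1, y1) within the question image

def extract_question(image, region_model, char_model):
    """Return (question stem text, list of recognized solving-step lines)."""
    regions = region_model.predict(image)              # first neural network model
    stem_parts, solution_lines = [], []
    for region in regions:
        text = char_model.predict(image, region.bbox)  # second neural network model
        if region.kind == "stem":
            stem_parts.append(text)
        elif region.kind == "solution":
            solution_lines.append(text)                # one recognized solving step per line
    return " ".join(stem_parts), solution_lines
```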
In some embodiments of the present disclosure, after the picture region is determined by using the region identification model, the picture region may be further processed to generate a question stem or at least a part of the first question solving process according to the picture region. For example, it may be determined whether the picture area belongs to the question stem or the question solving process, and then the first question solving process may be determined jointly according to the question solving process area and the picture area belonging to the question solving process, or the question stem may be determined jointly according to the question stem area and the picture area belonging to the question stem.
In other embodiments, the problem solving process can be separated from the question stem. For example, the problem solving process may be on an answer sheet, but not on a test paper or the like containing a question stem. In this case, the problem solving process corresponding to the corresponding question stem can also be distinguished according to the result of the area recognition or the character recognition, or the like.
In some embodiments, the problem solving process region may also contain explanatory text that does not belong to the problem solving process itself. In this case, a character recognition model may be used to recognize all characters in the problem solving process region, determine which of those characters belong to a first problem solving step and which do not, and generate the first problem solving process from all characters in the region that belong to first problem solving steps, thereby eliminating the interference of the other characters.
In the above embodiment, at least one of the area recognition model, the character recognition model and the picture recognition model may be deployed in a local user terminal or in a cloud server, so as to reasonably utilize resources in the user terminal and the server, increase the recognition speed as much as possible, and improve the recognition efficiency.
After obtaining the first problem solving process, as shown in fig. 1, the problem assisting method may further include:
and step S200, selecting a second problem solving process which is closest to the first problem solving process from all problem solving processes corresponding to the problems.
Wherein the second problem solving process may include one or more second problem solving steps. The same question may have different solutions, corresponding to different second problem solving processes. In any solution, some steps may be omitted or the order of some steps may be changed. Therefore, in order to accurately find the second problem solving process closest to the first problem solving process, all possible problem solving processes can be enumerated for the question, and the first problem solving process can be compared with all subsets of every enumerated problem solving process. A subset of a problem solving process includes the final problem solving step, which is the answer to the question, and at least one intermediate problem solving step that is not the answer.
For example, assume that a question has two different solutions, solution 1 and solution 2. For solution 1, the set of all problem solving steps is {A1, B1, C1, D1, E}, where E is also the answer to the question; the subsets of problem solving steps corresponding to solution 1 then include {A1, E}, {B1, E}, {C1, E}, {D1, E}, {A1, B1, E}, {A1, C1, E}, {A1, D1, E}, {B1, C1, E}, {B1, D1, E}, {C1, D1, E}, {A1, B1, C1, E}, {A1, B1, D1, E}, {A1, C1, D1, E}, {B1, C1, D1, E}, and {A1, B1, C1, D1, E}. For solution 2, the set of all problem solving steps is {A2, B2, E}, and the subsets of problem solving steps corresponding to solution 2 include {A2, E}, {B2, E}, and {A2, B2, E}. In the subsequent steps, the subset closest to the first problem solving process given by the user is found among these 18 subsets, and each first problem solving step is compared with the corresponding second problem solving step.
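The 18 candidate subsets in this example can be enumerated programmatically; the sketch below assumes only what is stated above, namely that every subset keeps the final (answer) step and at least one intermediate step.

```python
# Enumerate the candidate subsets of a solution: the answer step E is always kept,
# together with every non-empty combination of the intermediate steps.

from itertools import combinations

def candidate_subsets(intermediate_steps, final_step):
    subsets = []
    for r in range(1, len(intermediate_steps) + 1):
        for combo in combinations(intermediate_steps, r):
            subsets.append(list(combo) + [final_step])
    return subsets

solution_1 = candidate_subsets(["A1", "B1", "C1", "D1"], "E")  # 15 subsets
solution_2 = candidate_subsets(["A2", "B2"], "E")              # 3 subsets
assert len(solution_1) + len(solution_2) == 18
```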
In general, the effect of the order between the solving steps may not be considered in determining the second solving process. It is understood that the order between the solving steps may also be considered, if necessary, to determine the second solving process that is closest to the first solving process.
Specifically, as shown in fig. 2, step S200 may include:
step S210, comparing the first problem solving process with all subsets of all problem solving processes;
step S220, according to the comparison result between the first problem solving process and the subset, determining the subset which is closest to the first problem solving process; and
step S230, determining the problem solving process corresponding to the subset closest to the first problem solving process as the second problem solving process.
Wherein the determination of the closest subset may be based on the edit distance. As shown in fig. 3, step S210 may include:
step S211, determining a first feature vector corresponding to a first problem solving step in a first problem solving process by adopting a vector model;
step S212, determining the feature vectors corresponding to the problem solving steps in the subset by adopting the vector model; and
step S213 compares the first problem solving step and the problem solving step by calculating the edit distance between the first feature vector and the feature vector.
It is understood that the second problem solving step corresponding to the first problem solving step has the smallest edit distance from the first problem solving step.
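As a rough illustration of this matching rule, the sketch below computes a classic Levenshtein edit distance. One simplification should be noted: the disclosure computes the distance between feature vectors produced by a vector model, whereas here each problem solving step is represented directly by its character string.

```python
# Levenshtein edit distance between two step representations, and selection of the
# candidate step with the smallest distance to a given first problem solving step.

def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution (free if equal)
    return dp[-1]

def match_step(first_step, candidate_steps):
    """Return the candidate step closest (by edit distance) to `first_step`."""
    return min(candidate_steps, key=lambda step: edit_distance(first_step, step))
```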
In a specific example, assume that the question to be corrected is 2×(5+1), which has two different problem solving processes.
The first problem solving process is:
= 2×5 + 2×1
= 10 + 2×1
= 10 + 2
= 12
The second problem solving process is:
= 2×6
= 12
If the first problem solving process given by the user is:
= 10 + 1
= 11
then, by comparison, the user's two first problem solving steps are closest to the last two problem solving steps of the first process above, so that process can be determined as the second problem solving process for comparison in the subsequent steps.
When the question to be corrected is known, all problem solving processes corresponding to the question can be obtained according to the steps shown in fig. 4, which specifically include:
step S510, determining the question type from the question;
step S520, obtaining, based on the question type, a problem solving model corresponding to that type from a preset rule base; and
step S530, generating the problem solving processes of the question using the problem solving model (a sketch of such a rule-base lookup follows this list).
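A minimal sketch of this type-to-model dispatch follows, assuming a dictionary-based rule base; the rule-base keys and the placeholder solver functions are illustrative and are not defined by the disclosure.

```python
# Hypothetical sketch of steps S510-S530: classify the question type, then look up
# the corresponding problem solving model in a preset rule base.

def solve_fraction_sum(question): ...
def solve_linear_equation(question): ...
def solve_word_problem(question): ...

RULE_BASE = {
    "fraction_sum": solve_fraction_sum,        # calculation question: sum of fractions
    "linear_equation": solve_linear_equation,  # calculation question: equation in one unknown
    "word_problem": solve_word_problem,        # application question
}

def generate_solving_processes(question, classify_type):
    question_type = classify_type(question)    # e.g. the third neural network model
    solver = RULE_BASE[question_type]          # problem solving model from the rule base
    return solver(question)
```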
The region recognition model and character recognition model herein may be similar to those used above to determine the problem solving process.
Further, a third neural network model can be employed to determine topic types from topic content. Specifically, the input of the third neural network model can be topic content and the output can be topic type. The third neural network model may be obtained by pre-training the third neural network according to the input and output described above by any known method using a large number of training samples. The third neural network may be any known neural network, such as a deep convolutional neural network or the like.
In some embodiments, the problem solving model can include a calculation model for questions whose type is a calculation question, and a natural language processing model and/or a vector model for questions whose type is an application (word) question.
Specifically, when the question type is a calculation question, using the calculation model to generate the problem solving process of the question can include: obtaining the corresponding calculation model from the preset rule base according to the formal features of the question (such as the numbers, the highest power, the positions, the operation symbols, and the like), and generating the problem solving process according to that calculation model.
For example, if the identified calculation question asks to compute the sum of fractions shown in the expression of Figure BDA0003226352740000091, the formal feature of the question is determined to be a sum of fractions. The problem solving rule for a sum of fractions obtained from the preset rule base may, for example, sequentially include: unify the denominators of the fractions in the expression, combine them over the common denominator, compute the sum of the numerators, and reduce the resulting fraction. In other examples, a graphical problem solving process may also be generated. For example, if the identified calculation question asks for the value of x in the equation shown in Figure BDA0003226352740000092, then a graphical problem solving rule for a linear equation in one unknown can be obtained from the preset rule base; specifically, the rule is to draw the curves corresponding to the two sides of the equation (Figures BDA0003226352740000093 and BDA0003226352740000094, respectively) and take the x coordinate of their intersection point as the answer, thereby generating a graphical problem solving process.
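The fraction-sum rule sequence described above (unify the denominators, combine, sum the numerators, reduce) could be realized, for example, as in the sketch below; the use of Python's fractions module and the textual step format are assumptions for illustration only.

```python
# Generate human-readable solving steps for a sum of fractions, following the rule
# sequence: unify denominators, combine, sum the numerators, reduce the result.

from fractions import Fraction
from math import lcm

def fraction_sum_steps(fractions):
    steps = []
    common = lcm(*(f.denominator for f in fractions))
    scaled = [(f.numerator * common // f.denominator, common) for f in fractions]
    steps.append("= " + " + ".join(f"{n}/{d}" for n, d in scaled))  # unified denominators
    total = sum(n for n, _ in scaled)
    steps.append(f"= {total}/{common}")                             # numerators summed
    steps.append(f"= {Fraction(total, common)}")                    # reduced fraction
    return steps

print(fraction_sum_steps([Fraction(1, 2), Fraction(1, 6)]))
# ['= 3/6 + 1/6', '= 4/6', '= 2/3']
```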
When the question type is an application question, using the vector model to solve the question can include the following steps: performing feature extraction on the question content to generate a corresponding feature vector; retrieving, from a preset question bank, a standard vector that matches the corresponding feature vector; and generating the problem solving process based on the standard vector.
In some embodiments, the corresponding feature vector may be a two-dimensional feature vector, that is, a feature map, which may be generated by any method known in the art, for example by processing the image region of the application question with a deep convolutional neural network. One two-dimensional feature vector can be generated from the text of the application question and another two-dimensional feature vector can be generated from the pictures in the application question, and the two can be spliced (concatenated) to obtain the corresponding feature vector.
The feature extraction can be realized by adopting a fourth neural network model, that is, the input of the fourth neural network model can be topic content (including characters and pictures), and the output is a feature vector corresponding to the topic content. The fourth neural network model may be obtained by pre-training the fourth neural network according to the input and output described above by any known method using a large number of training samples. The fourth neural network may be any known neural network, such as a deep convolutional neural network or the like.
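As an illustration of splicing a text feature map with a picture feature map, a rough PyTorch sketch follows; the framework choice, layer shapes, and channel counts are assumptions, since the disclosure does not specify the architecture of the fourth neural network model.

```python
# Concatenate ("splice") a 2-D feature map extracted from the question text with one
# extracted from the question picture.

import torch
import torch.nn as nn

class TopicFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_branch = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # encodes rendered text
        self.image_branch = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # encodes the picture region

    def forward(self, text_map, picture):
        text_features = self.text_branch(text_map)
        picture_features = self.image_branch(picture)
        return torch.cat([text_features, picture_features], dim=1)     # spliced feature map

extractor = TopicFeatureExtractor()
features = extractor(torch.zeros(1, 1, 32, 32), torch.zeros(1, 3, 32, 32))  # shape (1, 16, 32, 32)
```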
The preset question bank may include a plurality of groups, and each group may include one or more vectors. These vectors are feature vectors generated by extracting features of topics of known application topics (for example, topics in a pre-collected application topic library). Any two vectors from the same group have the same length and any two vectors from different groups have different lengths.
Searching for the standard vector in the preset question bank may include: first, according to the length of the feature vector, finding the group in the preset question bank whose length matches that of the feature vector; and then searching within this length-matched group to find the standard vector. In this way, the standard vector that matches the feature vector corresponding to the application question can be found more quickly. In some embodiments, each group may have a respective index that matches (for example, is equal to) the length of the vectors in the group. The problem solving process of the application question can then be generated from the preset problem solving process associated with the standard vector.
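The length-indexed lookup can be sketched as follows; using Euclidean distance to choose the matching standard vector within a group is an assumption, since the disclosure only states that a matching vector is retrieved.

```python
# Group the question bank by vector length, then search only the group whose
# index (length) matches the query feature vector.

import math
from collections import defaultdict

def build_question_bank(entries):
    """`entries` is an iterable of (standard_vector, solving_process) pairs."""
    bank = defaultdict(list)
    for vec, solving_process in entries:
        bank[len(vec)].append((vec, solving_process))   # group index == vector length
    return bank

def retrieve_solving_process(bank, feature_vector):
    group = bank.get(len(feature_vector), [])           # only the length-matched group is searched
    if not group:
        return None
    _, solving_process = min(group, key=lambda entry: math.dist(entry[0], feature_vector))
    return solving_process
```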
Alternatively, when the question type is an application question, a natural language processing model can be used to analyze the language of the question and extract the key data, and the problem solving process is then given based on those data.
After determining a second problem solving process corresponding to the first problem solving process, as shown in fig. 1, the problem assisting method may further include:
step S300, comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively.
In some examples, each first problem solving step in the first problem solving process corresponds to a second problem solving step in the second problem solving process, and the two can be compared. In other examples, the first problem solving process may correspond to only a proper subset of the second problem solving process; in that case, each first problem solving step is compared only with its corresponding second problem solving step, and the other second problem solving steps are skipped. In the comparison, a first problem solving step may be compared with the corresponding second problem solving step character by character. In addition, to facilitate the comparison, the earlier processing may place each problem solving step of a problem solving process on its own line.
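A minimal sketch of the character-by-character comparison is given below; padding mismatched lengths with zip_longest is an assumption, as the disclosure does not specify how steps of different lengths are aligned.

```python
# Compare a first problem solving step with its corresponding second problem solving
# step character by character, returning the positions that differ.

from itertools import zip_longest

def compare_steps(first_step, second_step):
    mismatches = []
    for index, (a, b) in enumerate(zip_longest(first_step, second_step)):
        if a != b:
            mismatches.append(index)
    return mismatches

# Example: the user wrote "=10+1" but the matched second step is "=10+2"
print(compare_steps("=10+1", "=10+2"))  # [4]: the fifth character differs
```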
As shown in fig. 1, the title assisting method may further include:
step S400, generating a correction result for the question according to the comparison result between each first problem solving step and the corresponding second problem solving step.
Specifically, when the first problem solving step is the same as the corresponding second problem solving step, the first problem solving step is correct; when the first problem solving step is different from the corresponding second problem solving step, the first problem solving step may be wrong, and corresponding prompt information may be generated.
For example, in some embodiments, when a third character included in the first problem solving step is different from a corresponding fourth character in the corresponding second problem solving step, the third character may be marked to indicate to the user that there may be an error.
In other embodiments, when a third character included in a first problem solving step is different from the corresponding fourth character in the corresponding second problem solving step, the corresponding fourth character may also be displayed in a correction area around the incorrect third character to help the user correct it. It will be appreciated that the correct fourth character may be displayed around the incorrect third character in response to a user instruction. For example, by default only the possibly incorrect third character is marked, and the correct fourth character is displayed only after the user gives the relevant instruction through the user terminal, so that the user can first try to find and correct the error on their own.
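The marking behavior described above could be rendered, for example, as in the sketch below; the bracket notation and the on-demand hint format are purely illustrative and are not the display scheme of Figs. 5 and 6.

```python
# Mark the possibly wrong ("third") characters in the user's step and, only on
# request, show the correct ("fourth") characters from the matched second step.

def render_correction(first_step, second_step, mismatch_indices, show_correct=False):
    marked = "".join(f"[{ch}]" if i in mismatch_indices else ch
                     for i, ch in enumerate(first_step))
    if show_correct:
        hints = ", ".join(f"position {i}: '{second_step[i]}'"
                          for i in mismatch_indices if i < len(second_step))
        marked += f"  (correct: {hints})"
    return marked

print(render_correction("=10+1", "=10+2", [4]))                     # =10+[1]
print(render_correction("=10+1", "=10+2", [4], show_correct=True))  # =10+[1]  (correct: position 4: '2')
```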
In some embodiments, to help the user develop good problem solving habits and avoid omitting important problem solving steps, the second problem solving process may also include at least one necessary problem solving step. In this case, generating the correction result for the question according to the comparison result between the first problem solving steps and the corresponding second problem solving steps may further include: generating prompt information to remind the user that an important step is missing when no first problem solving step corresponding to the at least one necessary problem solving step exists in the first problem solving process.
Figs. 5 and 6 show correction results displayed in some specific examples. It is to be understood that an error in a problem solving step, or the correction of that step, may be displayed in different ways, which are not limited here. For example, in Fig. 5 the wrong step may be struck through with diagonal lines and the correct step displayed next to it; in Fig. 6 the wrong step may be marked in a different color.
In addition, the display interface of the user terminal may further provide other buttons to receive corresponding user instructions. For example, the user may select to correct a particular question by tapping the corresponding area of the interface, or may tap a button to move on to correcting the next question, save a wrongly answered question to a mistake notebook, record a note for it, and the like, which is not limited here.
The present disclosure also provides a topic assist device, as shown in FIG. 7, the topic assist device 1000 can include a memory 1010, a processor 1020, and instructions stored on the memory 1010, which when executed by the processor 1020, implement the steps of the topic assist method described above.
Among other things, the processor 1020 may perform various actions and processes in accordance with instructions stored in the memory 1010. In particular, the processor 1020 may be an integrated circuit chip having signal processing capabilities. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like, of the X86 architecture, the ARM architecture, or the like.
The memory 1010 stores executable instructions that, when executed by the processor 1020, perform the topic assist method described above. The memory 1010 can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. Volatile memory can be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memories of the methods described herein are intended to include, without being limited to, these and any other suitable types of memory.
The disclosure also provides a topic auxiliary system. As shown in FIG. 8, the topic assistance system 1100 can include a user terminal 1110 and a server 1120.
The user terminal 1110 may be configured to obtain a topic image of a topic and display a correction result of the topic. At least one of the user terminal 1110 and the server 1120 can be configured to obtain a first problem solving process of the user on the problem according to the problem image. The server 1120 can be configured to select a second solving process closest to the first solving process from all solving processes corresponding to the topic, wherein the second solving process comprises one or more second solving steps; comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and generating a correction result of the questions according to the comparison result of the first question solving step and the corresponding second question solving step. It is to be understood that at least one of the user terminal and the server can also perform other steps described above with respect to the topic assistance method.
The present disclosure also proposes a non-transitory computer-readable storage medium having stored thereon instructions that, when executed, may implement the steps in the title-assist method described above.
Similarly, non-transitory computer readable storage media in embodiments of the present disclosure may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. It should be noted that the computer-readable storage media described herein are intended to comprise, without being limited to, these and any other suitable types of memory.
The present disclosure also contemplates a computer program product that can include instructions that, when executed by a processor, can implement the steps of the title assisting method as described above.
The instructions may be any set of instructions to be executed directly by one or more processors, such as machine code, or indirectly, such as scripts. The terms "instructions," "applications," "processes," "steps," and "programs" may be used interchangeably herein. The instructions may be stored in an object code format for direct processing by one or more processors, or in any other computer language, including scripts or collections of independent source code modules that are interpreted or compiled in advance as needed. The instructions may include, for example, instructions that cause one or more processors to implement the neural networks described herein. The functions, methods, and routines of the instructions are explained in more detail elsewhere herein.
In addition, embodiments of the present disclosure may also include the following examples:
1. a title assistant method comprises the following steps:
acquiring a first problem solving process of a user for a problem, wherein the first problem solving process comprises one or more first problem solving steps;
selecting a second problem solving process which is closest to the first problem solving process from all problem solving processes corresponding to the problems, wherein the second problem solving process comprises one or more second problem solving steps;
comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and
and generating a correction result of the questions according to the comparison result of the first question solving step and the corresponding second question solving step.
2. The topic assistant method of 1, the one or more first problem solving steps comprising at least one intermediate problem solving step, wherein the intermediate problem solving step does not correspond to an answer to a topic.
3. According to the topic auxiliary method of 1, wherein obtaining the first problem solving process of the user for the topic comprises the following steps:
acquiring a question image;
identifying a problem solving process area in the problem image by adopting an area identification model;
identifying a first character in the problem solving process area by adopting a character identification model; and
a first problem solving process is generated based on the identified first character.
4. According to the topic auxiliary method of 1, wherein obtaining all problem solving processes corresponding to the topic comprises the following steps:
acquiring a question image;
identifying a question stem area in the question image by adopting an area identification model;
recognizing a second character in the stem region by adopting a character recognition model; and
and determining a question stem according to the recognized second character, and generating all question solving processes according to the question stem.
5. The title assisting method according to 3 or 4, further comprising:
identifying a picture area in the subject image by adopting an area identification model; and
a first problem solving process or a problem stem is determined according to the picture area.
6. According to the topic assistant method of 1, selecting a second solving process closest to the first solving process from all solving processes corresponding to the topics comprises:
comparing the first solving process with all subsets of all solving processes, wherein the subsets of solving processes comprise a final solving step as an answer to the question and at least one intermediate solving step which is not an answer;
determining the subset which is closest to the first problem solving process according to the comparison result of the first problem solving process and the subset; and
and determining the problem solving process corresponding to the subset closest to the first problem solving process as a second problem solving process.
7. According to the topic assistant method of 6, comparing the first solving process to all subsets of all solving processes comprises:
determining a first feature vector corresponding to a first problem solving step in a first problem solving process by adopting a vector model;
determining a feature vector corresponding to the problem solving step in the subset by adopting a vector model; and
comparing the first problem solving step with the problem solving step by calculating an edit distance between the first feature vector and the feature vector;
wherein, the problem solving step corresponding to the first problem solving step has the minimum editing distance with the first problem solving step.
8. According to the topic assistant method of 1, comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively, comprises:
the first problem solving step is compared with the corresponding second problem solving step character by character.
9. The title assistant method according to 1, wherein generating a correction result for a title according to a comparison result between a first problem solving step and a corresponding second problem solving step comprises:
marking the third character when the third character in the first solving step is different from the corresponding fourth character in the corresponding second solving step.
10. The title assisting method according to 9, wherein generating a correction result for the title according to the comparison result between the first solving step and the corresponding second solving step further comprises:
and when the third character in the first problem solving step is different from the corresponding fourth character in the corresponding second problem solving step, displaying the corresponding fourth character in a correction area around the third character.
11. According to the topic assistant method of 1, the second problem solving process comprises at least one necessary problem solving step;
generating a correction result for the topic according to the comparison result of the first topic solving step and the corresponding second topic solving step comprises:
and when the first problem solving step corresponding to at least one necessary problem solving step does not exist in the first problem solving process, generating prompt information.
12. The title assisting method according to 1, the title assisting method further comprising:
determining the type of the title according to the title;
based on the topic type, obtaining a problem solving model corresponding to the topic type from a preset rule base;
a problem solving model is employed to generate a problem solving process for the problem.
13. According to the topic support method of 12, the problem solving model includes a calculation model for a topic with a topic type of a calculation topic, and a natural language processing model and/or a vector model for a topic with a topic type of an application topic.
14. According to the topic assistant method of 13, when the topic type is a computational topic, the process of solving the topic using the computational model comprises:
and acquiring a corresponding calculation model from a preset rule base according to the form characteristics of the questions, and generating a question solving process according to the calculation model.
15. According to the topic assistant method of 13, when the topic type is an application topic, the process of solving the topic using a vector model comprises:
when the topic type is an application topic, performing feature extraction on the topic content to generate a third feature vector;
retrieving a standard vector matched with the third feature vector from a preset question bank; and
based on the standard vector, a problem solving process is generated.
16. A title assistant comprising a memory, a processor, and instructions stored on the memory which, when executed by the processor, implement the steps of a title assistant method according to any one of 1 to 15.
17. A topic auxiliary system comprises a user terminal and a server, wherein:
the user terminal is configured to obtain a topic image of a topic and display a correction result of the topic;
at least one of the user terminal and the server is configured to obtain a first problem solving process of a user on a problem according to the problem image, wherein the first problem solving process comprises one or more first problem solving steps;
the server is configured to select a second problem solving process which is closest to the first problem solving process from all problem solving processes corresponding to the problems, wherein the second problem solving process comprises one or more second problem solving steps; comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and generating a correction result of the questions according to the comparison result of the first question solving step and the corresponding second question solving step.
18. A non-transitory computer readable storage medium having stored thereon instructions which, when executed by a processor, implement the steps of the topic auxiliary method according to any one of 1 to 15.
19. A computer program product comprising instructions which, when executed by a processor, carry out the steps of the topic auxiliary method according to any one of 1 to 15.
It is to be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In general, the various example embodiments of this disclosure may be implemented in hardware or special purpose circuits, software, firmware, logic or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of embodiments of the disclosure have been illustrated or described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The terms "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.
As used herein, the word "exemplary" means "serving as an example, instance, or illustration," and not as a "model" that is to be replicated accurately. Any implementation exemplarily described herein is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, the disclosure is not limited by any expressed or implied theory presented in the preceding technical field, background, brief summary or the detailed description.
As used herein, the term "substantially" is intended to encompass any minor variation resulting from design or manufacturing imperfections, device or component tolerances, environmental influences, and/or other factors. The word "substantially" also allows for differences from a perfect or ideal situation due to parasitics, noise, and other practical considerations that may exist in a practical implementation.
In addition, the foregoing description may refer to elements or nodes or features being "connected" or "coupled" together. As used herein, unless expressly stated otherwise, "connected" means that one element/node/feature is directly connected to (or directly communicates with) another element/node/feature, either electrically, mechanically, logically, or otherwise. Similarly, unless expressly stated otherwise, "coupled" means that one element/node/feature may be mechanically, electrically, logically, or otherwise joined to another element/node/feature in a direct or indirect manner to allow for interaction, even though the two features may not be directly connected. That is, to "couple" is intended to include both direct and indirect joining of elements or other features, including connection with one or more intermediate elements.
In addition, "first," "second," and like terms may also be used herein for reference purposes only, and thus are not intended to be limiting. For example, the terms "first," "second," and other such numerical terms referring to structures or elements do not imply a sequence or order unless clearly indicated by the context.
It will be further understood that the terms "comprises/comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the present disclosure, the term "providing" is used broadly to encompass all ways of obtaining an object, and thus "providing an object" includes, but is not limited to, "purchasing," "preparing/manufacturing," "arranging/setting," "installing/assembling," and/or "ordering" the object, and the like.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. The various embodiments disclosed herein may be combined in any combination without departing from the spirit and scope of the present disclosure. It will also be appreciated by those skilled in the art that various modifications may be made to the embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A title assisting method, comprising:
obtaining a first problem solving process of a user for a problem, wherein the first problem solving process comprises one or more first problem solving steps;
selecting a second problem solving process which is closest to the first problem solving process from all problem solving processes corresponding to the problems, wherein the second problem solving process comprises one or more second problem solving steps;
comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and
and generating a correction result of the questions according to the comparison result of the first question solving step and the corresponding second question solving step.
2. The title assisting method of claim 1, wherein the one or more first solving steps comprise at least one intermediate solving step, wherein the intermediate solving step does not correspond to an answer to the title.
3. The topic assistance method of claim 1, wherein selecting a second problem solving process that is closest to the first problem solving process from among all problem solving processes corresponding to the topic comprises:
comparing the first solving process with all subsets of all solving processes, wherein a subset of solving processes comprises a final solving step as an answer to the question and at least one intermediate solving step that is not the answer;
determining a subset which is closest to the first problem solving process according to the comparison result of the first problem solving process and the subset; and
and determining the problem solving process corresponding to the subset closest to the first problem solving process as a second problem solving process.
4. The topic assistance method of claim 3, wherein comparing the first problem solving process with all subsets of all problem solving processes comprises:
determining, using a vector model, a first feature vector corresponding to a first problem solving step in the first problem solving process;
determining, using the vector model, a feature vector corresponding to a problem solving step in the subset; and
comparing the first problem solving step with the problem solving step in the subset by calculating an edit distance between the first feature vector and the feature vector;
wherein the problem solving step in the subset that corresponds to the first problem solving step is the one having the minimum edit distance from the first problem solving step.
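(Illustrative note, not part of the claims: claim 4 does not fix a particular vector model, so the sketch below uses a trivial character-to-integer mapping as a stand-in; a real implementation would more likely use a learned encoder. Only the use of edit distance to pair steps follows the claim; everything else is an assumption for the example.)

    from typing import Dict, List, Sequence

    _vocab: Dict[str, int] = {}

    def vectorize(step: str) -> List[int]:
        # Toy "vector model": map each character of the step to an integer ID.
        return [_vocab.setdefault(ch, len(_vocab)) for ch in step]

    def edit_distance_seq(a: Sequence[int], b: Sequence[int]) -> int:
        # Levenshtein distance over two ID sequences.
        prev = list(range(len(b) + 1))
        for i, xa in enumerate(a, 1):
            curr = [i]
            for j, xb in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (xa != xb)))
            prev = curr
        return prev[-1]

    def corresponding_step(first_step: str, subset_steps: List[str]) -> str:
        # Per claim 4, the subset step corresponding to the first problem solving
        # step is the one with the minimum edit distance to it.
        first_vec = vectorize(first_step)
        return min(subset_steps, key=lambda s: edit_distance_seq(first_vec, vectorize(s)))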
5. The topic assistance method of claim 1, wherein comparing each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively, comprises:
comparing the first problem solving step with the corresponding second problem solving step character by character.
6. The topic assistance method of claim 1, wherein generating the correction result of the topic according to the result of comparing the first problem solving step with the corresponding second problem solving step comprises:
marking a third character in the first problem solving step when the third character differs from a corresponding fourth character in the corresponding second problem solving step.
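(Illustrative note, not part of the claims: a minimal sketch of the character-by-character comparison in claim 5 and the marking in claim 6. The square-bracket markers are an arbitrary choice for the example; a real system might instead return character positions to be highlighted in the rendered correction result.)

    from itertools import zip_longest

    def mark_differences(first_step: str, second_step: str) -> str:
        # Compare the steps character by character (claim 5) and mark every
        # character of the user's step that differs from the reference (claim 6).
        marked = []
        for user_char, reference_char in zip_longest(first_step, second_step, fillvalue=""):
            if user_char and user_char != reference_char:
                marked.append("[" + user_char + "]")
            else:
                marked.append(user_char)
        return "".join(marked)

    # e.g. mark_differences("7*2=15", "7*2=14") returns "7*2=1[5]"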
7. A topic assistance device, comprising a memory, a processor, and instructions stored on the memory, wherein the instructions, when executed by the processor, implement the steps of the topic assistance method according to any one of claims 1 to 6.
8. A topic assistance system, comprising a user terminal and a server, wherein:
the user terminal is configured to obtain a topic image of a topic and display a correction result of the topic;
at least one of the user terminal and the server is configured to obtain a first problem solving process of a user for the topic according to the topic image, wherein the first problem solving process comprises one or more first problem solving steps; and
the server is configured to: select, from all problem solving processes corresponding to the topic, a second problem solving process that is closest to the first problem solving process, wherein the second problem solving process comprises one or more second problem solving steps; compare each of the one or more first problem solving steps with a corresponding second problem solving step of the one or more second problem solving steps, respectively; and generate a correction result of the topic according to the result of comparing each first problem solving step with the corresponding second problem solving step.
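(Illustrative note, not part of the claims: the sketch below shows one possible division of labour between the user terminal and the server of claim 8. The class and method names are assumptions, the recognition of the handwriting in the topic image is stubbed out as an injected callable, and the grading reuses process_distance from the sketch following claim 1.)

    from typing import Callable, List

    class Server:
        def __init__(self, solving_processes: List[List[str]]):
            # All stored problem solving processes for the topic.
            self.solving_processes = solving_processes

        def grade(self, first_steps: List[str]) -> List[bool]:
            # Select the closest second process and compare step by step,
            # reusing process_distance from the sketch following claim 1.
            second_steps = min(self.solving_processes,
                               key=lambda p: process_distance(first_steps, p))
            return [a == b for a, b in zip(first_steps, second_steps)]

    class UserTerminal:
        def __init__(self, server: Server, recognizer: Callable[[bytes], List[str]]):
            self.server = server
            self.recognizer = recognizer  # e.g. an OCR model; a stub in this sketch

        def submit(self, topic_image: bytes) -> List[bool]:
            first_steps = self.recognizer(topic_image)   # obtain the first solving process
            correction = self.server.grade(first_steps)  # the server performs the comparison
            return correction                            # the terminal displays this result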
9. A non-transitory computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the steps of the topic assistance method according to any one of claims 1 to 6.
10. A computer program product comprising instructions which, when executed by a processor, implement the steps of the topic assistance method according to any one of claims 1 to 6.
CN202110972468.7A 2021-08-24 2021-08-24 Topic auxiliary method, topic auxiliary device and topic auxiliary system Pending CN113610084A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110972468.7A CN113610084A (en) 2021-08-24 2021-08-24 Topic auxiliary method, topic auxiliary device and topic auxiliary system
PCT/CN2022/111171 WO2023024898A1 (en) 2021-08-24 2022-08-09 Problem assistance method, problem assistance apparatus and problem assistance system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110972468.7A CN113610084A (en) 2021-08-24 2021-08-24 Topic auxiliary method, topic auxiliary device and topic auxiliary system

Publications (1)

Publication Number Publication Date
CN113610084A true CN113610084A (en) 2021-11-05

Family

ID=78309209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110972468.7A Pending CN113610084A (en) 2021-08-24 2021-08-24 Topic auxiliary method, topic auxiliary device and topic auxiliary system

Country Status (2)

Country Link
CN (1) CN113610084A (en)
WO (1) WO2023024898A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1115360A (en) * 1997-06-24 1999-01-22 Matsushita Electric Ind Co Ltd Correction and comment lessoning method
CN108172050B (en) * 2017-12-26 2020-12-22 科大讯飞股份有限公司 Method and system for correcting answer result of mathematic subjective question
CN112184505A (en) * 2020-09-30 2021-01-05 北京有竹居网络技术有限公司 Information processing method and device and computer storage medium
CN112509404A (en) * 2020-11-19 2021-03-16 江苏乐易学教育科技有限公司 Teaching system and teaching method for space geometric thinking process
CN113127682A (en) * 2021-04-15 2021-07-16 杭州大拿科技股份有限公司 Topic presentation method, system, electronic device, and computer-readable storage medium
CN113610084A (en) * 2021-08-24 2021-11-05 杭州大拿科技股份有限公司 Topic auxiliary method, topic auxiliary device and topic auxiliary system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024898A1 (en) * 2021-08-24 2023-03-02 杭州大拿科技股份有限公司 Problem assistance method, problem assistance apparatus and problem assistance system

Also Published As

Publication number Publication date
WO2023024898A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
WO2020177531A1 (en) Question assistance method and system
CN109710590B (en) Error problem book generation method and device
CN109271401B (en) Topic searching and correcting method and device, electronic equipment and storage medium
CN109815932B (en) Test paper correcting method and device, electronic equipment and storage medium
WO2021073332A1 (en) Method and apparatus for assisting maths word problem
CN112990180B (en) Question judging method, device, equipment and storage medium
CN110781648A (en) Test paper automatic transcription system and method based on deep learning
CN112434690A (en) Method, system and storage medium for automatically capturing and understanding elements of dynamically analyzing text image characteristic phenomena
CN112347997A (en) Test question detection and identification method and device, electronic equipment and medium
CN116311279A (en) Sample image generation, model training and character recognition methods, equipment and media
WO2022127425A1 (en) Question assistance method, apparatus and system
JP7422548B2 (en) Label noise detection program, label noise detection method, and label noise detection device
CN112115907A (en) Method, device, equipment and medium for extracting structured information of fixed layout certificate
CN115061769A (en) Self-iteration RPA interface element matching method and system for supporting cross-resolution
CN113610068B (en) Test question disassembling method, system, storage medium and equipment based on test paper image
WO2023024898A1 (en) Problem assistance method, problem assistance apparatus and problem assistance system
CN113239909B (en) Question processing method, device, equipment and medium
CN113723367B (en) Answer determining method, question judging method and device and electronic equipment
CN115204366A (en) Model generation method and device, computer equipment and storage medium
CN111832550B (en) Data set manufacturing method and device, electronic equipment and storage medium
US10095802B2 (en) Methods and systems for using field characteristics to index, search for, and retrieve forms
CN113837167A (en) Text image recognition method, device, equipment and storage medium
Nguyen et al. Handwriting recognition and automatic scoring for descriptive answers in Japanese language tests
CN113407676A (en) Title correction method and system, electronic device and computer readable medium
KR20220118579A (en) System for providing tutoring service using artificial intelligence and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination