CN116758554A - Test question information extraction method and device, electronic equipment and storage medium - Google Patents

Test question information extraction method and device, electronic equipment and storage medium

Info

Publication number
CN116758554A
CN116758554A (application number CN202310674194.2A)
Authority
CN
China
Prior art keywords
test question
illustration
information
test
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310674194.2A
Other languages
Chinese (zh)
Inventor
兴百桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xingtong Technology Co ltd
Original Assignee
Shenzhen Xingtong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xingtong Technology Co ltd filed Critical Shenzhen Xingtong Technology Co ltd
Priority to CN202310674194.2A priority Critical patent/CN116758554A/en
Publication of CN116758554A publication Critical patent/CN116758554A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/191Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19173Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9532Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/18Extraction of features or characteristics of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • G06V30/413Classification of content, e.g. text, photographs or tables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a test question information extraction method and device, an electronic device and a storage medium. The method includes: determining an initial test question type, an initial illustration type and an illustration image of a test question carrying an illustration; acquiring test question feature information based on the initial test question type, the initial illustration type and the illustration image, the test question feature information at least comprising illustration feature information; and acquiring test question information based on the illustration feature information. When test question information of an illustrated test question to be searched is determined based on the method of the exemplary embodiments of the disclosure, and recommended test questions are obtained from a test question library based on the test question information to be searched, test questions can be screened efficiently and the accuracy of test question search is improved, so that test questions are recommended to the user accurately.

Description

Test question information extraction method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a method and a device for extracting test question information, electronic equipment and a storage medium.
Background
In recent years, with the development of internet education, more and more users search a test question library for test questions to obtain solution ideas or answers, and the most commonly used approach is to photograph a question and search for it.
In the related art, a user uploads a test question image, a question answering system acquires the test question image, and the question stems, answers and analyses of several test questions most similar to the uploaded one are pushed to the user for reference; for test questions containing an illustration, a search-by-image technique is introduced to retrieve the test question image.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a method for extracting test question information, including:
determining an initial test question type, an initial illustration type and an illustration image of the test question with the illustration;
acquiring test question feature information based on the initial test question type, the initial illustration type and the illustration image, wherein the test question feature information at least comprises illustration feature information;
and acquiring test question information based on the illustration characteristic information.
According to another aspect of the present disclosure, there is provided a test question recommending method, including:
determining test question information of a to-be-searched illustration test question based on the method disclosed by the exemplary embodiment of the disclosure;
and acquiring recommended test questions from a test question library based on the test question information to be searched.
According to another aspect of the present disclosure, there is provided an extraction apparatus of test question information, including:
the determining module is used for determining an initial test question type, an initial illustration type and an illustration image of the test question with the illustration;
The acquisition module is used for acquiring test question feature information based on the initial test question type, the initial illustration type and the illustration image, wherein the test question feature information at least comprises illustration feature information;
the obtaining module is also used for obtaining test question information based on the illustration characteristic information.
According to another aspect of the present disclosure, there is provided a test question recommending apparatus including:
the determining module is used for determining test question information of the to-be-searched insert test questions based on the method disclosed by the exemplary embodiment of the disclosure;
and the acquisition module is used for acquiring recommended test questions from the test question library based on the test question information to be searched.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor; and a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to an exemplary embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to an exemplary embodiment of the present disclosure.
According to one or more technical solutions provided by the exemplary embodiments of the present disclosure, an illustration image may be obtained from a test question carrying an illustration, and test question feature information including at least illustration feature information is obtained based on the initial test question type, the initial illustration type and the illustration image. The illustration feature information therefore not only contains the relevant features of the illustration image but is also fused with the initial test question type and the initial illustration type, so that the illustration image is analyzed more accurately and interference from other factors in the illustrated test question is avoided; the test question information is then obtained based on the illustration feature information. In this way, the embodiments of the present disclosure can strip the illustration image out of the illustrated test question, analyze the illustrated test question image as a whole and also analyze the illustration image alone; by combining the two analyses, interference from text information other than the illustration in the illustrated test question image is avoided, the ability to search illustrated test questions is improved, and the accuracy of test question information extraction is also improved. On this basis, when test question information of an illustrated test question to be searched is determined based on the method of the exemplary embodiments of the present disclosure, and recommended test questions are obtained from the test question library based on the test question information to be searched, test questions can be screened efficiently and the accuracy of test question search is improved, so that test questions are recommended to the user accurately.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
FIG. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented, according to an example embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a method for extracting test question information according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a text recognition flow schematic of a text recognition model of an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of a method for question recommendation in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 shows a flow diagram of a method of determining test question similarity according to an exemplary embodiment of the present disclosure;
fig. 6 is a schematic block diagram showing a functional module of an extraction apparatus of test question information according to an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic block diagram illustrating functional blocks of a test question recommending apparatus according to an exemplary embodiment of the present disclosure;
FIG. 8 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure;
fig. 9 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but are provided to provide a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before describing embodiments of the present disclosure, the following definitions are first provided for the relative terms involved in the embodiments of the present disclosure:
searching the images by the image recognition technology according to the color distribution, geometric shape, texture and other visual characteristics of the original images.
Forward reasoning starts with atomic statements in the knowledge base and applies reasoning rules in the forward direction to extract more data until the goal is reached.
Optical character recognition (Optical Character Recognition, OCR) refers to the process of analyzing and recognizing an image file of text material to obtain its text and layout information, i.e., the text in the image is recognized and returned in text form.
The differentiable binarization (Differentiable Binarization Network, DBNet) algorithm achieves adaptive thresholding over the whole heat map by inserting the binarization operation into the segmentation network for joint optimization.
A bounding box is a simple geometric volume that encloses a set of clustered points in a three-dimensional point cloud. A bounding box is established for the target point set so that the geometric properties of an obstacle can be extracted and passed to the tracking module as observations; the bounding box turns a scattered target point cloud into a regular object, making it easier for the decision module to plan a motion trajectory.
The text recognition network (Convolutional Recurrent Neural Network, CRNN) is mainly used for recognizing text sequences with indefinite lengths end to end, and does not need to cut single characters first, but converts the text recognition into a sequence learning problem depending on time sequence, namely image-based sequence recognition.
Convolutional neural networks (Convolutional Neural Network, CNN), which are a class of feedforward neural networks that contain convolutional computations and have a deep structure, are one of the representative algorithms for deep learning.
The recurrent neural network (Recurrent Neural Network, RNN) is a neural network that takes sequence data as input, recurses along the evolution direction of the sequence, and has all its nodes (recurrent units) connected in a chain.
The CTCLoss (Connectionist Temporal Classification Loss) loss function was designed to address situations where the labels of the training data and the outputs predicted by the network are not aligned.
A Long Short-Term Memory network (LSTM) is a recurrent neural network suited to processing and predicting important events with relatively long intervals and delays in a time series.
The word vector tool Word2Vec is an open source tool developed by Google for computing word vectors; it is a shallow neural network whose function is to convert words in natural language into dense vectors that a computer can understand.
The Huffman tree is also called an optimal binary tree, and is a binary tree with the shortest weighted path length.
The YOLOv5 network model is one of the most commonly used lightweight object detection models, implemented on the PyTorch framework, and contains five versions: YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x. YOLOv5s runs fast and has a small model size, making it easy to deploy on embedded devices for production.
CenterNet is a classical object detection algorithm proposed in the 2019 paper "Objects as Points"; it performs object detection and other extended tasks in an Anchor-Free manner. CenterNet regards object detection as a standard keypoint estimation problem: the target is represented as a single point at the center of its bounding box, and other attributes such as the size, dimension, orientation and pose of the target are regressed directly from the image features at the center point.
EfficientNet was proposed by Google Brain engineer Mingxing Tan and chief scientist Quoc V. Le in the paper "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks". The underlying network architecture of the model is designed using neural architecture search. Convolutional neural network models are typically trained under known hardware resources; with better hardware resources, better training results can be obtained by scaling up the network model. To study model scaling systematically, the Google Brain researchers proposed a new model scaling method for the basic EfficientNet network model, which uses simple and efficient compound coefficients to trade off network depth, width and input image resolution.
Global average pooling: pooling layers are common in neural networks, and there are four common pooling operations: mean pooling, max pooling, stochastic pooling and global average pooling. A pooling layer has a significant effect: it reduces the feature map size, thereby reducing the amount of computation and the GPU memory required.
MobileNet is a convolutional neural network for mobile and embedded applications proposed by Google in 2017. Its main application scenarios include smart phones, unmanned aerial vehicles, robots, autonomous driving, augmented reality and the like.
SqueezeNet is a lightweight and efficient CNN model proposed by Han et al.; it has 50x fewer parameters than AlexNet, yet its performance is close to that of AlexNet. Small models have many advantages over large models of comparable performance.
Elasticsearch is a very powerful open source search engine that can help us quickly find the required content from massive data. Elasticsearch combined with Kibana, Logstash and Beats forms the Elastic Stack (ELK), which is widely used in fields such as log data analysis and real-time monitoring. Elasticsearch is the core of the Elastic Stack and is responsible for storing, searching and analyzing data.
In recent years, as online learning systems are becoming more popular in educational environments, the number of online learners is increasing, and more student users use a test question library to search for test questions to obtain solution ideas or answers of the test questions. For example, the electronic equipment with the photographing function is used for acquiring the test question image to be searched, the test question image is uploaded to the server corresponding to the answering system, the server identifies the test question image to be searched, the most similar test questions are matched in the test question library, and the matched test questions and the test question analysis process are fed back to the student users.
However, in practical applications, most question answering systems search mainly on the basis of text, so the search results for illustrated test question images containing little text are poor, and the test questions fed back to students differ greatly from the content of the illustrated test question image. The general search-by-image technique is a fuzzy search whose basis is similarity to the original image; because of interference from text and other factors in the illustrated test question, and because the question bank contains a huge number of questions, the results obtained are often broad, coarse, or the search even fails. Therefore, after a student user photographs and searches a pure-picture test question or a test question that is mostly a picture, even if the original question exists in the question bank, the corresponding test question may not be matched, or the retrieved test questions are not the ones the student wants; the student thus cannot find the corresponding test question when searching illustrated test questions by photographing, which makes test question searching difficult and inefficient.
In view of the above problems, the present disclosure provides a method for extracting test question information. When a test question carries an illustration, the text in the illustrated test question image and the illustration in that image can be detected separately, the detection results and the original image data are input together into a test question image feature extraction module, which outputs the feature vector of the illustrated test question image; finally, a feature vector search library is used to retrieve the most similar test questions in the test question library according to this feature vector and feed them back to the student user.
Fig. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented, according to an example embodiment of the present disclosure. As shown in fig. 1, a system 100 of an exemplary embodiment of the present disclosure may include: user device 110, computing device 120, and data storage system 130.
As shown in fig. 1, the user device 110 may communicate with the computing device 120 over a communication network. The communication network may be a wired communication network or a wireless communication network. The wired communication network may be, for example, a network based on power line carrier technology; the wireless communication network may be a local area wireless network such as a WiFi or Zigbee network, or a wide area wireless network such as a mobile communication network or a satellite communication network.
As shown in fig. 1, the user equipment 110 may include a computer, a mobile phone, or an intelligent terminal such as an information processing center, and the user equipment 110 may be used as an image acquisition end of the test question image to be searched, and initiate a request to the computing device 120. The computing device 120 may be a cloud server, a network server, an application server, a management server, or other servers with data processing functions, for implementing the extraction method and the search method. The server can be internally provided with a processor, and the processor can comprise a text detection processor and an illustration detection processor so as to finish the tasks of extracting test question information and searching test questions.
As shown in FIG. 1, the data storage system 130 may store a database of test question information, which may be on the computing device 120 or on another network server. The data storage system 130 may be separate from the computing device 120 or may be integrated within the computing device 120.
In practical application, the computer device can acquire test question information based on the test question image, and can detect only characters if the test questions to be searched are test questions without illustration; if the test question to be searched is the test question with the illustration, not only the characters in the test question image to be searched are detected, but also the illustration information in the test question image to be searched is detected. And then inputting the detection result and original image data into a test question image feature extraction module to obtain feature vectors of the test question image, and constructing a database containing the test question feature vectors based on the feature vectors. Based on the above, when the student user can obtain the test question image to be searched through the user equipment and upload the test question image to be searched to the computing equipment through the communication network, the computer equipment can identify the test question image to be searched, obtain the feature vector of the test question image to be searched, search a plurality of test questions matched with the feature vector of the test question image to be searched in the database based on the feature vector of the test question image to be searched, and feed back the searched plurality of test questions and the test question analysis process to the student user according to the sequence from big to small in similarity for the student user to refer.
The method for extracting test question information according to the exemplary embodiments of the present disclosure may be applied to a server or a chip in the server, and the method according to the exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 2 shows a flowchart of a method for extracting test question information according to an exemplary embodiment of the present disclosure. The extraction method of the test question information of the exemplary embodiment of the disclosure comprises the following steps:
step 201: and determining the initial test question type, the initial illustration type and the illustration image of the test questions with the illustration. It should be understood that the subject with the illustration of the exemplary embodiments of the present disclosure may be a basic subject, a subject with various application subjects, a crowd of subjects, a student subject, a possible non-student subject, various professional qualification subjects, etc., but is not limited thereto.
In practical application, the initial test question type, the initial picture inserting type and the picture inserting image with the picture inserting test questions can be directly calibrated manually, and can also be determined through a network model. For example: after the test question image with the illustration is acquired based on the user equipment, the initial test question type can be determined based on the test question image with the illustration, meanwhile, the illustration position information can be determined based on the test question image with the illustration, and the illustration image is acquired from the test question with the illustration based on the illustration position information; an initial illustration type is determined based on the illustration image.
Step 202: based on the initial test question type, the initial illustration type and the illustration image, test question feature information is obtained, and the test question feature information at least comprises illustration feature information. In the process of obtaining the test question feature information based on the initial test question type, the initial illustration type and the illustration image, the initial test question type and the initial illustration type can provide reference basis for feature information extraction, so that the test question feature information extracted from the illustration image not only contains features of the illustration image, but also refers to features implied by the initial test question type and the initial illustration type, and the extracted test question feature information is ensured to be more comprehensive and accurate.
Step 203: and obtaining test question information based on the illustration characteristic information. When the test question information is obtained based on the illustration characteristic information, the test question information can be determined based on the illustration characteristic information and the illustration size information in order to enrich the content of the test question information, and in order to ensure that the test question information can be more fully referenced, a plurality of identical illustration size information can be copied for N copies, namely, the order of magnitude expansion is carried out, so that the data volume is increased, and the generalization capability of the model is improved. Based on the method, the test question information can be obtained, and the accuracy of the search result can be ensured when the test question information is used for searching the test questions in the test question library. Meanwhile, a test question library can be constructed based on the test question information, so that the search efficiency and the search accuracy can be ensured when the test question search is carried out.
Therefore, the embodiment of the disclosure not only can carry out overall analysis on the image with the illustration, but also can carry out analysis on the image with the illustration alone, and through the mutual combination analysis of the two, the interference of other text information except the illustration in the image of the illustration test question can be avoided, the capability of searching the illustration test question is improved, and the accuracy of extracting the test question information is also improved.
In one possible implementation manner, when determining a corresponding initial test question type based on a test question image with an illustration, the exemplary embodiment of the disclosure may identify test question text information of the test question image with an illustration based on a text recognition model, determine word vectors of a plurality of sub-texts contained in the test question text information based on the test question text information, and process the word vectors of the plurality of sub-texts based on a text classification model to obtain the initial test question type.
In practical applications, the text recognition model of the exemplary embodiments of the present disclosure may be an optical character recognition (OCR) model. Before the illustrated test question image is input into the optical character recognition model, it can be processed in at least one of the following ways, according to actual needs:
First, scale the illustrated test question image to an input size suitable for the text recognition model. For example, the image can be processed into a square or rectangle by scaling, or by scaling combined with blank padding, as determined by the actual situation.
Second, normalize the image data: the data is centered by removing the mean. According to convex optimization theory and knowledge of data probability distributions, centered data better conforms to the data distribution, which makes it easier to obtain a good generalization effect after training. The image data can also be normalized so that pixel values are converted from the range 0-255 to the range 0-1, which is convenient for subsequent processing.
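As a concrete reference, below is a minimal sketch of the preprocessing described above (padding to a square, resizing to the model input size, 0-1 scaling and mean centering); the 640-pixel target size and the white padding value are illustrative assumptions, not values fixed by the disclosure.

    import cv2
    import numpy as np

    def preprocess_question_image(img_bgr, target=640):
        # Pad to a square, resize to the model input size, scale to 0-1, then center.
        h, w = img_bgr.shape[:2]
        side = max(h, w)
        canvas = np.full((side, side, 3), 255, dtype=np.uint8)   # white padding (assumed)
        canvas[:h, :w] = img_bgr                                  # keep the original at the top-left
        resized = cv2.resize(canvas, (target, target))
        x = resized.astype(np.float32) / 255.0                    # 0-255 -> 0-1 normalization
        x -= x.mean(axis=(0, 1), keepdims=True)                   # mean removal (centering)
        return x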
Fig. 3 shows a text recognition flow diagram of a text recognition model according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the text recognition model may include a text detection module and a text line recognition module. And inputting the preprocessed test question image data with the illustration into a text detection module and carrying out forward reasoning, outputting text line coordinate information of each text line in the test question image with the illustration by the text detection module, acquiring a text line image from the test question with the illustration based on the text line coordinate information, and inputting the text line image into a text line identification module to obtain the text information of the test question.
For example, the text detection module of the exemplary embodiments of the present disclosure may be a DBNet text detection module based on the DBNet algorithm, where the network backbone of the DBNet text detection module may be a feature pyramid backbone network. On this basis, the illustrated test question image can be input into the feature pyramid backbone network, the outputs of the feature pyramid are up-sampled to the same size and concatenated to generate a feature map; a probability map and a threshold map are generated from the feature map, an approximate binary map is calculated from the probability map and the threshold map, and label generation is expanded according to the approximate binary map to form text boxes. In the training phase, supervision is applied to the threshold map, the probability map and the approximate binary map, the latter two sharing the same supervision; in the inference phase, the bounding boxes can easily be obtained from either of them, so the DBNet text detection module performs excellently in both efficiency and detection quality in the text detection field. It should be understood that other text detection modules may be used in practical applications, such as an EAST text detection module based on the EAST (Efficient and Accurate Scene Text detector) network, or a PSENet text detection module based on the Progressive Scale Expansion Network (PSENet).
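For reference, the approximate binary map mentioned above is computed from the probability map P and the threshold map T with the differentiable binarization formula of the DBNet paper; a small sketch follows (the amplification factor k = 50 is the paper's default value, not a value stated in this disclosure).

    import numpy as np

    def differentiable_binarization(prob_map, thresh_map, k=50):
        # Approximate binary map B = 1 / (1 + exp(-k * (P - T))).
        # This differentiable surrogate for hard thresholding lets the
        # threshold map be learned jointly with the segmentation network.
        return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))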
After determining the position coordinate information of each text line in the illustrated test question image, the exemplary embodiments of the present disclosure input the text line position coordinates and the illustrated test question image into the text line recognition module, which crops several line text images from the illustrated test question image according to the text boxes given by those coordinates. Because the illustrated test question image uploaded by a user may suffer from problems such as text inclination or text distortion, the cropped text line images need to be corrected so that their orientations are unified; for example, horizontal correction, perspective correction and the like may be applied to the line text images. The corrected line text images are then preprocessed by an image preprocessing module: blanks may be padded to scale them to a standard size or to fill missing parts of the corrected line images, and normalization or standardization may be applied. Finally, the corrected and preprocessed line text images are input into the text line recognition module for forward inference, and the text line recognition module outputs the test question text information corresponding to these line images.
For example, the text line recognition module of the exemplary embodiments of the present disclosure may be a CRNN model, which may include a convolution layer, a recurrent layer and a transcription layer. The convolution layer may be a CNN model and is mainly used to extract features from the input illustrated test question image, yielding a feature map of that image; the recurrent layer may be an RNN model that predicts the feature sequence with a bidirectional long short-term memory network, learning each feature vector in the feature sequence and outputting the predicted text information distribution; the transcription layer contains the CTCLoss loss function, so that the series of text information distributions obtained from the recurrent layer can be converted by the CTC algorithm into the final text sequence, i.e., the test question text information. It should be appreciated that a text line recognition module that introduces an attention mechanism, or another text line recognition module, may also be employed in practical applications.
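The following is a hedged PyTorch sketch of such a CRNN text-line recognizer; the layer sizes, the 32-pixel input height and the character-set size are illustrative assumptions, not parameters specified in the disclosure.

    import torch
    import torch.nn as nn

    class CRNNSketch(nn.Module):
        # Minimal CRNN: CNN feature extractor -> BiLSTM -> per-time-step class logits.
        def __init__(self, num_classes, img_h=32):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((2, 1), (2, 1)),        # pool height only, keep width resolution
            )
            feat_h = img_h // 8                       # image height after the three poolings
            self.rnn = nn.LSTM(256 * feat_h, 256, num_layers=2,
                               bidirectional=True, batch_first=True)
            self.fc = nn.Linear(512, num_classes)     # num_classes includes the CTC blank symbol

        def forward(self, x):                         # x: (B, 1, H, W) grayscale line image
            f = self.cnn(x)                           # (B, C, H', W')
            b, c, h, w = f.shape
            f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # sequence runs along the width axis
            out, _ = self.rnn(f)
            return self.fc(out)                       # (B, W', num_classes)

During training, these per-step logits would be log-softmaxed, transposed to time-major order and fed to torch.nn.CTCLoss together with the label sequences, matching the transcription-layer description above.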
In practical applications, after obtaining the test question text information output by the text recognition model, the word vectors of the several sub-texts contained in the test question text information can be determined based on that text. For example, the word vector tool Word2Vec can be used to convert the test question text information into word vectors of several sub-texts. The Word2Vec tool mainly comprises two models: the continuous bag-of-words model (continuous bag of words, CBOW) and Skip-gram. CBOW is trained to predict the target word from its context, while Skip-gram is trained to predict the surrounding words from the target word; on this basis, the test question text information can be converted into word vectors of several sub-texts through CBOW or Skip-gram.
For example, the test question text information is first segmented into words, the number of occurrences of each character is counted, a Huffman tree covering all the characters is constructed and a corresponding Huffman code is assigned to each character, and finally training is performed based on the CBOW model or the Skip-gram model to obtain the word vectors of the several sub-texts contained in the test question text information.
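A hedged sketch of this word-vector step using gensim is shown below; the use of jieba for word segmentation, the vector size and the example question stems are assumptions, not details stated in the disclosure. Setting hs=1 enables the hierarchical-softmax training that relies on the Huffman tree mentioned above.

    import jieba
    from gensim.models import Word2Vec

    questions = ["如图所示，求阴影部分的面积", "下列函数图象正确的是"]   # example stems (hypothetical)
    sentences = [jieba.lcut(q) for q in questions]                       # word segmentation

    # sg=0 selects CBOW, sg=1 selects Skip-gram; hs=1 uses the Huffman-tree hierarchical softmax
    model = Word2Vec(sentences, vector_size=128, window=5, min_count=1, sg=0, hs=1)
    word_vecs = [[model.wv[w] for w in s if w in model.wv] for s in sentences]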
After the word vectors of the several sub-texts contained in the test question text information are obtained, the word vector data of these sub-texts can be input into a trained text classification model, which performs forward inference on them and outputs the initial test question type. The test question types include, for example, a fill-in-the-blank question type, a multiple-choice question type, a calculation question type, and the like.
In one possible implementation, the text classification model may be a fastText classification model: when the sequence of word vectors of the several sub-texts is input into the fastText classification model, it outputs the initial test question type corresponding to that word vector sequence. It should be understood that different test question types can be labeled with numbers or letters, so the fastText text classification model can directly output the type number corresponding to each initial test question type; several illustrated test questions can then be classified based on these type numbers, so that the test question type information corresponding to each illustrated test question can be seen more clearly and simply.
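For orientation, here is a hedged sketch of a question-type classifier built with the open-source fastText library. Note that the disclosure describes feeding Word2Vec word vectors into the classifier, whereas the off-the-shelf fastText API trains directly on segmented text, so this is only an approximation of that step; the training-file path and the label names are hypothetical.

    import fasttext

    # Each line of train.txt looks like: "__label__choice 下列 函数 图象 正确 的 是 ..."
    model = fasttext.train_supervised(input="train.txt", epoch=25, wordNgrams=2)

    labels, probs = model.predict("如图 所示 求 阴影 部分 的 面积")
    print(labels[0], probs[0])   # e.g. ('__label__calculation', 0.97) -- illustrative output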
Exemplary embodiments of the present disclosure may also determine the illustration position information based on the illustrated test question image. The illustrated test question image is first preprocessed, and the preprocessed test question image data is then input into a trained illustration detection model, which may be a YOLOv5 object detection model; YOLOv5 is highly efficient in the object detection field and its detection quality is also relatively good. Forward inference is performed in the YOLOv5 object detection model on the preprocessed test question image data, and the model outputs the illustration position information contained in the test question image. It should be understood that the preprocessing is consistent with the processing of the image preprocessing module and is not repeated here, and that the illustration detection model may also be another single-stage object detection algorithm or the CenterNet object detection algorithm.
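A hedged sketch of illustration detection with YOLOv5 loaded through torch.hub is shown below; the custom weight file name "illustration_det.pt" and the input image path are hypothetical, and loading custom weights this way is one possible deployment, not the disclosure's prescribed one.

    import torch

    # load a YOLOv5 model fine-tuned to detect illustrations (weight file name is assumed)
    model = torch.hub.load("ultralytics/yolov5", "custom", path="illustration_det.pt")
    results = model("question_with_figure.jpg")       # forward inference on one question image
    boxes = results.xyxy[0]                            # rows of (x1, y1, x2, y2, conf, class)
    for x1, y1, x2, y2, conf, cls in boxes.tolist():
        print(f"illustration box: ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}) conf={conf:.2f}")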
In practical application, the above-mentioned illustration position information and illustration test question image may be input into the illustration type determining module, and the illustration image may be obtained from the test questions with illustration based on the illustration position information. For example, the corresponding illustration area data may be intercepted in the strip illustration test question image according to the illustration position information, and when the illustration detection module does not detect the illustration image in the strip illustration test question image, the whole strip illustration test question image is used as the illustration image, and then the initial illustration type is determined based on the illustration image.
In one possible implementation, the initial illustration type of the exemplary embodiments of the present disclosure may be determined by the illustration type determination module based on the illustration image, where the illustration type module includes a first feature extraction module, a mobile inverted bottleneck convolution (MBConv) module and a second feature extraction module, and the initial illustration type is thereby determined from the illustration image.
By way of example, the illustration type module may be an EfficientNet model. When the illustration image is input into the trained EfficientNet model, shallow image features of the illustration image are first extracted by the first feature extraction module; deeper image features are then obtained by applying the mobile inverted bottleneck convolution module to the shallow features; finally, the second feature extraction module processes the depth image features and performs forward inference to obtain the initial illustration type. The illustration type may include a chart type, a line-graph type, a geometric-figure type, and the like. It should be appreciated that the illustration type module may also be a MobileNet model, a SqueezeNet model, or the like.
In an optional manner, the mobile inverted bottleneck convolution first performs a squeeze operation on the feature map corresponding to the incoming shallow image features: a global average pooling operation along the channel dimension yields the global image feature of each channel of that feature map. An excitation operation is then performed on the global features, reducing the number of channels and the amount of computation, and the weights of the different channels are obtained through a sigmoid activation function; these channel weights are multiplied with the input feature map and fused, thereby producing the depth image features. The mobile inverted bottleneck convolution module in effect performs an attention operation over the channel dimension; this attention mechanism makes the illustration type module pay more attention to the channels carrying the most illustration information while suppressing unimportant channels, which guarantees the completeness and accuracy of the content information extracted from the illustration image and hence the accuracy of the initial illustration type judgment. It should be understood that the channel dimensions here relate to the illustration image height, width and number of channels, and that different illustration types may be labeled with numbers or letters so that the illustration type module can directly output the corresponding illustration type number.
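The squeeze-and-excitation step inside the MBConv block described above can be sketched in a few lines of PyTorch; the reduction ratio of 4 and the SiLU activation are typical EfficientNet choices assumed here, not values specified by the disclosure.

    import torch
    import torch.nn as nn

    class SqueezeExcite(nn.Module):
        # Channel attention: global average pooling -> channel reduction -> expansion -> sigmoid gate.
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)                      # squeeze: global average pooling
            self.fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1), nn.SiLU(),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
            )

        def forward(self, x):
            w = self.fc(self.pool(x))    # per-channel weights in (0, 1)
            return x * w                 # reweight the input feature map (channel-wise fusion)

The sigmoid gate is what lets informative channels pass through largely unchanged while weakly weighted channels are suppressed, which is the attention behavior the paragraph above attributes to the MBConv module.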
As can be seen, the exemplary embodiments of the present disclosure detect the text portion in the test question image with an illustration based on the optical character recognition model, detect the illustration portion in the test question image with an illustration based on the illustration detection module and the illustration type determination module, and based on this, detect the same test question image with an illustration from two angles respectively, that is, ensure the accurate detection of the text portion in the test question image with an illustration, and also ensure the accurate detection of the illustration portion in the test question image with an illustration, so that the judgment for the type of the initial test question and the type of the initial illustration is more accurate.
The exemplary embodiments of the present disclosure can also splice the initial test question type, the initial illustration type and the illustration image to obtain first illustration splicing information. The first illustration splicing information may be described in matrix form, where the width of the matrix is W and the height of the matrix is H; W represents the width of the illustration image, and H is determined by the height of the illustration image, the number of test question classes and the number of illustration classes.
For example, the initial test question type, the initial illustration type and the illustration image are input into the illustration feature extraction model: the illustration image is scaled to a fixed size, the initial test question type, the initial illustration type and the illustration image data are spliced and converted into a W x H matrix, and the illustration splicing information in this matrix form is fed into the illustration feature extraction model to obtain the illustration feature information corresponding to the illustrated test question image. Training the illustration feature extraction model in this way allows it to attend to the feature information carried by the initial test question type and the initial illustration type. It should be understood that the specific scaling of the illustration image depends on the actual situation.
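The exact row encoding of the splicing matrix is not spelled out in the disclosure; the sketch below shows one possible realization consistent with the description above, in which one extra row per test question class and per illustration class is appended below the illustration pixels, and the row of the active class is set to 1.

    import numpy as np

    def build_splice_matrix(illus_gray, q_type_id, num_q_types, i_type_id, num_i_types):
        # illus_gray: H x W grayscale illustration image, already scaled to a fixed size.
        h, w = illus_gray.shape
        q_rows = np.zeros((num_q_types, w), dtype=np.float32)
        q_rows[q_type_id, :] = 1.0            # one row per question class, active class = 1
        i_rows = np.zeros((num_i_types, w), dtype=np.float32)
        i_rows[i_type_id, :] = 1.0            # one row per illustration class, active class = 1
        # result height = H + num_q_types + num_i_types, width = W, as described above
        return np.vstack([illus_gray.astype(np.float32), q_rows, i_rows])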
According to the embodiments of the present disclosure, the image data corresponding to the illustration can be standardized and the standardized data input into a trained illustration feature extraction model. The illustration feature extraction model may be a MobileNetV3 model with three outputs in total, namely a first output, a second output and a third output: the first output produces an illustration feature vector of fixed dimension, the second output produces the initial test question type, and the third output produces the initial illustration type.
The MobileNetV3 model adds two output heads, both fully connected layers, on top of the MobileNet network structure, which output the initial test question type and the initial illustration type respectively. On this basis, multi-task training of the illustration feature extraction model helps it extract better features, and it also has a certain correcting effect on the input initial test question type and initial illustration type. The test question feature information of the exemplary embodiments of the present disclosure thus includes a corrected initial test question type and a corrected initial illustration type, which can likewise be obtained from the MobileNetV3 model.
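Below is a hedged sketch of such a three-output extractor using the torchvision MobileNetV3-Small backbone; the feature dimension, the class counts and the 576-channel backbone output are assumptions tied to this particular backbone, not values given in the disclosure.

    import torch
    import torch.nn as nn
    import torchvision

    class MultiTaskIllustrationNet(nn.Module):
        # Shared backbone with three heads: illustration feature vector,
        # corrected question type, corrected illustration type.
        def __init__(self, feat_dim=256, num_q_types=3, num_i_types=3):
            super().__init__()
            backbone = torchvision.models.mobilenet_v3_small(weights=None)
            self.features = backbone.features
            self.pool = nn.AdaptiveAvgPool2d(1)
            c = 576                                   # channels out of mobilenet_v3_small.features
            self.feat_head = nn.Linear(c, feat_dim)   # output 1: illustration feature vector
            self.q_head = nn.Linear(c, num_q_types)   # output 2: corrected question type logits
            self.i_head = nn.Linear(c, num_i_types)   # output 3: corrected illustration type logits

        def forward(self, x):                         # x: (B, 3, H, W) standardized input
            f = self.pool(self.features(x)).flatten(1)
            return self.feat_head(f), self.q_head(f), self.i_head(f)

In multi-task training, the two classification heads would typically be supervised with cross-entropy losses alongside whatever loss drives the feature head, which is one way the shared backbone can "correct" the initial types as described above.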
In practical applications, the illustration feature vector can be processed further. For example, the test question information may comprise both the illustration feature information and illustration size information; on this basis, the illustration size information can be appended to the illustration feature vector, so that the illustration in the illustrated test question is characterized more accurately.
In an optional manner, the illustration feature information and the illustration size information can be spliced to obtain corresponding second illustration splicing information. The second illustration splicing information contains several identical copies of the illustration size information, where the illustration size information includes the aspect-ratio parameter of the illustration and/or the area-ratio parameter of the illustration image within the illustrated test question. The splicing rule may copy the aspect-ratio and area-ratio parameters N times and splice them onto the tail of the illustration feature vector, where N is an empirical value that can be adapted in practice to the dimension of the illustration feature vector. Finally, the test question information is determined based on each piece of second illustration splicing information, the corrected initial test question type and the corrected initial illustration type.
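As a small illustration of this second splicing step (with N = 8 as an assumed example value; the disclosure only says N is an empirical constant):

    import numpy as np

    def append_size_info(illus_feat, aspect_ratio, area_ratio, n_copies=8):
        # Copy the illustration size parameters N times and concatenate them
        # onto the tail of the illustration feature vector.
        size_block = np.tile([aspect_ratio, area_ratio], n_copies).astype(np.float32)
        return np.concatenate([illus_feat.astype(np.float32), size_block])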
Therefore, the illustration images can be stripped from the test questions with the illustration, the overall analysis can be carried out on the images with the illustration, the analysis can be carried out on the images with the illustration independently, the initial types of the test questions can be considered, the initial types of the illustration are considered, and the illustration size information is further fused, so that the illustration information of the test questions with the illustration is more accurately represented, and the accuracy and the comprehensiveness of illustration detection and the illustration feature extraction are ensured.
The corrected initial test question type, the corrected initial illustration type and the illustration feature information all correspond to the same illustrated test question image; when they are spliced together, complete test question information is obtained, and one piece of test question information corresponds to one illustrated test question image, i.e., one illustrated test question. On this basis, the illustration feature information not only contains the relevant features of the illustration image but is also fused with the initial test question type and the initial illustration type, so that the illustration image is analyzed more accurately, interference from other factors in the illustrated test question is avoided, and the accuracy of the illustration feature information is guaranteed.
The test question recommending method of the exemplary embodiments of the present disclosure may be applied to a server or a chip in the server, and the method of the exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 4 shows a flow chart of a test question recommending method according to an exemplary embodiment of the present disclosure. The test question recommending method of the exemplary embodiment of the disclosure comprises the following steps:
step 401: the method for extracting the test question information based on the embodiment of the disclosure determines the test question information of the to-be-searched illustration test questions.
In practical applications, a student user can submit an illustrated test question image to be searched to the server via the user equipment. The server first extracts test question information based on the extraction method of the exemplary embodiments of the present disclosure, determining the to-be-searched test question information corresponding to the test question to be searched, and then inputs that information into a feature vector search library, which compares it with the test question information contained in the test question library to determine matching test question information. The test question library includes test question information of a number of test questions determined by the extraction method of the exemplary embodiments of the present disclosure, and these test questions may be of various types, for example multiple-choice questions, application questions, geometry questions and the like. It should be understood that the feature vector search library adopted in the exemplary embodiments of the present disclosure is an Elasticsearch library; based on Elasticsearch's distributed document storage, every feature vector can be indexed and made searchable, and feature vectors can be combined at query time, which improves test question search efficiency. In practical applications, other vector search libraries such as Milvus can also be used.
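A hedged sketch of the feature-vector lookup with the Elasticsearch Python client is shown below; the index name, the dense_vector field name and the vector dimension are hypothetical, and the script_score cosine-similarity query is one common way to rank by vector similarity, not the disclosure's prescribed query.

    import numpy as np
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    # placeholder for the feature vector produced by the extraction pipeline above
    question_feature_vector = np.random.rand(272).astype(np.float32)

    query = {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                # 'illus_feature' must be mapped as a dense_vector field of the same dimension;
                # cosineSimilarity returns [-1, 1], so +1.0 keeps the score non-negative
                "source": "cosineSimilarity(params.qv, 'illus_feature') + 1.0",
                "params": {"qv": question_feature_vector.tolist()},
            },
        }
    }
    resp = es.search(index="question_bank", query=query, size=100)
    candidates = [hit["_source"] for hit in resp["hits"]["hits"]]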
Step 402: and acquiring recommended test questions from the test question library based on the test question information to be searched. The test question information of the to-be-searched test questions of the exemplary embodiment of the present disclosure may be determined by the second insert picture mosaic information in the extraction method of the test question information of the exemplary embodiment of the present disclosure. It should be understood that the test question information to be searched is test question information corresponding to the test question to be searched.
For example, a plurality of candidate test questions may be obtained from the test question library based on the illustration feature information in the to-be-searched test question information, and the similarity between the illustration feature information of each candidate test question and the illustration feature information of the to-be-searched test question information is then determined. If the similarity between a candidate test question and the to-be-searched test question information meets the screening condition, the candidate test question is determined to be a recommended test question; and if the identity information of a candidate test question matches the identity information of the to-be-searched test question information, the similarity between that candidate test question and the to-be-searched test question information is updated. It should be understood that the similarity comparison is performed between the test question information corresponding to the candidate test questions and the test question information corresponding to the to-be-searched test question.
In practical application, the screening condition may include that the similarity between the illustration feature information of a candidate test question and the illustration feature information of the to-be-searched test question information is greater than or equal to a preset threshold; or that the candidate test questions are ranked in descending order of similarity, the candidate matching the to-be-searched test question is the kth candidate test question, and k is greater than 0 and less than or equal to the total number of candidate test questions; or a comprehensive combination of the two.
When the screening condition includes that the similarity between the candidate test question and the to-be-searched test question information is greater than or equal to a preset threshold, for example when the similarity is greater than 95%, the candidate test question is determined to be a recommended test question, and the screened recommended test questions are fed back to the user equipment.
When the screening condition includes ranking the candidate test questions in descending order of similarity, and the candidate matching the to-be-searched test question is determined to be the kth candidate test question in the ranking result, the kth candidate test question and the candidate test questions ranked before it are determined to be recommended test questions, and the screened recommended test questions are fed back to the user equipment, where k is greater than 0 and less than or equal to the total number of candidate test questions.
When the screening condition includes both screening modes, a plurality of candidate test questions can first be acquired based on the preset threshold. Suppose 100 candidate test questions are screened out; these candidate test questions are then ranked according to their similarity with the to-be-searched test question information. When the 4th candidate test question is determined to match the to-be-searched test question, the 4th candidate test question and the three candidate test questions ranked before it are determined to be recommended test questions, and the 4 screened recommended test questions are fed back to the user equipment. It should be understood that in practical application the to-be-searched test question information may include multiple kinds of test question information, so that when determining the similarity, an averaging method can be adopted to obtain the mean similarity over the multiple kinds of test question information of the same to-be-searched test question and matching is performed according to this mean; alternatively, the similarities of the various kinds of test question information can be accumulated and the accumulated result used as the similarity for matching.
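The sketch below illustrates how the two screening conditions and the averaging of multiple kinds of test question information might be combined; the threshold value, the cut-off rank k, and the helper names are illustrative assumptions rather than the disclosed implementation.

def screen_recommendations(scored: list[tuple[str, float]],
                           threshold: float = 0.95,
                           k: int | None = None) -> list[str]:
    # scored: (question_id, similarity) pairs, already sorted high-to-low.
    # Condition 1: keep candidates whose similarity reaches the preset threshold.
    passed = [(qid, sim) for qid, sim in scored if sim >= threshold]
    # Condition 2 (optional): additionally cut off at the k-th ranked candidate,
    # i.e. keep the k-th candidate and everything ranked before it, where k is
    # assumed to be the rank of the candidate matching the searched question.
    if k is not None:
        passed = passed[:k]
    return [qid for qid, _ in passed]

def average_similarity(per_feature_sims: list[float]) -> float:
    # When one question yields several kinds of test question information,
    # their similarities can be averaged (or accumulated) before matching.
    return sum(per_feature_sims) / len(per_feature_sims)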
In an optional manner of the exemplary embodiments of the present disclosure, the identity information of a candidate test question at least includes the initial test question type and the initial illustration type of the candidate test question, and the identity information of the to-be-searched test question at least includes the initial test question type and the initial illustration type of the to-be-searched test question information. On this basis, during test question searching, when the test question information corresponding to a candidate test question is compared with the test question information corresponding to the to-be-searched test question, the initial test question type and the initial illustration type corresponding to the candidate test question and to the to-be-searched test question information are acquired. When the initial test question type of the candidate test question matches that of the to-be-searched test question information, t is added to the similarity between the candidate test question and the to-be-searched test question information; when they do not match, the similarity is unchanged. Likewise, when the initial illustration type of the candidate test question matches that of the to-be-searched test question information, t is added to the similarity; when they do not match, the similarity is unchanged. The similarity between each candidate test question and the to-be-searched test question information is updated on this basis. It should be understood that the increment t can be set according to the application and may be, for example, 10% or 5%.
In another optional manner, the identity information of the candidate test question further includes the corrected initial test question type and the corrected initial illustration type of the candidate test question, and the identity information of the to-be-searched test question further includes the corrected initial test question type and the corrected initial illustration type of the to-be-searched test question. On this basis, during test question searching, when the test question information corresponding to a candidate test question is compared with the test question information corresponding to the to-be-searched test question, after a plurality of recommended test questions are obtained, the corrected initial test question type and the corrected initial illustration type corresponding to the candidate test question and to the to-be-searched test question information can be compared. When the corrected initial test question types match, t is added to the similarity between the candidate test question and the to-be-searched test question information; when they do not match, the similarity is unchanged. When the corrected initial illustration types match, t is added to the similarity; when they do not match, the similarity is unchanged. The similarity between each candidate test question and the to-be-searched test question information is updated on this basis.
Fig. 5 shows a flowchart of a test question similarity determination method according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the to-be-searched test question information is input into the feature vector search library, and the feature vector search library may acquire a plurality of candidate test questions from the test question library based on the illustration feature information in the to-be-searched test question information. For example, 100 test questions whose illustration feature information similarity is greater than 90% may be determined as candidate test questions, and these candidate test questions may be ranked in descending order of illustration feature information similarity. The initial test question types of the 100 candidate test questions are then compared one by one: when the initial test question type of a candidate test question is consistent with the initial test question type in the to-be-searched test question information, 10% is added to the similarity; otherwise the similarity is unchanged. The initial illustration types of the 100 candidate test questions are compared: when the initial illustration type of a candidate test question is consistent with the initial illustration type in the to-be-searched test question information, 10% is added to the similarity; otherwise the similarity is unchanged. The corrected initial test question types of the 100 candidate test questions are compared: when the corrected initial test question type of a candidate test question is consistent with that in the to-be-searched test question information, 10% is added to the similarity; otherwise the similarity is unchanged. Finally, the corrected initial illustration types of the 100 candidate test questions are compared: when the corrected initial illustration type of a candidate test question is consistent with that in the to-be-searched test question information, 10% is added to the similarity; otherwise the similarity is unchanged.
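A compact sketch of the similarity update when identity information matches, using the 10% bonus from the fig. 5 walkthrough; the dictionary field names and the default bonus are assumptions, and t can be set differently in practice.

def update_similarity(base_sim: float,
                      candidate_identity: dict,
                      query_identity: dict,
                      bonus: float = 0.10) -> float:
    # Add the bonus t once for every piece of identity information that matches:
    # initial question type, initial illustration type, and (when present)
    # their corrected counterparts. Non-matching fields leave the score unchanged.
    sim = base_sim
    for field in ("initial_question_type", "initial_illustration_type",
                  "corrected_question_type", "corrected_illustration_type"):
        if field in candidate_identity and \
           candidate_identity.get(field) == query_identity.get(field):
            sim += bonus
    return sim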
In practical application, the test question information, the initial test question type, the initial illustration type, the corrected initial test question type and the corrected initial illustration type can be compared one by one in this order, so that the similarity between each candidate test question and the to-be-searched test question information is updated and a comprehensive ranking is produced. For example, when there are 100 candidate test questions and the preset similarity threshold is 90%, the candidate test questions whose similarity is greater than 90% can be determined as recommended test questions and fed back to the user. Alternatively, based on the similarity ranking of the candidate test questions, the 15th candidate test question, whose illustration feature information matches that of the to-be-searched test question information, can be determined as a recommended test question, and the candidate test questions ranked before it in the comprehensive ranking are recommended to the user together with it. In this way, not only can the candidate test question that best matches the searched test question be recommended to the student user, but test questions similar to the searched test question can also be recommended, so that the user can screen test questions efficiently while the accuracy of the search result is guaranteed, and test question information related to the to-be-searched test question is obtained, helping the student user broaden problem-solving ideas and study better.
According to one or more technical solutions provided by the exemplary embodiments of the present disclosure, an illustration image can be obtained from an illustration-carrying test question, and test question feature information including at least illustration feature information is obtained based on the initial test question type, the initial illustration type and the illustration image. The illustration feature information therefore not only contains the relevant features of the illustration image but is also fused with the initial test question type and the initial illustration type, so that the illustration image is analyzed more accurately and interference from other factors in the illustration-carrying test question is avoided; the test question information is then obtained based on the illustration feature information. In this way, the embodiments of the present disclosure can strip the illustration image from the illustration-carrying test question, analyze the illustration-carrying test question image as a whole and the illustration image alone, and, through their combined analysis, avoid interference from text information other than the illustration in the illustration-carrying test question image, which improves the searching capability for illustration-carrying test questions and the accuracy of test question information extraction. On this basis, when the test question information of a to-be-searched illustration-carrying test question is determined by the method of the exemplary embodiments of the present disclosure and recommended test questions are acquired from the test question library based on the to-be-searched test question information, test question screening can be performed efficiently and the accuracy of test question searching is improved, so that test questions are recommended to the user accurately.
The foregoing description of the solution provided by the embodiments of the present disclosure has been mainly presented from the perspective of a server. It will be appreciated that the server, in order to implement the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiments of the present disclosure may divide functional units of a server according to the above method examples, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present disclosure, the division of the modules is merely a logic function division, and other division manners may be implemented in actual practice.
In the case where each functional module is divided corresponding to each function, exemplary embodiments of the present disclosure provide an extraction apparatus of test question information, which may be a server or a chip applied to the server. Fig. 6 shows a functional block diagram of an extraction apparatus of test question information according to an exemplary embodiment of the present disclosure. As shown in fig. 6, the test question information extraction apparatus 600 includes:
the determining module 601 is configured to determine an initial test question type, an initial illustration type and an illustration image of a test question with an illustration;
an obtaining module 602, configured to obtain test question feature information based on the initial test question type, the initial illustration type, and the illustration image, where the test question feature information at least includes illustration feature information;
the obtaining module 602 is further configured to obtain test question information based on the illustration characteristic information.
In a possible implementation manner, when determining the initial test question type, the initial illustration type and the illustration image of the illustration-carrying test question, the determining module 601 is further configured to: determine the initial test question type based on the illustration-carrying test question image; determine illustration position information based on the illustration-carrying test question image; acquire the illustration image from the illustration-carrying test question based on the illustration position information; and determine the initial illustration type based on the illustration image.
In a possible implementation manner, when determining the initial test question type based on the illustration-carrying test question image, the determining module 601 is further configured to: identify the test question text information of the illustration-carrying test question image based on a text recognition model; determine word vectors of a plurality of sub-texts contained in the test question text information based on the test question text information; and process the word vectors of the plurality of sub-texts based on a text classification model to obtain the initial test question type.
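As a rough illustration of that pipeline, the sketch below stands in for the text recognition output, the word-vector lookup and the text classification model with a simple averaged-embedding linear classifier; the whitespace tokenization, the lookup table and the weight matrix are placeholders, not the disclosed models.

import numpy as np

def classify_question_type(question_text: str,
                           word_vectors: dict[str, np.ndarray],
                           class_weights: np.ndarray,
                           class_names: list[str]) -> str:
    # Split the recognized test question text into sub-texts (here: tokens),
    # look up a word vector for each, and feed the averaged vector to a
    # linear classifier standing in for the text classification model.
    tokens = question_text.split()
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return class_names[0]
    doc_vec = np.mean(vecs, axis=0)
    scores = class_weights @ doc_vec          # (num_classes, dim) x (dim,)
    return class_names[int(np.argmax(scores))]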
In one possible implementation, the initial illustration type is determined by an illustration type determining module based on the illustration image, and the illustration type determining module includes a first feature extraction module, a mobile flip bottleneck convolution module and a second feature extraction module. When determining the initial illustration type based on the illustration image, the obtaining module 602 is further configured to: extract shallow image features of the illustration image based on the first feature extraction module; process the shallow image features based on the mobile flip bottleneck convolution module to obtain depth image features; and process the depth image features based on the second feature extraction module to obtain the initial illustration type.
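The sketch below shows one plausible shape of such an illustration type determining module in PyTorch: a convolutional stem as the first feature extraction module, a small stack of mobile flip (inverted) bottleneck convolution blocks, and a pooled classification head as the second feature extraction module. The channel widths, block count and number of illustration classes are placeholders, not the disclosed network.

import torch
import torch.nn as nn

class MBConv(nn.Module):
    # One mobile flip (inverted) bottleneck convolution block:
    # 1x1 expansion -> depthwise 3x3 -> 1x1 projection, with a residual path.
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)

class IllustrationTypeNet(nn.Module):
    # Stem (first feature extraction) -> MBConv stack -> pooled head
    # (second feature extraction) outputting initial illustration type logits.
    def __init__(self, num_types: int = 10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(32), nn.SiLU())
        self.body = nn.Sequential(MBConv(32), MBConv(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, num_types))

    def forward(self, x):           # x: (batch, 3, H, W) illustration images
        return self.head(self.body(self.stem(x)))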
In a possible implementation manner, when obtaining the test question feature information based on the initial test question type, the initial illustration type and the illustration image, the obtaining module 602 is further configured to: splice the initial test question type, the initial illustration type and the illustration image to obtain first illustration splicing information; and input the first illustration splicing information into an illustration feature extraction model to obtain the corresponding illustration feature information.
In one possible implementation manner, the first illustration splicing information is described in matrix form, where the width of the matrix is W and the height of the matrix is H; W represents the width of the illustration image, and H is determined by the height of the illustration image, the number of test question classes and the number of illustration classes.
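One plausible way to realize such a W x H splicing matrix is sketched below: the grayscale illustration image is extended with one-hot "bands" of width W that encode the initial test question type and the initial illustration type, so the height grows by the number of test question classes plus the number of illustration classes. This band encoding is an assumption made for illustration, not the disclosed splicing rule.

import numpy as np

def build_first_splicing(illustration: np.ndarray,
                         question_type_id: int, num_question_types: int,
                         illustration_type_id: int, num_illustration_types: int) -> np.ndarray:
    # illustration: (h, W) grayscale image. Extra rows of width W encode the
    # initial question type and initial illustration type as one-hot bands, so
    # the spliced matrix has width W and height
    # H = h + num_question_types + num_illustration_types.
    h, w = illustration.shape
    type_rows = np.zeros((num_question_types + num_illustration_types, w),
                         dtype=illustration.dtype)
    type_rows[question_type_id, :] = 1
    type_rows[num_question_types + illustration_type_id, :] = 1
    return np.vstack([illustration, type_rows])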
In one possible implementation, the test question information includes the illustration feature information and illustration size information.
In a possible implementation manner, the test question feature information includes a corrected initial test question type and a corrected initial illustration type, and the obtaining module 602 is further configured to splice the illustration feature information with illustration size information to obtain corresponding second illustration splicing information; and determining test question information based on each second illustration splicing information, the corrected initial test question type and the corrected initial illustration type.
In one possible implementation, the second illustration splicing information contains a plurality of identical pieces of illustration size information, and the illustration size information includes an illustration aspect ratio parameter and/or a parameter of the ratio of the illustration image area within the illustration-carrying test question.
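A minimal sketch of that second splicing step, assuming the illustration feature vector is simply concatenated with several identical copies of the illustration size information; the repeat count and parameter names are illustrative assumptions.

import numpy as np

def build_second_splicing(illustration_features: np.ndarray,
                          aspect_ratio: float,
                          area_ratio: float,
                          repeat: int = 4) -> np.ndarray:
    # Concatenate the illustration feature vector with repeated copies of the
    # illustration size information (aspect ratio and the ratio of the
    # illustration area to the whole question image).
    size_info = np.tile(np.array([aspect_ratio, area_ratio],
                                 dtype=illustration_features.dtype), repeat)
    return np.concatenate([illustration_features, size_info])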
In the case where each functional module is divided corresponding to each function, exemplary embodiments of the present disclosure provide a test question recommending apparatus, which may be a server or a chip applied to the server. Fig. 7 illustrates a functional block diagram of a test question recommending apparatus according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the test question recommending apparatus 700 includes:
a determining module 701, configured to determine test question information of a to-be-searched insert test question based on a method according to an exemplary embodiment of the present disclosure;
the obtaining module 702 is configured to obtain recommended test questions from a test question library based on the test question information to be searched.
In a possible implementation manner, the acquiring module 702 is further configured to: acquire a plurality of candidate test questions from the test question library based on the to-be-searched test question information; determine the similarity between each candidate test question and the to-be-searched test question information; if the similarity between a candidate test question and the to-be-searched test question information meets the screening condition, determine that the candidate test question is a recommended test question; and if the identity information of a candidate test question matches the identity information of the to-be-searched test question information, update the similarity between each candidate test question and the to-be-searched test question information.
In one possible implementation, the screening condition includes: the similarity between the candidate test question and the to-be-searched test question information is greater than or equal to a preset threshold; and/or the candidate test questions are ranked in descending order of similarity, the to-be-searched test question corresponds to the kth candidate test question, and k is greater than 0 and less than or equal to the total number of candidate test questions.
In a possible implementation manner, the test question library includes test question information of a plurality of candidate test questions determined by the method according to the exemplary embodiment of the disclosure; the identity information of the candidate test question at least comprises an initial test question type and an initial illustration type of the candidate test question, and the identity information of the test question information to be searched at least comprises the initial test question type and the initial illustration type of the test question information to be searched.
In one possible implementation manner, there are a plurality of recommended test questions, and the test question information of the to-be-searched illustration-carrying test question is determined from the second illustration splicing information of the exemplary embodiments of the present disclosure; the identity information of the candidate test question further includes the corrected initial test question type and the corrected initial illustration type of the candidate test question, and the identity information of the to-be-searched test question further includes the corrected initial test question type and the corrected initial illustration type of the to-be-searched test question.
Fig. 8 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure. As shown in fig. 8, the chip 800 includes one or more (including two) processors 801 and a communication interface 802. The communication interface 802 may support a server to perform the data transceiving steps of the method described above, and the processor 801 may support a server to perform the data processing steps of the method described above.
Optionally, as shown in fig. 8, the chip 800 further includes a memory 803, and the memory 803 may include a read only memory and a random access memory, and provide operation instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some implementations, as shown in fig. 8, the processor 801 performs the corresponding operations by invoking operating instructions stored in the memory (these instructions may be stored in an operating system). The processor 801 controls the processing operations of any of the terminal devices, and may also be referred to as a central processing unit (central processing unit, CPU). The memory 803 may include a read only memory and a random access memory, and provides instructions and data to the processor 801. A portion of the memory 803 may also include NVRAM. The processor 801, the communication interface 802 and the memory 803 are coupled together by a bus system, which may include a power bus, a control bus, a status signal bus and the like in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 804 in fig. 8.
The method disclosed by the embodiments of the present disclosure may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general purpose processor, a digital signal processor (digital signal processing, DSP), an ASIC, a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
With reference to fig. 9, a block diagram of an electronic device that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906, an output unit 907, a storage unit 908, and a communication unit 909. The input unit 906 may be any type of device capable of inputting information to the electronic device 900, and the input unit 906 may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. The output unit 907 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 908 may include, but is not limited to, magnetic disks, optical disks. The communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above. For example, in some embodiments, the methods of the exemplary embodiments of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909. In some embodiments, the computing unit 901 may be configured to perform the methods of the exemplary embodiments of the present disclosure by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described by the embodiments of the present disclosure are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a terminal, a user equipment, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program or instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center by wired or wireless means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that integrates one or more available media. The usable medium may be a magnetic medium, e.g., floppy disk, hard disk, tape; optical media, such as digital video discs (digital video disc, DVD); but also semiconductor media such as solid state disks (solid state drive, SSD).
Although the present disclosure has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations thereof can be made without departing from the spirit and scope of the disclosure. Accordingly, the specification and drawings are merely exemplary illustrations of the present disclosure as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents within the scope of the disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (18)

1. The method for extracting the test question information is characterized by comprising the following steps:
determining an initial test question type, an initial illustration type and an illustration image of the test question with the illustration;
acquiring test question feature information based on the initial test question type, the initial illustration type and the illustration image, wherein the test question feature information at least comprises illustration feature information;
and acquiring test question information based on the illustration characteristic information.
2. The method of claim 1, wherein determining the initial test question type, the initial illustration type, and the illustration image of the illustration-carrying test question comprises:
determining the initial test question type based on the test question image with the illustration;
determining illustration position information based on the illustration test question image;
acquiring the illustration images from the illustration-carrying test questions based on the illustration position information;
the initial illustration type is determined based on the illustration image.
3. The method of claim 2, wherein the determining the initial test question type based on the illustration-carrying test question image comprises:
identifying test question text information of the test question image with the illustration based on a text identification model;
determining word vectors of a plurality of sub-texts contained in the test question text information based on the test question text information;
and processing word vectors of a plurality of the sub-texts based on the text classification model to obtain the initial test question type.
4. The method of claim 2, wherein the initial illustration type is determined by an illustration type determining module based on the illustration image, the illustration type determining module comprising a first feature extraction module, a mobile flip bottleneck convolution module, and a second feature extraction module, and the determining the initial illustration type based on the illustration image comprises:
extracting shallow image features of the illustration image based on the first feature extraction module;
extracting the shallow image features based on the mobile flip bottleneck convolution module to obtain depth image features;
and extracting the depth image features based on the second feature extraction module to obtain the initial illustration type.
5. The method of claim 1, wherein the obtaining test question feature information based on the initial test question type, the initial illustration type, and the illustration image comprises:
splicing the initial test question type, the initial illustration type and the illustration image to obtain first illustration splicing information;
inputting the illustration splicing information into an illustration characteristic extraction model to obtain the corresponding illustration characteristic information.
6. The method of claim 5, wherein the first illustration splicing information is described in a matrix having a width W and a height H, wherein W represents the width of the illustration image, and H is determined by the height of the illustration image, the test question class number and the illustration class number.
7. The method according to any one of claims 1 to 6, wherein the test question information includes the illustration characteristic information and illustration size information.
8. The method of claim 7, wherein the test question feature information includes a corrected initial test question type and a corrected initial illustration type, the method further comprising:
splicing the illustration characteristic information and the illustration size information to obtain corresponding second illustration splicing information;
and determining test question information based on each second illustration splicing information, the corrected initial test question type and the corrected initial illustration type.
9. The method of claim 8, wherein the second illustration splicing information contains a plurality of identical pieces of illustration size information, the illustration size information including an illustration aspect ratio parameter and/or a parameter of the ratio of the illustration image area within the illustration-carrying test question.
10. The test question recommending method is characterized by comprising the following steps of:
determining the test question information of the to-be-searched illustration-carrying test question based on the method of any one of claims 1 to 9;
and acquiring recommended test questions from a test question library based on the test question information to be searched.
11. The method of claim 10, wherein the obtaining recommended questions from a question bank based on the question information to be searched comprises:
acquiring a plurality of candidate questions from a question library;
determining the similarity of each candidate test question and the test question information to be searched;
if the similarity between the candidate test questions and the test question information to be searched meets the screening condition, determining that the candidate test questions are recommended test questions;
and if the identity information of the candidate test questions is matched with the identity information of the test question information to be searched, updating the similarity between each candidate test question and the test question information to be searched.
12. The method of claim 11, wherein the screening conditions comprise: the similarity between the candidate test questions and the test question information to be searched is larger than or equal to a preset threshold value; and/or
sequencing the candidate test questions according to the sequence of the similarity from large to small, wherein the test question to be searched is the kth candidate test question, and k is more than 0 and less than or equal to the total number of the candidate test questions.
13. The method according to claim 11, wherein the test question library includes test question information of a plurality of the candidate test questions determined by the method according to any one of claims 1 to 8;
the identity information of the candidate test question at least comprises an initial test question type and an initial illustration type of the candidate test question, and the identity information of the test question information to be searched at least comprises the initial test question type and the initial illustration type of the test question information to be searched.
14. The method according to claim 13, wherein the test question information of the to-be-searched illustration-carrying test question is determined from the second illustration splicing information in claim 8 or 9;
the identity information of the candidate test questions further comprises a correction initial test question type and a correction initial illustration type of the candidate test questions, and the identity information of the test questions to be searched further comprises a correction initial test question type and a correction initial illustration type of the test questions to be searched.
15. An extraction device of test question information, which is characterized in that the device comprises:
the determining module is used for determining an initial test question type, an initial illustration type and an illustration image of the test question with the illustration;
the acquisition module is used for acquiring test question feature information based on the initial test question type, the initial illustration type and the illustration image, wherein the test question feature information at least comprises illustration feature information;
the obtaining module is also used for obtaining test question information based on the illustration characteristic information.
16. A test question recommending apparatus, characterized in that the apparatus comprises:
a determining module for determining test question information of the to-be-searched test questions based on the method of any one of claims 1 to 9;
and the acquisition module is used for acquiring recommended test questions from the test question library based on the test question information to be searched.
17. An electronic device, comprising:
a processor; and a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 14.
18. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-14.
CN202310674194.2A 2023-06-07 2023-06-07 Test question information extraction method and device, electronic equipment and storage medium Pending CN116758554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310674194.2A CN116758554A (en) 2023-06-07 2023-06-07 Test question information extraction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310674194.2A CN116758554A (en) 2023-06-07 2023-06-07 Test question information extraction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116758554A true CN116758554A (en) 2023-09-15

Family

ID=87947041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310674194.2A Pending CN116758554A (en) 2023-06-07 2023-06-07 Test question information extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116758554A (en)

Similar Documents

Publication Publication Date Title
CN108304835B (en) character detection method and device
CN108229478B (en) Image semantic segmentation and training method and device, electronic device, storage medium, and program
US11812184B2 (en) Systems and methods for presenting image classification results
CN114202672A (en) Small target detection method based on attention mechanism
CN109993102B (en) Similar face retrieval method, device and storage medium
JP7286013B2 (en) Video content recognition method, apparatus, program and computer device
US11983903B2 (en) Processing images using self-attention based neural networks
US20240177462A1 (en) Few-shot object detection method
US20210042572A1 (en) Systems and methods for generating graphical user interfaces
CN111582409A (en) Training method of image label classification network, image label classification method and device
CN111680678B (en) Target area identification method, device, equipment and readable storage medium
CN115457531A (en) Method and device for recognizing text
CN113704531A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112215171B (en) Target detection method, device, equipment and computer readable storage medium
CN110569814A (en) Video category identification method and device, computer equipment and computer storage medium
WO2023040506A1 (en) Model-based data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
US11030726B1 (en) Image cropping with lossless resolution for generating enhanced image databases
CN113869138A (en) Multi-scale target detection method and device and computer readable storage medium
US20230410465A1 (en) Real time salient object detection in images and videos
WO2024027347A9 (en) Content recognition method and apparatus, device, storage medium, and computer program product
US20220292132A1 (en) METHOD AND DEVICE FOR RETRIEVING IMAGE (As Amended)
CN115082840B (en) Action video classification method and device based on data combination and channel correlation
CN117688984A (en) Neural network structure searching method, device and storage medium
CN116758554A (en) Test question information extraction method and device, electronic equipment and storage medium
CN114638973A (en) Target image detection method and image detection model training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination