CN112949616A - Question processing method and device, electronic equipment and computer storage medium

Info

Publication number
CN112949616A
CN112949616A
Authority
CN
China
Prior art keywords
connection
target
connecting line
standard
relation
Prior art date
Legal status
Pending
Application number
CN202110520085.6A
Other languages
Chinese (zh)
Inventor
宁亚光
Current Assignee
Beijing Century TAL Education Technology Co Ltd
Original Assignee
Beijing Century TAL Education Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Century TAL Education Technology Co Ltd filed Critical Beijing Century TAL Education Technology Co Ltd
Priority to CN202110520085.6A
Publication of CN112949616A
Legal status: Pending

Classifications

    • G06V 30/412 Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 30/153 Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the application provides a question processing method and device, an electronic device and a computer storage medium. The method comprises the following steps: performing connection-endpoint detection on a target image containing a connection question to obtain position information of each connection endpoint in the target image and a hidden vector corresponding to each connection endpoint; obtaining a to-be-processed connection relation between target connection objects according to the similarity between the hidden vectors corresponding to the connection endpoints and the correspondence, derived from the position information of the connection endpoints, between the connection endpoints and the target connection objects in the target image; and performing question processing according to the to-be-processed connection relation between the target connection objects and the standard connection relation between the standard connection objects in a pre-obtained standard image matched with the target image, to obtain a processing result. The embodiments of the application improve the efficiency of processing connection questions and reduce labor cost.

Description

Question processing method and device, electronic equipment and computer storage medium
Technical Field
The embodiments of the application relate to the technical field of image recognition, and in particular to a question processing method and device, an electronic device and a computer storage medium.
Background
The continuous development of image processing technology has made automatic processing of questions in the field of online education practical. For example, for questions answered by students, automatic correction can be achieved by performing a series of image processing operations on the images containing those questions.
At present, automatic processing schemes for question types such as fill-in-the-blank and multiple-choice questions are relatively mature. For connection (line-matching) questions, however, processing still mainly relies on manual work. How to realize automatic processing of connection questions is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The application aims to provide a question correction method and device, an electronic device and a computer storage medium, which are used to realize automatic correction of connection questions.
According to a first aspect of the embodiments of the present application, there is provided a question processing method, including:
performing connection-endpoint detection on a target image containing a connection question to obtain position information of each connection endpoint in the target image and a hidden vector corresponding to each connection endpoint, where the similarity between any two hidden vectors is used to indicate whether a connection relation exists between the connection endpoints corresponding to the two hidden vectors;
obtaining a to-be-processed connection relation between target connection objects according to the similarity between the hidden vectors corresponding to the connection endpoints and the correspondence, derived from the position information of the connection endpoints, between the connection endpoints and the target connection objects in the target image; and
performing question processing according to the to-be-processed connection relation between the target connection objects and a standard connection relation between standard connection objects in a pre-obtained standard image matched with the target image, to obtain a processing result.
According to a second aspect of the embodiments of the present application, there is provided a question processing apparatus, including:
a connection-endpoint detection module, configured to perform connection-endpoint detection on a target image containing a connection question to obtain position information of each connection endpoint in the target image and a hidden vector corresponding to each connection endpoint, where the similarity between any two hidden vectors is used to indicate whether a connection relation exists between the connection endpoints corresponding to the two hidden vectors;
a to-be-processed connection relation obtaining module, configured to obtain a to-be-processed connection relation between target connection objects according to the similarity between the hidden vectors corresponding to the connection endpoints and the correspondence, derived from the position information of the connection endpoints, between the connection endpoints and the target connection objects in the target image; and
a processing result obtaining module, configured to perform question processing according to the to-be-processed connection relation between the target connection objects and a standard connection relation between standard connection objects in a pre-obtained standard image matched with the target image, to obtain a processing result.
According to a third aspect of the embodiments of the present application, there is provided an electronic device, including: one or more processors; and a computer-readable medium configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the question processing method according to the first aspect.
According to a fourth aspect of the embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the question processing method according to the first aspect.
According to the question processing method and device, the electronic device and the computer storage medium provided by the embodiments of the application, connection-endpoint detection is performed on a target image containing a connection question to obtain position information of each connection endpoint in the target image and a hidden vector corresponding to each connection endpoint, where the similarity between any two hidden vectors is used to indicate whether a connection relation exists between the connection endpoints corresponding to the two hidden vectors; a to-be-processed connection relation between target connection objects is obtained according to the similarity between the hidden vectors corresponding to the connection endpoints and the correspondence, derived from the position information of the connection endpoints, between the connection endpoints and the target connection objects in the target image; and question processing is performed according to the to-be-processed connection relation between the target connection objects and a standard connection relation between standard connection objects in a pre-obtained standard image matched with the target image, to obtain a processing result.
In the embodiment of the application, the connection relation to be processed between the target connection objects in the target image can be obtained based on the similarity between the hidden vectors corresponding to the connection end points in the target image and the corresponding relation between the connection end points and the target connection objects; based on the standard connection relation among the standard connection objects in the standard images obtained in advance, the correct connection relation among the target connection objects can be obtained; and then automatically processing the connection relation to be processed through the correct connection relation to obtain a processing result. The process does not need manual participation, automatic processing of the connection questions is achieved, and compared with a processing method which needs manual participation, the method improves the processing efficiency of the connection questions and reduces labor cost.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flowchart of the steps of a question processing method according to Embodiment I of the present application;
FIG. 2 is a schematic diagram of a target image containing a connection question;
FIG. 3 is a schematic diagram of a standard image matched with the target image shown in FIG. 2;
FIG. 4 is a flowchart of the steps of a question processing method according to Embodiment II of the present application;
FIG. 5 is a schematic diagram of the processing flow of the detection model on a target image;
FIG. 6 is a schematic diagram of a question processing flow according to Embodiment II of the present application;
FIG. 7 is a schematic diagram of the position information of each target connection object in the target image shown in FIG. 2;
FIG. 8 is a schematic diagram of the sequence numbers of the target connection objects and the standard connection objects;
FIG. 9 is a schematic structural diagram of a question processing apparatus according to Embodiment III of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to Embodiment IV of the present application;
FIG. 11 is a schematic diagram of the hardware structure of an electronic device according to Embodiment V of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Embodiment I
Referring to FIG. 1, FIG. 1 is a flowchart illustrating the steps of a question processing method according to Embodiment I of the present application.
The question processing method of this embodiment of the application comprises the following steps:
Step 102, performing connection-endpoint detection on a target image containing a connection question to obtain position information of each connection endpoint in the target image and a hidden vector corresponding to each connection endpoint.
The similarity between any two hidden vectors is used to indicate whether a connection relation exists between the connection endpoints corresponding to the two hidden vectors.
A connection endpoint in this embodiment of the application refers to an endpoint of a to-be-processed connection line present in the target image. For ease of understanding, an example is given: referring to FIG. 2, FIG. 2 shows a target image containing a connection question, in which the connection endpoints are the endpoints (6 in total) of the manually drawn to-be-processed connection lines (3 in total).
In this embodiment of the application, any suitable method may be used to perform connection-endpoint detection on the target image to obtain the position information of each connection endpoint and the hidden vector corresponding to each connection endpoint. For example, the connection endpoints in the target image may be treated as detection targets and detected with a general object detection algorithm, or the detection may be performed by a neural network model.
For any two hidden vectors obtained by detection, if the similarity between them is large, for example greater than a preset threshold, a connection relation exists between the connection endpoints corresponding to the two hidden vectors, that is, the two connection endpoints belong to the same connection line; if the similarity between the two hidden vectors is small, for example smaller than the preset threshold, no connection relation exists between the connection endpoints corresponding to the two hidden vectors, that is, the two connection endpoints do not belong to the same connection line.
In this embodiment of the application, the specific way of calculating the similarity between hidden vectors is not limited. For example, the similarity between two hidden vectors may be measured by the Euclidean distance between them, or by the Manhattan distance between them, as sketched below.
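For illustration only, the following is a minimal sketch of such a similarity measure; the function name, the choice of negating a distance to obtain a similarity, and the default metric are assumptions rather than details given in this application:

```python
import numpy as np

def similarity(u, v, metric="euclidean"):
    """Similarity between two hidden vectors; a higher value means the two
    connection endpoints are more likely to belong to the same connection line."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    if metric == "euclidean":
        return -np.linalg.norm(u - v)      # negated Euclidean distance
    if metric == "manhattan":
        return -np.abs(u - v).sum()        # negated Manhattan distance
    raise ValueError(f"unknown metric: {metric}")
```

A pair of endpoints would then be considered connected when this value exceeds a preset similarity threshold (equivalently, when the underlying distance falls below a distance threshold).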
Step 104, obtaining a to-be-processed connection relation between the target connection objects according to the similarity between the hidden vectors corresponding to the connection endpoints and the correspondence, derived from the position information of the connection endpoints, between the connection endpoints and the target connection objects in the target image.
A target connection object is an object to be connected in the target image. As shown in FIG. 2, the target image contains 6 target connection objects in total, each cloud-shaped region corresponding to one target connection object.
Because the similarity between any two hidden vectors indicates whether a connection relation exists between the corresponding connection endpoints, after the hidden vectors corresponding to the connection endpoints are obtained in step 102, the connection relation between the connection endpoints can be obtained from the similarities between the hidden vectors, and the to-be-processed connection relation between the target connection objects can then be obtained from the correspondence between the connection endpoints and the target connection objects.
Step 106, performing question processing according to the to-be-processed connection relation between the target connection objects and the standard connection relation between the standard connection objects in a pre-obtained standard image matched with the target image, to obtain a processing result.
The standard image contains a standard connection question, and the standard connection question and the connection question contained in the target image are the same question. Referring to FIG. 3, FIG. 3 is a schematic diagram of a standard image matched with the target image shown in FIG. 2; it can be seen that the question in FIG. 2 and the question in FIG. 3 are the same question.
Specifically, the standard image may be an image determined in advance to match the target image; alternatively, after the target image is acquired, an image matched with the target image may be searched for, according to the text content contained in the target image, in a preset question bank containing a plurality of images.
The standard connection objects are the objects to be connected in the standard image, and the standard connection relation between the standard connection objects represents the correct connection relation between them. Referring to FIG. 3, the standard image contains 6 standard connection objects, each cloud-shaped region corresponding to one standard connection object.
The question processing in this embodiment of the application may be question judging, namely judging whether the connection relations between the target connection objects in the question are correct; it may also be question correction, namely judging whether the connection relations between the target connection objects in the question are correct and correcting or annotating the target connection objects whose connection relations are wrong. The specific content of the question processing is not limited.
When question processing is performed, the correspondence between the standard connection objects and the target connection objects can first be obtained; the standard connection relation between the standard connection objects is then converted, according to this correspondence, into the correct connection relation between the target connection objects; and the to-be-processed connection relation between the target connection objects is compared with the correct connection relation to obtain the processing result, as sketched below.
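As an illustration only, the following sketch shows one way such a comparison could be carried out once the to-be-processed connection relation, the standard connection relation and the object correspondence are available; all names and the exact form of the result are assumptions, not details specified in this application:

```python
def grade_connections(to_process_pairs, standard_pairs, target_to_standard):
    """to_process_pairs: set of (i, j) index pairs of target connection objects drawn by the student.
    standard_pairs: set of (m, n) index pairs of standard connection objects (the answer key).
    target_to_standard: dict mapping each target connection object index to its standard object index.
    Returns which drawn lines are correct, which are wrong, and which answer-key lines are missing."""
    normalize = lambda pairs: {tuple(sorted(p)) for p in pairs}
    drawn = normalize((target_to_standard[i], target_to_standard[j]) for i, j in to_process_pairs)
    key = normalize(standard_pairs)
    return {"correct": drawn & key, "wrong": drawn - key, "missing": key - drawn}
```

For example, with target_to_standard = {0: 0, 1: 1, 2: 2, 3: 3}, to_process_pairs = {(0, 3), (1, 2)} and standard_pairs = {(0, 2), (1, 3)}, every drawn line is reported as wrong and both answer-key lines as missing.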
In the embodiment of the application, the connection relation to be processed between the target connection objects in the target image can be obtained based on the similarity between the hidden vectors corresponding to the connection end points in the target image and the corresponding relation between the connection end points and the target connection objects; based on the standard connection relation among the standard connection objects in the standard images obtained in advance, the correct connection relation among the target connection objects can be obtained; and then automatically processing the connection relation to be processed through the correct connection relation to obtain a processing result. The process does not need manual participation, automatic processing of the connection questions is achieved, and compared with a processing method which needs manual participation, the method improves the processing efficiency of the connection questions and reduces labor cost.
The question processing method of the embodiments of the present application can be performed by any suitable electronic device with data processing capability, including but not limited to: servers, PCs, and even high-performance mobile terminals.
Embodiment II
Referring to FIG. 4, FIG. 4 is a flowchart illustrating the steps of a question processing method according to Embodiment II of the present application.
The question processing method of this embodiment of the application comprises the following steps:
step 402, inputting a target image containing a connection problem into a detection model which is trained in advance to obtain a connection end thermodynamic diagram and an implicit vector characteristic diagram.
The endpoint thermodynamic diagram of the connection is a score map of the center point, wherein the endpoint of the connection is set as the center point.
The pixel value of each pixel point in the connection endpoint thermodynamic diagram is used for representing the possibility that the pixel point is the connection endpoint, and specifically, the value of each pixel point in the connection endpoint thermodynamic diagram can be between 0 and 1, and is used for representing the probability that the pixel point is the connection endpoint (central point).
Each pixel point in the connecting-line endpoint thermodynamic diagram corresponds to one hidden vector in the hidden vector feature diagram.
Referring to fig. 5, fig. 5 is a processing flow of a detection model on a target image, specifically, the target image may be input to the detection model, for example: neural network models, etc.; and simultaneously obtaining a wiring end point thermodynamic diagram with the size of W x H x 1 and an implicit vector feature diagram with the size of W x H x D through the detection model, wherein W is the width of the wiring end point thermodynamic diagram or the implicit vector feature diagram, H is the height of the wiring end point thermodynamic diagram or the implicit vector feature diagram, D represents the dimension of the obtained implicit vector, and W, H and D are determined by the structure parameters of the detection model. As can be seen from fig. 5, each pixel in the endpoint thermodynamic diagram corresponds to a hidden vector with dimension D in the hidden vector feature diagram.
Optionally, in some of these embodiments, the detection model includes at least: a feature extraction part, and a first branch network and a second branch network connected in parallel after the feature extraction part.
Inputting the target image containing the connection question into the pre-trained detection model to obtain the connection-endpoint heatmap and the hidden-vector feature map includes:
inputting the target image containing the connection question into the pre-trained detection model, and performing feature extraction on the target image through the feature extraction part of the detection model to obtain image features corresponding to the target image;
obtaining the connection-endpoint heatmap based on the image features through the first branch network of the detection model; and obtaining the hidden-vector feature map based on the image features through the second branch network of the detection model.
In this embodiment of the application, the feature extraction part of the detection model may be any backbone network structure capable of extracting features, and the first branch network and the second branch network may both be convolutional networks; the specific structures and parameters of the feature extraction part, the first branch network and the second branch network of the detection model are not limited.
The detection process of the model is explained below taking CenterNet as the detection model:
the feature extraction part of the CenterNet model may adopt a ResNet-50 network structure, and the first branch network and the second branch network may both be convolutional networks. The detection process of the CenterNet model is as follows:
the target image is input into ResNet-50 for feature extraction to obtain image features; the image features are then fed into the 2 branch networks (2 convolutional networks) for connection-endpoint heatmap prediction and hidden-vector feature map prediction respectively, yielding the connection-endpoint heatmap and the hidden-vector feature map.
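A minimal PyTorch sketch of such a two-branch detector is given below; the class name, head layout and embedding dimension are illustrative assumptions (for instance, no upsampling stage is included, so the heatmap here has 1/32 of the input resolution rather than the finer resolution a real CenterNet-style model would use):

```python
import torch
import torch.nn as nn
import torchvision

class EndpointDetector(nn.Module):
    """Feature extraction part plus two parallel branch networks, as described above."""
    def __init__(self, embed_dim: int = 4):
        super().__init__()
        resnet = torchvision.models.resnet50()                    # feature extraction part (ResNet-50)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

        def branch(out_channels: int) -> nn.Sequential:           # small convolutional branch network
            return nn.Sequential(
                nn.Conv2d(2048, 256, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, out_channels, kernel_size=1),
            )

        self.heatmap_branch = branch(1)            # predicts the W x H x 1 connection-endpoint heatmap
        self.embedding_branch = branch(embed_dim)  # predicts the W x H x D hidden-vector feature map

    def forward(self, image: torch.Tensor):
        features = self.backbone(image)                          # shared image features
        heatmap = torch.sigmoid(self.heatmap_branch(features))   # per-pixel endpoint probability in [0, 1]
        embeddings = self.embedding_branch(features)             # per-pixel hidden vector of dimension D
        return heatmap, embeddings

# Example: a 512 x 512 input yields a 16 x 16 heatmap and a 16 x 16 map of 4-dimensional hidden vectors here.
model = EndpointDetector()
heatmap, embeddings = model(torch.randn(1, 3, 512, 512))
```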
Optionally, in some of the embodiments, the training process of the detection model includes:
acquiring a sample image containing a connection question;
inputting the sample image into an initial detection model, and performing feature extraction on the sample image through the feature extraction part of the initial detection model to obtain sample image features corresponding to the sample image;
obtaining a predicted connection-endpoint heatmap based on the sample image features through the first branch network of the initial detection model, and obtaining a predicted hidden-vector feature map based on the sample image features through the second branch network of the initial detection model;
obtaining a first loss value based on the predicted connection-endpoint heatmap and a preset focal loss function, and obtaining a second loss value based on the predicted hidden-vector feature map and a preset triplet loss function; and
training the initial detection model according to the first loss value and the second loss value to obtain the detection model.
The input of the triplet loss function is a triple <a, p, n>. In this embodiment of the application, a, p and n denote the hidden vectors corresponding to three connection endpoints, where the connection endpoints corresponding to a and p have a connection relation and the connection endpoints corresponding to a and n have no connection relation; that is, a and p are hidden vectors of the same type, while a and n are hidden vectors of different types.
The triplet loss can be expressed by the following formula:
L = max(d(a, p) − d(a, n) + margin, 0)
where d(a, p) represents the dot product between hidden vectors a and p, and d(a, n) represents the dot product between hidden vectors a and n.
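For reference, a short PyTorch sketch of a margin-based triplet loss is given below. One assumption should be noted: the formula above implies that d is a distance measure (smaller for more similar vectors), so the sketch uses the Euclidean distance even though the text describes d as a dot product; the function name and the default margin are likewise illustrative:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """anchor/positive come from endpoints of the same connection line,
    anchor/negative from endpoints of different lines; each has shape (N, D)."""
    d_ap = F.pairwise_distance(anchor, positive)   # d(a, p)
    d_an = F.pairwise_distance(anchor, negative)   # d(a, n)
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```

PyTorch also ships an equivalent built-in, torch.nn.TripletMarginLoss, which could be used instead.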
Step 404, obtaining the position information of each connection endpoint in the target image and the hidden vector corresponding to each connection endpoint, based on the connection-endpoint heatmap and the hidden-vector feature map.
Specifically, among the pixels of the connection-endpoint heatmap, the pixels whose value is greater than a preset threshold are determined to be connection endpoints, and the hidden vector corresponding to each connection endpoint can then be read from the hidden-vector feature map, as sketched below.
For any two of the obtained hidden vectors, the similarity between them indicates whether a connection relation exists between the corresponding connection endpoints. Specifically, if the similarity between the two hidden vectors is large, for example greater than a preset similarity threshold, a connection relation exists between the connection endpoints corresponding to the two hidden vectors, that is, the two connection endpoints belong to the same connection line; if the similarity is small, for example smaller than the preset similarity threshold, no connection relation exists between the connection endpoints corresponding to the two hidden vectors, that is, the two connection endpoints do not belong to the same connection line.
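The following is a minimal sketch of this extraction step, assuming the heatmap and hidden-vector feature map have already been converted to NumPy arrays; the function name and threshold value are illustrative, and a real implementation would typically also keep only local maxima so that one endpoint does not yield several neighbouring pixels:

```python
import numpy as np

def extract_endpoints(heatmap: np.ndarray, embed_map: np.ndarray, score_thresh: float = 0.5):
    """heatmap: (H, W) array of endpoint probabilities in [0, 1].
    embed_map: (H, W, D) array of per-pixel hidden vectors.
    Returns a list of ((row, col), hidden_vector) for every pixel above the threshold."""
    rows, cols = np.where(heatmap > score_thresh)
    return [((int(r), int(c)), embed_map[r, c]) for r, c in zip(rows, cols)]
```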
Step 406, obtaining the connection relation between the connection endpoints according to the similarity between the hidden vectors corresponding to the connection endpoints.
The similarity between the hidden vectors corresponding to the two connection end points with the connection relation is the highest.
Specifically, for any connection endpoint l_pi, the similarity between its corresponding hidden vector hvec_i and each of the other hidden vectors can be calculated, and the connection endpoint corresponding to the hidden vector with the highest similarity is determined to be the connection endpoint that has a connection relation with l_pi, where N ≥ i > 0 and N is the total number of connection endpoints.
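Illustratively, this pairing by highest hidden-vector similarity could look as follows, using the same negated-Euclidean-distance similarity sketched in Embodiment I; storing each pair as a sorted tuple is an assumption made so that every connection line is counted once:

```python
import numpy as np

def pair_endpoints(hidden_vectors):
    """hidden_vectors: list of N per-endpoint hidden vectors (e.g. NumPy arrays).
    For each endpoint, the endpoint whose hidden vector is most similar is taken
    as the other end of the same connection line."""
    pairs = set()
    for i, hv in enumerate(hidden_vectors):
        best_j, best_sim = None, -np.inf
        for j, other in enumerate(hidden_vectors):
            if j == i:
                continue
            sim = -np.linalg.norm(np.asarray(hv) - np.asarray(other))  # negated Euclidean distance
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            pairs.add(tuple(sorted((i, best_j))))   # each connection line stored once
    return pairs
```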
Step 408, performing target detection on the target image to obtain position information of each target connection object in the target image, and obtaining a corresponding relationship between the connection end point and the target connection object based on the position information of each target connection object and the position information of each connection end point.
A target connection object is an object to be connected in the target image. Referring to FIG. 2, the target image contains 6 target connection objects in total, each cloud-shaped region corresponding to one target connection object, and the position information of a target connection object may be information representing the position of its cloud-shaped region in the target image.
In the embodiment of the present application, target detection may be performed on a target image in an appropriate manner to obtain position information of each target connection object in the target image, for example: the target detection can be carried out on the target image by adopting a general target detection algorithm, and the target detection of the target image can also be realized by a neural network model. In the embodiment of the present application, the specific manner adopted in the target detection is not limited.
After the target image is detected, the position information of the region where the target connection object is located (for example, a rectangular region including the target connection object) can be obtained.
Optionally, in some embodiments, obtaining the correspondence between the connection endpoints and the target connection objects based on the position information of the target connection objects and the position information of the connection endpoints includes (see the sketch after the example below):
for each target connection object, judging, based on the position information of the target connection object and the position information of the connection endpoints, whether there is a connection endpoint located inside the region where the target connection object is located;
if there is, determining the connection endpoint located inside the region where the target connection object is located as the connection endpoint corresponding to the target connection object; and
if there is not, determining, among all the connection endpoints, the connection endpoint with the smallest distance to the target connection object as the connection endpoint corresponding to the target connection object.
Specifically, after the target image is detected, the position information of the rectangular area including the target connection object may be obtained, and therefore, the distance between the connection end point and the target connection object may be the distance between the connection end point and the center point of the rectangular area, the distance between the connection end point and a straight line located on a certain side of the rectangular area, or the like, which is not limited in the embodiment of the present application.
For example, for the target link object located in the first row and the first column in fig. 2, since there is a link endpoint located inside the region where the target link object is located, the link endpoint located inside the region where the target link object is located is determined as the link endpoint corresponding to the target link object. For another example, for the target link object in the second row and the first column in fig. 2, since there is no link endpoint located inside the region where the target link object is located, the link endpoint closest to the target link object is determined as the link endpoint corresponding to the target link object.
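A minimal sketch of this assignment is given below. It works in the endpoint-to-object direction and measures the fallback distance to the centre of each detected rectangle; both choices, and all names, are assumptions (the text above notes the distance could equally be taken to an edge of the rectangle, and the assignment could equally be driven from the object side):

```python
import numpy as np

def assign_endpoint_to_object(endpoint_xy, object_boxes):
    """endpoint_xy: (x, y) position of one connection endpoint.
    object_boxes: list of (x1, y1, x2, y2) rectangles, one per target connection object.
    Returns the index of the object whose box contains the endpoint, or, if no
    box contains it, the object whose box centre is closest."""
    x, y = endpoint_xy
    for idx, (x1, y1, x2, y2) in enumerate(object_boxes):
        if x1 <= x <= x2 and y1 <= y <= y2:
            return idx
    centres = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in object_boxes]
    distances = [np.hypot(cx - x, cy - y) for cx, cy in centres]
    return int(np.argmin(distances))
```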
Step 410, obtaining the connection relation to be processed between the target connection objects based on the connection relation between the connection endpoints and the corresponding relation between the connection endpoints and the target connection objects.
Step 412, obtaining the position information of each standard connection object in the standard image matched with the target image and the standard connection relation between each standard connection object.
The standard image comprises a standard connection question, and the standard connection question and the connection question contained in the target image are the same connection question.
Specifically, the standard image may be an image determined in advance to match the target image; alternatively, after the target image is acquired, an image matched with the target image may be searched for, according to the text content contained in the target image, in a preset question bank containing a plurality of images.
The standard connecting objects are objects to be connected in the standard image, and the standard connecting relation among the standard connecting objects is used for representing the correct connecting relation among the standard connecting objects.
Optionally, in some embodiments, obtaining the position information of each standard connection object in the standard image matched with the target image and the standard connection relation between the standard connection objects includes:
performing text recognition on the target image to obtain a text recognition result;
searching a preset question bank, based on the text recognition result, for an image matched with the target image to serve as the standard image; and
acquiring the pre-annotated position information of each standard connection object in the standard image and the pre-annotated standard connection relation between the standard connection objects.
Specifically, a general text recognition method may be used to perform text recognition on the target image to obtain the text recognition result; for example, a pre-trained text recognition neural network model may be used. The specific recognition method used for text recognition is not limited in this embodiment of the application.
The preset question bank may include a plurality of standard images, the pre-annotated position information of each standard connection object in each standard image, and the standard connection relation between the standard connection objects in each standard image. In addition, text recognition may be performed in advance on each standard image contained in the question bank to obtain the text information corresponding to each standard image, and this text information may be stored in the question bank.
Text recognition is performed on the target image, the obtained text recognition result is compared with the text information corresponding to each standard image contained in the preset question bank, the standard image matched with the target image is found in the question bank, and the position information of each standard connection object in that standard image and the standard connection relation between the standard connection objects are then obtained, as sketched below.
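Purely as an illustration, the lookup could be sketched as follows; the question-bank record layout and the token-overlap score used for text matching are assumptions, not a matching criterion specified in this application:

```python
def find_standard_image(target_text: str, question_bank: list):
    """question_bank: list of records such as
    {"text": "...", "object_boxes": [...], "standard_pairs": {...}}  (illustrative schema).
    Returns the record whose pre-computed OCR text best matches target_text,
    or None when nothing matches well enough (processing then ends, see FIG. 6)."""
    def overlap(a: str, b: str) -> float:
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / max(1, len(ta | tb))     # Jaccard overlap of tokens
    best = max(question_bank, key=lambda rec: overlap(target_text, rec["text"]), default=None)
    if best is None or overlap(target_text, best["text"]) < 0.5:   # assumed acceptance threshold
        return None
    return best
```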
Compared with manually determining in advance a standard image matched with the target image, searching the preset question bank for the standard image matched with the target image according to the text content contained in the target image allows the standard image to be acquired automatically, without manual participation, so the labor cost of question processing can be reduced. Meanwhile, the question bank can contain a large number of different standard images, so the embodiments of the application can be used to process many different connection questions and have a wider range of application.
Step 414, obtaining the corresponding relationship between the target connection object and the standard connection object based on the position information of each standard connection object and the position information of each target connection object.
Since there are multiple standard connection objects in the standard image and, similarly, multiple target connection objects in the target image, it is necessary, after obtaining the region information of each standard connection object in the standard image and the region information of each target connection object in the target image, to determine the correspondence between the target connection objects and the standard connection objects, so that question processing of the target image based on the standard connection relation between the standard connection objects can be performed correctly and the processing result obtained.
Optionally, in some embodiments, obtaining the correspondence between the target link object and the standard link object based on the position information of each standard link object and the position information of each target link object includes:
numbering each standard connecting line object in a preset sequencing mode based on the position information of each standard connecting line object to obtain the serial number of each standard connecting line object;
numbering each target connecting line object by adopting the same preset sequencing mode as that of each standard connecting line object based on the position information of each target connecting line object to obtain the serial number of each target connecting line object;
and obtaining the corresponding relation between the target connection object and the standard connection object according to the serial number of each standard connection object and the serial number of each target connection object.
In this embodiment of the application, the specific ordering method is not limited and may be set according to the actual situation. For example, the standard connection objects may be numbered sequentially from top to bottom, where, for standard connection objects on the same horizontal line, the object on the left is given a smaller number than the object on the right; the target connection objects are then numbered sequentially in the same order. Alternatively, the standard connection objects may be numbered sequentially from left to right, where, for standard connection objects on the same vertical line, the object above is given a smaller number than the object below; the target connection objects are then numbered sequentially in the same order.
Taking fig. 2 and fig. 3 as an example, assuming that the target link objects in fig. 2 are numbered sequentially from top to bottom, where, for the target link objects located on the same horizontal line, the number of the target link object located on the left side is smaller than the number of the target link object located on the right side, then: the number of the target connecting line object positioned in the first row and the first column is 1, the number of the target connecting line object positioned in the first row and the second column is 2, the number of the target connecting line object positioned in the second row and the first column is 3, and so on, the number of the target connecting line object positioned in the third row and the second column is 6; correspondingly, with respect to fig. 3, still in the above sorting manner, it can be obtained that: in fig. 3, the standard link object in the first row and the first column is numbered 1, the standard link object in the first row and the second column is numbered 2, the standard link object in the second row and the first column is numbered 3, and so on, the standard link object in the third row and the second column is numbered 6.
Furthermore, the corresponding relationship between the target connection object and the standard connection object can be determined according to the numbers, that is, the target connection object and the standard connection object with the same numbers can be determined as connection objects with the corresponding relationship: the target connection object with the number of 1 corresponds to the standard connection object with the number of 1; the target connection object with the number of 2 corresponds to the standard connection object with the number of 2; the target connection object with the number of 3 corresponds to the standard connection object with the number of 3; and by analogy, the target connection object with the number of 6 corresponds to the standard connection object with the number of 6.
Compared with the mode of performing text matching based on the text information contained in the target link object and the text information contained in the standard link object and further determining the corresponding relationship between the target link object and each standard link object, the mode only needs to number the standard link objects according to the position information of each standard link object and can determine the corresponding relationship according to the numbers without performing a complicated text matching process, so that the processing process is simpler and the efficiency is higher.
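A compact sketch of this position-based numbering is shown below; the row tolerance used to decide that two boxes lie on the same horizontal line, and all names, are illustrative assumptions:

```python
def number_objects(object_boxes, row_tol: float = 10.0) -> dict:
    """object_boxes: list of (x1, y1, x2, y2) rectangles for the connection objects.
    Numbers the objects from top to bottom and, within a row, from left to right.
    Returns {box_index: number}, with numbers starting from 1."""
    centres = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in object_boxes]
    order = sorted(range(len(object_boxes)),
                   key=lambda i: (round(centres[i][1] / row_tol), centres[i][0]))
    return {idx: number for number, idx in enumerate(order, start=1)}

# Applying the same numbering to the standard image and the target image,
# objects that receive the same number are taken to correspond to each other.
```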
Step 416, according to the to-be-processed connection relationship among the target connection objects, the corresponding relationship between the target connection objects and the standard connection objects, and the standard connection relationship, performing question processing to obtain a processing result.
The question processing in this embodiment of the application may be question judging, namely judging whether the connection relations between the target connection objects in the question are correct; it may also be question correction, namely judging whether the connection relations between the target connection objects in the question are correct and correcting or annotating the target connection objects whose connection relations are wrong. The specific content of the question processing is not limited.
In the embodiment shown in fig. 4, the connection relation to be processed between the target connection objects in the target image can be obtained based on the similarity between the hidden vectors corresponding to the connection endpoints in the target image and the corresponding relation between the connection endpoints and the target connection objects; based on the standard connection relation among the standard connection objects in the standard images obtained in advance, the correct connection relation among the target connection objects can be obtained; and then automatically processing the connection relation to be processed through the correct connection relation to obtain a processing result. The process does not need manual participation, automatic processing of the connection questions is achieved, and compared with a processing method which needs manual participation, the method improves the processing efficiency of the connection questions and reduces labor cost.
Meanwhile, in this embodiment of the application, the target image is directly input into the pre-trained detection model to obtain the connection-endpoint heatmap and the hidden-vector feature map, and the position information of each connection endpoint in the target image and the hidden vector corresponding to each connection endpoint are then obtained based on the connection-endpoint heatmap and the hidden-vector feature map. In this process, once the model has been trained, the trained model can be used to obtain the connection-endpoint heatmap and the hidden-vector feature map directly.
The question processing method of the embodiments of the present application can be performed by any suitable electronic device with data processing capability, including but not limited to: servers, PCs, and even high-performance mobile terminals.
Referring to FIG. 6, FIG. 6 is a schematic diagram of the question processing flow provided in Embodiment II of the present application. The question processing flow provided in Embodiment II is briefly described below with reference to FIG. 6 and mainly includes the following steps:
First, a target image is acquired. The target image is an image containing a connection question; for example, the target image shown in FIG. 2 may be acquired.
Second, text recognition is performed on the target image to obtain a text recognition result.
Third, an image matched with the target image is searched for in the preset question bank, according to the text recognition result, to serve as the standard image. If a standard image matched with the target image is found, the fourth step is executed; if no standard image matching the target image is found, the question processing flow ends.
Fourth, target detection is performed on the target image using a general object detection network to obtain the position information of each target connection object in the target image. For example, the position information of the target connection objects can be expressed as a set B_p = {b_pi | i ∈ {1, 2, 3, …, n}}, where n is the number of target connection objects in the target image and b_pi is the position information of the i-th target connection object. When the detected target connection object region is a rectangle, the position information can be expressed by the coordinates of the four vertices of the rectangle. For example, as shown in FIG. 7, each rectangular box in FIG. 7 represents the position information, i.e. the detection box, of one target connection object obtained after target detection is performed on the target image shown in FIG. 2.
Fifth, annotation information for the standard image is acquired. The annotation information comprises two parts: the position information of each standard connection object in the standard image, and the standard connection relation between the standard connection objects. For example, the position information of the standard connection objects can be expressed as a set B_g = {b_gi | i ∈ {1, 2, 3, …, n}}, where n is the number of standard connection objects in the standard image (the same as the number of target connection objects) and b_gi is the position information of the i-th standard connection object, which can be expressed by the coordinates of the four vertices of its rectangle. The standard connection relation between the standard connection objects can be expressed as {(b_gi, b_gj) | i, j ∈ {1, 2, 3, …, n}}, where b_gi and b_gj denote two standard connection objects that have a standard connection relation.
Sixth, the standard connection objects and the target connection objects are numbered separately using the same ordering method, to obtain the sequence numbers of the standard connection objects and of the target connection objects. For example, the ordering method may be: numbering the standard connection objects sequentially from 1, from top to bottom, where, for standard connection objects on the same horizontal line, the object on the left is given a smaller number than the object on the right; the target connection objects are then numbered sequentially in the same order. Referring to FIG. 8, FIG. 8 shows the sequence numbers obtained after numbering the target connection objects in the target image shown in FIG. 2 and the standard connection objects in the standard image in this way; the left image in FIG. 8 is the target image and the right image is the standard image.
Seventh, connection-endpoint detection is performed on the target image to obtain the position information of the connection endpoints and the hidden vectors corresponding to the connection endpoints. Specifically, the target image may be input into the pre-trained detection model to perform connection-endpoint detection and hidden-vector regression, yielding the connection-endpoint heatmap and the hidden-vector feature map; the position information of the connection endpoints and the hidden vectors corresponding to the connection endpoints are then obtained based on the connection-endpoint heatmap and the hidden-vector feature map. For example, the position information of the connection endpoints can be expressed as a set L_p = {l_pi | i ∈ {1, 2, 3, …, n}}, where n is the number of connection endpoints (the same as the number of target connection objects) and l_pi is the position information of the i-th connection endpoint; correspondingly, the hidden vectors corresponding to the connection endpoints can be expressed as a set H_vec = {hvec_i | i ∈ {1, 2, 3, …, n}}, where hvec_i is the hidden vector corresponding to the i-th connection endpoint.
Eighth, the correspondence between the connection endpoints and the target connection objects is obtained based on the position information of the target connection objects and the position information of the connection endpoints. Specifically, if a connection endpoint falls inside the region where a target connection object is located, that connection endpoint is determined to be the connection endpoint corresponding to that target connection object; if a connection endpoint does not fall inside the region of any target connection object, the target connection object closest to that connection endpoint is determined to be the target connection object corresponding to that connection endpoint.
Ninth, the connection relation between the connection endpoints is obtained according to the similarity between the hidden vectors corresponding to the connection endpoints, and the to-be-processed connection relation between the target connection objects is obtained based on the connection relation between the connection endpoints and the correspondence between the connection endpoints and the target connection objects. Specifically, for any connection endpoint l_pi, the similarity between its corresponding hidden vector hvec_i and each of the other hidden vectors can be calculated, and the connection endpoint corresponding to the hidden vector with the highest similarity is determined to be the connection endpoint that has a connection relation with l_pi.
Tenth, according to the correspondence between the target connection objects and the standard connection objects, the to-be-processed connection relation between the target connection objects is compared and matched with the standard connection relation between the standard connection objects, to obtain the processing result.
The question processing flow shown in FIG. 6 can perform question processing automatically and thus obtain a processing result. The process does not require manual participation and realizes automatic processing of connection questions. Compared with processing methods that require manual participation, it improves the processing efficiency of connection questions and reduces labor cost.
Meanwhile, in this question processing flow, the target image is directly input into the pre-trained detection model to obtain the connection-endpoint heatmap and the hidden-vector feature map, and the position information of each connection endpoint in the target image and the hidden vector corresponding to each connection endpoint are then obtained based on the connection-endpoint heatmap and the hidden-vector feature map. In this process, once the model has been trained, the trained model can be used to obtain the connection-endpoint heatmap and the hidden-vector feature map directly.
In addition, in this question processing flow, after the target image is acquired, the standard image matched with the target image is found in the preset question bank containing a plurality of images according to the text content contained in the target image. Compared with manually determining in advance the standard image matched with the target image, this allows the standard image to be acquired automatically, without manual participation, so the labor cost of question processing can be reduced. Meanwhile, the question bank can contain a large number of different standard images, so the embodiments of the application can be used to process many different connection questions and have a wider range of application.
Embodiment III
Referring to fig. 9, fig. 9 is a schematic structural diagram of a topic processing apparatus in the third embodiment of the present application. The topic processing apparatus provided by this embodiment of the application includes:
a connection endpoint detection module 902, configured to perform connection endpoint detection on a target image including connection questions to obtain position information of each connection endpoint in the target image and hidden vectors corresponding to each connection endpoint; the similarity between any two hidden vectors is used for representing whether a connection relation exists between connection endpoints corresponding to the two hidden vectors;
a to-be-processed link relation obtaining module 904, configured to obtain a to-be-processed link relation between each target link object according to similarity between hidden vectors corresponding to each link endpoint and a corresponding relation between each link endpoint and each target link object in the target image, which is obtained based on position information of each link endpoint;
a processing result obtaining module 906, configured to perform topic processing according to the to-be-processed link relationship between the target link objects and the pre-obtained standard link relationship between the standard link objects in the standard image that matches the target image, so as to obtain a processing result.
Optionally, in some embodiments, the connection endpoint detecting module 902 is specifically configured to: inputting a target image containing a connection question into a detection model which is trained in advance to obtain a connection endpoint thermodynamic diagram and a hidden vector characteristic diagram; the pixel value of each pixel point in the connecting end thermodynamic diagram is used for representing the possibility that the pixel point is the connecting end point; each pixel point in the connecting end thermodynamic diagram corresponds to one hidden vector in the hidden vector characteristic diagram; and obtaining the position information of each connecting end point in the target image and the hidden vector corresponding to each connecting end point respectively based on the connecting end point thermodynamic diagram and the hidden vector characteristic diagram.
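Purely as an illustrative sketch of how the module could decode the two output maps (the score threshold and the 3x3 local-maximum check are assumptions, not taken from the disclosure), connection endpoint positions and their hidden vectors might be extracted like this:

```python
import numpy as np

def decode_endpoints(heatmap, embedding_map, score_threshold=0.5):
    """Extract endpoint positions and hidden vectors from the model outputs.

    heatmap       : (H, W) array, endpoint confidence per pixel.
    embedding_map : (H, W, D) array, one hidden vector per pixel.
    Returns (positions, vectors): a list of (y, x) positions and an (N, D) array.
    """
    ys, xs = np.where(heatmap >= score_threshold)      # candidate endpoint pixels
    positions, vectors = [], []
    for y, x in zip(ys, xs):
        # Keep (approximate) local maxima so duplicates near a peak are suppressed.
        patch = heatmap[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        if heatmap[y, x] >= patch.max():
            positions.append((int(y), int(x)))
            vectors.append(embedding_map[y, x])
    return positions, np.array(vectors)
```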
Optionally, in some of these embodiments, the detection model includes at least: a feature extraction section; a first branch network and a second branch network connected in parallel after the feature extraction section; the connection endpoint detection module 902 is specifically configured to, when executing the step of inputting the target image including the connection question to the detection model which is trained in advance to obtain the connection endpoint thermodynamic diagram and the hidden vector feature diagram:
inputting a target image containing a connection question into a detection model which is trained in advance, and performing feature extraction on the target image through a feature extraction part of the detection model to obtain image features corresponding to the target image; obtaining a connecting end thermodynamic diagram based on image characteristics through a first branch network of a detection model; and obtaining a hidden vector feature map based on the image features through a second branch network of the detection model.
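A rough PyTorch sketch of such a detection model is given below; the ResNet-18 backbone, channel widths, and embedding dimension are illustrative assumptions rather than a structure fixed by the disclosure:

```python
import torch
import torch.nn as nn
import torchvision

class EndpointDetector(nn.Module):
    """Feature extraction part followed by two parallel branch networks:
    one predicts the connection-endpoint heatmap, the other the
    hidden-vector (embedding) feature map."""

    def __init__(self, embed_dim=64):
        super().__init__()
        # Feature extraction part (backbone choice is an assumption).
        resnet = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B, 512, H/32, W/32)
        # First branch network: connection endpoint heatmap.
        self.heatmap_head = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1), nn.Sigmoid())
        # Second branch network: hidden vector feature map.
        self.embed_head = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, embed_dim, 1))

    def forward(self, image):
        feats = self.backbone(image)
        return self.heatmap_head(feats), self.embed_head(feats)
```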
Optionally, in some embodiments, the to-be-processed connection relation obtaining module 904 is specifically configured to:
obtaining a connection relation between the connection endpoints according to the similarity between the hidden vectors corresponding to the connection endpoints; the similarity between the hidden vectors corresponding to the two connecting line end points with the connecting line relation is highest; carrying out target detection on the target image to obtain position information of each target connecting line object in the target image; obtaining the corresponding relation between the connecting end points and the target connecting objects based on the position information of the target connecting objects and the position information of the connecting end points; and obtaining the link relation to be processed between the target link objects based on the link relation between the link endpoints and the corresponding relation between the link endpoints and the target link objects.
Optionally, in some embodiments, when the to-be-processed connection relation obtaining module 904 performs the step of obtaining the corresponding relation between the connection end point and the target connection object based on the position information of each target connection object and the position information of each connection end point, the to-be-processed connection relation obtaining module is specifically configured to:
for each target connecting line object, judging whether a connecting line end point positioned in the area where the target connecting line object is positioned exists or not based on the position information of the target connecting line object and the position information of each connecting line end point; if the target connection object exists, determining a connection end point positioned in the area where the target connection object exists as a connection end point corresponding to the target connection object; if the target connection object does not exist, determining the connection end point with the minimum distance with the target connection object in all the connection end points as the connection end point corresponding to the target connection object.
Optionally, in some embodiments, when executing the step of performing topic processing according to the to-be-processed link relationship between the target link objects and the pre-obtained standard link relationship between the standard link objects in the standard image matched with the target image to obtain the processing result, the processing result obtaining module 906 is specifically configured to:
acquiring position information of each standard connecting object in a standard image matched with the target image and a standard connecting relation between the standard connecting objects; obtaining the corresponding relation between the target connecting line object and the standard connecting line object based on the position information of each standard connecting line object and the position information of each target connecting line object; and performing question processing according to the relation of the to-be-processed connecting lines among the target connecting line objects, the corresponding relation between the target connecting line objects and the standard connecting line relation to obtain a processing result.
Optionally, in some embodiments, when the step of obtaining the corresponding relationship between the target connection object and the standard connection object based on the position information of each standard connection object and the position information of each target connection object is executed by the processing result obtaining module 906, the processing result obtaining module is specifically configured to:
numbering each standard connecting line object in a preset sequencing mode based on the position information of each standard connecting line object to obtain the serial number of each standard connecting line object; numbering each target connecting line object by adopting the same preset sequencing mode as that of each standard connecting line object based on the position information of each target connecting line object to obtain the serial number of each target connecting line object; and obtaining the corresponding relation between the target connection object and the standard connection object according to the serial number of each standard connection object and the serial number of each target connection object.
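The numbering-based correspondence and the subsequent comparison with the standard connection relations could, for example, be sketched as follows (the top-to-bottom, left-to-right sort key and the pair-set comparison are assumptions made for illustration):

```python
def number_objects(object_boxes):
    """Number objects by a fixed ordering (top-to-bottom, then left-to-right assumed)."""
    order = sorted(range(len(object_boxes)),
                   key=lambda i: (object_boxes[i][1], object_boxes[i][0]))
    return {obj_idx: number for number, obj_idx in enumerate(order)}

def grade_connections(target_boxes, target_pairs, standard_boxes, standard_pairs):
    """Compare pending connections with standard connections via object numbers.

    target_pairs / standard_pairs : iterables of (object_index, object_index) pairs.
    Returns, for each pending connection, its numbered form and whether it
    matches a standard connection.
    """
    t_num = number_objects(target_boxes)
    s_num = number_objects(standard_boxes)
    standard = {tuple(sorted((s_num[a], s_num[b]))) for a, b in standard_pairs}
    results = []
    for a, b in target_pairs:
        numbered = tuple(sorted((t_num[a], t_num[b])))
        results.append((numbered, numbered in standard))   # True = matches the standard answer
    return results
```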
Optionally, in some embodiments, when the step of obtaining the position information of each standard link object in the standard image matched with the target image and the standard link relationship between each standard link object is executed by the processing result obtaining module 906, the processing result obtaining module is specifically configured to:
performing text recognition on the target image to obtain a text recognition result; searching an image matched with the target image in a preset question library based on a text recognition result to be used as a standard image; and acquiring the position information of each standard connecting object in the pre-marked standard image and the standard connecting relation among the standard connecting objects.
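A minimal retrieval sketch is given below, assuming the question bank stores pre-extracted text for each standard image and that simple token overlap (Jaccard similarity) is an acceptable matching score; the actual matching criterion and the text recognition step itself are outside this sketch:

```python
def find_standard_image(target_text, question_bank):
    """Find the standard image whose stored text best matches the recognised text.

    target_text   : text recognised from the target image.
    question_bank : dict {image_id: stored_text}.
    Returns the best-matching image_id, or None for an empty bank.
    """
    target_tokens = set(target_text.split())
    best_id, best_score = None, -1.0
    for image_id, stored_text in question_bank.items():
        tokens = set(stored_text.split())
        union = target_tokens | tokens
        score = len(target_tokens & tokens) / len(union) if union else 0.0  # Jaccard overlap
        if score > best_score:
            best_id, best_score = image_id, score
    return best_id
```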
Optionally, in some of the embodiments, the topic processing apparatus further includes:
the model training module is used for acquiring a sample image containing a connection question; inputting the sample image into an initial detection model, and performing feature extraction on the sample image through a feature extraction part of the initial detection model to obtain sample image features corresponding to the sample image; obtaining a connection endpoint prediction thermodynamic diagram based on the sample image characteristics through a first branch network of an initial detection model; obtaining a hidden vector prediction characteristic diagram based on the sample image characteristics through a second branch network of the initial detection model; obtaining a first loss value based on the connection end point prediction thermodynamic diagram and a preset focus loss function; obtaining a second loss value based on the implicit vector prediction characteristic diagram and a preset triple loss function; and training the initial detection model according to the first loss value and the second loss value to obtain the detection model.
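The combination of the two losses during training might look like the following PyTorch-style sketch; the focal-loss form and parameters, the triplet margin, the loss weight, and the sample_triplets helper (which would pick anchor/positive/negative embeddings from endpoints known to be connected or not) are all illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def focal_loss(pred_heatmap, gt_heatmap, alpha=2.0, beta=4.0):
    """Penalty-reduced focal loss on the endpoint heatmap (CornerNet-style form assumed)."""
    pos = gt_heatmap.eq(1).float()
    neg = 1.0 - pos
    pred = pred_heatmap.clamp(1e-6, 1 - 1e-6)
    pos_loss = -((1 - pred) ** alpha) * torch.log(pred) * pos
    neg_loss = -((1 - gt_heatmap) ** beta) * (pred ** alpha) * torch.log(1 - pred) * neg
    num_pos = pos.sum().clamp(min=1.0)
    return (pos_loss.sum() + neg_loss.sum()) / num_pos

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss pulling connected endpoints' hidden vectors together."""
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

def training_step(model, optimizer, image, gt_heatmap, sample_triplets, w=0.1):
    """One training step combining both losses (weight w is an assumption)."""
    pred_heatmap, embed_map = model(image)
    loss1 = focal_loss(pred_heatmap, gt_heatmap)
    anchor, positive, negative = sample_triplets(embed_map)   # hypothetical helper
    loss2 = triplet_loss(anchor, positive, negative)
    loss = loss1 + w * loss2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```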
The topic processing device in the embodiment of the application is used for implementing the corresponding topic processing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again. In addition, the functional implementation of each module in the topic processing apparatus in the embodiment of the present application can refer to the description of the corresponding part in the foregoing method embodiment, and is not repeated here.
Embodiment IV
Fig. 10 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application; the electronic device may include:
one or more processors 1001;
a computer-readable medium 1002, which may be configured to store one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the topic processing method as in the first or second embodiment.
Embodiment V
Fig. 11 is a hardware structure of an electronic device according to a fifth embodiment of the present application; as shown in fig. 11, the hardware structure of the electronic device may include: a processor 1101, a communication interface 1102, a computer-readable medium 1103, and a communication bus 1104;
wherein the processor 1101, the communication interface 1102, and the computer readable medium 1103 communicate with each other via a communication bus 1104;
alternatively, the communication interface 1102 may be an interface of a communication module, such as an interface of a GSM module;
among other things, the processor 1101 may be specifically configured to: carrying out connection endpoint detection on a target image containing connection questions to obtain position information of each connection endpoint in the target image and hidden vectors respectively corresponding to each connection endpoint; the similarity between any two hidden vectors is used for representing whether a connection relation exists between connection endpoints corresponding to the two hidden vectors; obtaining a connection relation to be processed between each target connection object according to the similarity between the hidden vectors corresponding to each connection end point and the corresponding relation between each connection end point and each target connection object in the target image obtained based on the position information of each connection end point; and performing question processing according to the to-be-processed connecting line relation among the target connecting line objects and the standard connecting line relation among the standard connecting line objects in the standard image which is obtained in advance and matched with the target image to obtain a processing result.
The processor 1101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The computer-readable medium 1103 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
In particular, according to an embodiment of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code configured to perform the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a Central Processing Unit (CPU), performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions configured to implement the specified logical function(s). In the above embodiments, specific precedence relationships are provided, but these precedence relationships are only exemplary, and in particular implementations, the steps may be fewer, more, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises a connection end point detection module, a to-be-processed connection relation obtaining module and a processing result obtaining module. For example, the connection end point detection module may be further described as a module that performs connection end point detection on a target image including connection topics to obtain position information of each connection end point in the target image, and obtain a hidden vector corresponding to each connection end point.
As another aspect, the present application also provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the topic processing method as described in the first or second embodiment.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: carrying out connection endpoint detection on a target image containing connection questions to obtain position information of each connection endpoint in the target image and hidden vectors respectively corresponding to each connection endpoint; the similarity between any two hidden vectors is used for representing whether a connection relation exists between connection endpoints corresponding to the two hidden vectors; obtaining a connection relation to be processed between each target connection object according to the similarity between the hidden vectors corresponding to each connection end point and the corresponding relation between each connection end point and each target connection object in the target image obtained based on the position information of each connection end point; and performing question processing according to the to-be-processed connecting line relation among the target connecting line objects and the standard connecting line relation among the standard connecting line objects in the standard image which is obtained in advance and matched with the target image to obtain a processing result.
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but these expressions do not limit the respective components. The above description is only configured for the purpose of distinguishing elements from other elements. For example, the first user equipment and the second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "operably or communicatively coupled" or "connected" (operably or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the element is directly connected to the other element or the element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it is understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), no element (e.g., a third element) is interposed therebetween.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for processing a topic, the method comprising:
carrying out connection endpoint detection on a target image containing connection questions to obtain position information of each connection endpoint in the target image and hidden vectors respectively corresponding to each connection endpoint; the similarity between any two hidden vectors is used for representing whether a connection relation exists between connection endpoints corresponding to the two hidden vectors;
obtaining a connection relation to be processed between each target connection object according to the similarity between the hidden vectors corresponding to each connection end point and the corresponding relation between each connection end point and each target connection object in the target image obtained based on the position information of each connection end point;
and performing question processing according to the to-be-processed connecting line relation among the target connecting line objects and the standard connecting line relation among the standard connecting line objects in the standard image which is obtained in advance and matched with the target image to obtain a processing result.
2. The method according to claim 1, wherein the performing connection endpoint detection on the target image including the connection topic to obtain position information of each connection endpoint in the target image and hidden vectors corresponding to each connection endpoint respectively comprises:
inputting a target image containing a connection question into a detection model which is trained in advance to obtain a connection endpoint thermodynamic diagram and a hidden vector characteristic diagram; the pixel value of each pixel point in the connecting end thermodynamic diagram is used for representing the possibility that the pixel point is a connecting end; each pixel point in the connecting end thermodynamic diagram corresponds to one hidden vector in the hidden vector characteristic diagram;
and obtaining the position information of each connecting end point in the target image and the hidden vector corresponding to each connecting end point respectively based on the connecting end point thermodynamic diagram and the hidden vector characteristic diagram.
3. The method of claim 2, wherein the detection model comprises at least: a feature extraction section; a first branch network and a second branch network connected in parallel after the feature extraction section;
inputting a target image containing a connection question into a detection model which is trained in advance to obtain a connection end thermodynamic diagram and a hidden vector characteristic diagram, wherein the connection end thermodynamic diagram and the hidden vector characteristic diagram comprise the following steps:
inputting a target image containing a connection question to a detection model which is trained in advance, and performing feature extraction on the target image through a feature extraction part of the detection model to obtain image features corresponding to the target image;
obtaining a connection endpoint thermodynamic diagram based on the image characteristics through a first branch network of the detection model; and obtaining a hidden vector feature map based on the image features through a second branch network of the detection model.
4. The method of claim 1, wherein obtaining the connection relationship to be processed between the target connection objects according to the similarity between the hidden vectors corresponding to the connection endpoints and the corresponding relationship between the connection endpoints and the target connection objects in the target image, which is obtained based on the position information of the connection endpoints, comprises:
obtaining the connection relation between the connection endpoints according to the similarity between the hidden vectors corresponding to the connection endpoints; the similarity between the hidden vectors corresponding to the two connecting line end points with the connecting line relation is highest;
carrying out target detection on the target image to obtain position information of each target connecting line object in the target image; obtaining the corresponding relation between the connecting end points and the target connecting objects based on the position information of the target connecting objects and the position information of the connecting end points;
and obtaining the link relation to be processed between the target link objects based on the link relation between the link endpoints and the corresponding relation between the link endpoints and the target link objects.
5. The method according to claim 4, wherein obtaining the correspondence between the connection end point and the target connection object based on the position information of each target connection object and the position information of each connection end point comprises:
for each target connecting line object, judging whether a connecting line end point positioned in the area where the target connecting line object is positioned exists or not based on the position information of the target connecting line object and the position information of each connecting line end point;
if the target connection object exists, determining a connection end point positioned in the area where the target connection object exists as a connection end point corresponding to the target connection object;
if the connection end point does not exist, determining the connection end point with the minimum distance with the target connection object in all the connection end points as the connection end point corresponding to the target connection object.
6. The method according to claim 4, wherein performing topic processing according to the relation of the to-be-processed links between the target link objects and the pre-obtained standard link relation between the standard link objects in the standard image matched with the target image to obtain a processing result comprises:
acquiring position information of each standard connecting line object in a standard image matched with the target image and a standard connecting line relation among the standard connecting line objects;
obtaining the corresponding relation between the target connecting line object and the standard connecting line object based on the position information of each standard connecting line object and the position information of each target connecting line object;
and performing question processing according to the relation of the to-be-processed connecting lines among the target connecting line objects, the corresponding relation between the target connecting line objects and the standard connecting line relation to obtain a processing result.
7. The method according to claim 6, wherein obtaining the correspondence between the target link object and the standard link object based on the position information of each standard link object and the position information of each target link object comprises:
numbering the standard connecting line objects by adopting a preset sequencing mode based on the position information of the standard connecting line objects to obtain the serial numbers of the standard connecting line objects;
based on the position information of each target connecting line object, numbering each target connecting line object in the same preset sequencing mode as that of each standard connecting line object to obtain the serial number of each target connecting line object;
and obtaining the corresponding relation between the target connecting line object and the standard connecting line object according to the serial number of each standard connecting line object and the serial number of each target connecting line object.
8. The method according to claim 6, wherein the acquiring position information of each standard link object in a standard image matched with the target image and a standard link relation between the standard link objects comprises:
performing text recognition on the target image to obtain a text recognition result;
searching an image matched with the target image in a preset question library based on the text recognition result to be used as a standard image;
and acquiring the position information of each standard connecting line object in the standard image which is labeled in advance and the standard connecting line relation among the standard connecting line objects.
9. The method of claim 3, wherein the training process of the detection model comprises:
acquiring a sample image containing a connection question;
inputting the sample image into an initial detection model, and performing feature extraction on the sample image through a feature extraction part of the initial detection model to obtain a sample image feature corresponding to the sample image;
obtaining a connection endpoint prediction thermodynamic diagram based on the sample image features through a first branch network of the initial detection model; obtaining a hidden vector prediction feature map based on the sample image features through a second branch network of the initial detection model;
obtaining a first loss value based on the connection end point prediction thermodynamic diagram and a preset focus loss function; obtaining a second loss value based on the implicit vector prediction characteristic diagram and a preset triple loss function;
and training the initial detection model according to the first loss value and the second loss value to obtain the detection model.
10. A topic processing apparatus, comprising:
the connecting line end point detection module is used for carrying out connecting line end point detection on a target image containing connecting line questions to obtain position information of each connecting line end point in the target image and hidden vectors respectively corresponding to each connecting line end point; the similarity between any two hidden vectors is used for representing whether a connection relation exists between connection endpoints corresponding to the two hidden vectors;
a to-be-processed link relation obtaining module, configured to obtain a to-be-processed link relation between the target link objects according to similarity between hidden vectors corresponding to the link endpoints and a corresponding relation between the link endpoints and the target link objects in the target image, which is obtained based on position information of the link endpoints;
and the processing result obtaining module is used for performing question processing according to the to-be-processed connecting line relation among the target connecting line objects and the standard connecting line relation among the standard connecting line objects in the standard image which is obtained in advance and matched with the target image to obtain a processing result.
11. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction, which causes the processor to perform operations corresponding to the topic processing method according to any one of claims 1-9.
12. A computer storage medium, having stored thereon a computer program which, when executed by a processor, implements the topic processing method according to any one of claims 1-9.
CN202110520085.6A 2021-05-13 2021-05-13 Question processing method and device, electronic equipment and computer storage medium Pending CN112949616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110520085.6A CN112949616A (en) 2021-05-13 2021-05-13 Question processing method and device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN112949616A true CN112949616A (en) 2021-06-11

Family

ID=76233800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110520085.6A Pending CN112949616A (en) 2021-05-13 2021-05-13 Question processing method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112949616A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262738A1 (en) * 2014-09-16 2017-09-14 Iflytek Co., Ltd. Intelligent scoring method and system for text objective question
CN108399626A (en) * 2018-03-02 2018-08-14 苏州大学 A kind of detection method, device and the equipment of image cathetus section
CN109509222A (en) * 2018-10-26 2019-03-22 北京陌上花科技有限公司 The detection method and device of straight line type objects
CN112200167A (en) * 2020-12-07 2021-01-08 北京易真学思教育科技有限公司 Image recognition method, device, equipment and storage medium
CN112766247A (en) * 2021-04-09 2021-05-07 北京世纪好未来教育科技有限公司 Question processing method and device, electronic equipment and computer storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239909A (en) * 2021-07-12 2021-08-10 北京世纪好未来教育科技有限公司 Question processing method, device, equipment and medium
CN113627399A (en) * 2021-10-11 2021-11-09 北京世纪好未来教育科技有限公司 Topic processing method, device, equipment and storage medium
CN113627399B (en) * 2021-10-11 2022-02-08 北京世纪好未来教育科技有限公司 Topic processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210611