CN113778281A - Auxiliary information generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113778281A
Authority
CN
China
Prior art keywords
auxiliary information
vertex
input
composition
rule
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111086983.1A
Other languages
Chinese (zh)
Other versions
CN113778281B (en)
Inventor
吕琬军
李辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority claimed from application CN202111086983.1A
Publication of CN113778281A
Application granted
Publication of CN113778281B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/20: Drawing from basic elements, e.g. lines or circles
    • G06T 11/206: Drawing of charts or graphs
    • G06T 11/60: Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of this application provide an auxiliary information generation method and apparatus, an electronic device, and a storage medium. The method includes: determining a target geometric figure; obtaining first auxiliary information input for a first vertex of the target geometric figure; analyzing the first auxiliary information to determine its composition rule; obtaining input content for a second vertex of the target geometric figure; and generating second auxiliary information for the second vertex according to the input content and the composition rule. In the embodiments of this application, after auxiliary information has been labeled on one vertex of the determined target geometric figure, labeling the other vertices requires the user to input only partial information for each; the remaining information is supplemented automatically, so the user need not input complete auxiliary information for every vertex, which improves the speed of labeling the vertices of the target geometric figure.

Description

Auxiliary information generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method and an apparatus for generating auxiliary information, an electronic device, and a storage medium.
Background
In some scenarios, after a user draws a figure on an electronic device, auxiliary information needs to be added to the figure's vertices to distinguish them. At present, however, the user can only label the vertices one by one, which is slow.
Disclosure of Invention
The present application aims to provide an auxiliary information generation method and apparatus, an electronic device, and a storage medium, by way of the following technical solutions:
according to a first aspect of the embodiments of the present disclosure, there is provided an auxiliary information generating method, the method including:
determining a target geometric figure;
obtaining first auxiliary information input for a first vertex of the target geometry;
analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
obtaining input content for a second vertex of the target geometry;
and generating second auxiliary information for the second vertex according to the input content and the composition rule.
With reference to the first aspect, in a first possible implementation manner, the determining a target geometry includes:
receiving a plurality of input line segments;
and determining the geometric figure formed by the line segments as a target geometric figure.
With reference to the first aspect, in a second possible implementation manner, the parsing the first auxiliary information includes:
acquiring an image of a display area of the first auxiliary information;
and processing the image to obtain a composition rule of the first auxiliary information.
With reference to the first aspect, in a third possible implementation manner, the processing the image to obtain a composition rule of the first auxiliary information includes:
performing character recognition on the image to obtain a recognition result, and determining the composition rule of the first auxiliary information according to the recognition result;
or,
and processing the image based on an analysis engine to obtain a composition rule of the first auxiliary information output by the analysis engine.
With reference to the first aspect, in a fourth possible implementation manner, the composition rule of the first auxiliary information includes:
the components constituting the first auxiliary information, and the relative positional relationship between those components.
With reference to the first aspect, in a fifth possible implementation manner, the generating second auxiliary information for the second vertex according to the input content and the composition rule includes:
determining content to be supplemented based on the composition rule;
and combining the content to be supplemented with the input content according to the composition rule to obtain second auxiliary information aiming at the second vertex.
With reference to the first aspect, in a sixth possible implementation manner, the input content corresponds to the first-input components of the first auxiliary information, and the content to be supplemented corresponds to the components of the first auxiliary information other than those first-input components.
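The method of the first aspect and its implementations can be sketched in a few lines of code. The sketch below is illustrative only: it assumes a simple composition rule of the kind discussed later in the description, namely a capital letter followed by a fixed suffix (e.g. the label A', whose suffix ' is reused when completing other vertices). The function names are hypothetical, not from the patent.

```python
import re

def infer_rule(first_label):
    """Parse the first vertex's auxiliary information to obtain its
    composition rule. Assumed label shape (illustrative): one capital
    letter followed by a fixed suffix, e.g. "A'" -> suffix "'"."""
    m = re.fullmatch(r"([A-Z])(.*)", first_label)
    if m is None:
        raise ValueError("unsupported label: %r" % (first_label,))
    return {"suffix": m.group(2)}

def complete_label(rule, input_content):
    """Determine the content to be supplemented from the rule and
    combine it with the user's partial input for the second vertex."""
    return input_content + rule["suffix"]

rule = infer_rule("A'")              # first auxiliary information
second = complete_label(rule, "B")   # user types only "B"
```

With this rule, typing B at the second vertex yields the full label B' without the user entering the suffix again.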
According to a second aspect of the embodiments of the present disclosure, there is provided an assistance information generating apparatus, the apparatus including:
a determination module for determining a target geometry;
the first acquisition module is used for acquiring first auxiliary information input aiming at a first vertex of the target geometric figure;
the analysis module is used for analyzing the first auxiliary information to determine the composition rule of the first auxiliary information;
a second obtaining module, configured to obtain input content for a second vertex of the target geometry;
a generating module, configured to generate second auxiliary information for the second vertex according to the input content and the composition rule.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory for storing a program;
a processor, configured to call and execute the program in the memory, and implement the steps of the auxiliary information generation method according to the first aspect by executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the assistance information generation method according to the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product that can be directly loaded into the internal memory of a computer (the memory included in the electronic device of the third aspect) and that contains software code; when the computer program is loaded into and executed by the computer, the steps of the auxiliary information generation method according to the first aspect are implemented.
As can be seen from the above description, with the auxiliary information generation method and apparatus, electronic device, and storage medium provided by the present application, a target geometric figure is determined; first auxiliary information input for a first vertex of the target geometric figure is obtained; the first auxiliary information is analyzed to determine its composition rule; input content for a second vertex of the target geometric figure is obtained; and second auxiliary information for the second vertex is generated according to the input content and the composition rule. In the present application, after auxiliary information has been labeled on one vertex of the determined target geometric figure, labeling the other vertices requires the user to input only partial information for each; the remaining information is supplemented automatically, so the user need not input complete auxiliary information for every vertex, which improves the speed of labeling the vertices of the target geometric figure.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic diagram illustrating an implementation manner of a hardware architecture according to an embodiment of the present application;
fig. 2 is a flowchart of an auxiliary information generating method according to an embodiment of the present application;
fig. 3a to fig. 3h are schematic diagrams of input contents for obtaining a second vertex according to an embodiment of the present application;
fig. 4 is a block diagram of an auxiliary information generating apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than described or illustrated herein.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
The embodiment of the application provides an auxiliary information generation method and device, electronic equipment and a storage medium. Before introducing the technical solutions provided by the embodiments of the present application, a hardware architecture related to the embodiments of the present application is described.
In an alternative implementation manner, a first hardware architecture related to the embodiment of the present application includes: an electronic device.
For example, the electronic device may be any electronic product that can interact with a user through one or more means such as a keyboard, a touchpad, a touch screen, a remote control, a voice interaction device, or a handwriting device, for example a mobile phone, a notebook computer, a tablet computer, a palmtop computer, a personal computer, a wearable device, a smart television, and the like.
Illustratively, a user may input first auxiliary information for a first vertex of the target geometry via the electronic device. The electronic device may analyze the first auxiliary information to obtain a composition rule of the first auxiliary information, and after the user inputs the input content of the second vertex of the target geometric figure through the electronic device, the electronic device may generate second auxiliary information for the second vertex according to the input content and the composition rule.
As shown in fig. 1, a schematic diagram of an implementation manner of a second hardware architecture according to an embodiment of the present application is shown, where the hardware architecture includes: an electronic device 11 and a server 12.
The electronic device 11 may be any electronic product capable of interacting with a user through one or more means such as a keyboard, a touchpad, a touch screen, a remote control, a voice interaction device, or a handwriting device, for example a mobile phone, a notebook computer, a tablet computer, a palmtop computer, a personal computer, a wearable device, a smart television, and the like.
The server 12 may be, for example, one server, a server cluster composed of a plurality of servers, or a cloud computing server center. The server 12 may include a processor, memory, and a network interface, among others.
Illustratively, the user may input first auxiliary information for a first vertex of the target geometry via the electronic device 11. The electronic device 11 transmits the first auxiliary information to the server 12, the server 12 analyzes the first auxiliary information to obtain a composition rule of the first auxiliary information, and transmits the composition rule to the electronic device 11, after the user inputs the input content of the second vertex of the target geometric figure through the electronic device 11, the electronic device 11 may generate the second auxiliary information for the second vertex according to the input content and the composition rule, or the electronic device 11 may transmit the input content to the server 12, and the server 12 may generate the second auxiliary information for the second vertex according to the input content and the composition rule, and transmit the second auxiliary information to the electronic device 11.
It will be understood by those skilled in the art that the foregoing electronic devices and servers are merely exemplary and that other existing or future electronic devices or servers may be suitable for use with the present disclosure and are intended to be included within the scope of the present disclosure and are hereby incorporated by reference.
The method for generating auxiliary information provided by the embodiment of the present application is described below with reference to a hardware architecture related to the embodiment of the present application.
Fig. 2 is a flowchart of an auxiliary information generation method provided by an embodiment of the present application. The method may be applied to the electronic device in the first hardware architecture or to the server 12 in the second hardware architecture, and involves the following steps S21 to S25.
Step S21: a target geometry is determined.
Illustratively, the target geometric figure may be of any shape, for example a planar figure or a solid figure.
Illustratively, the planar figure may be any one of: a line segment (curved or straight), a sector, an arch, a polygon.
Illustratively, the solid figure may be any one of: a polyhedron, a cylinder.
For example, there are various ways to determine the target geometry, and the embodiments of the present application provide, but are not limited to, the following four.
The first way to determine the target geometry: the display screen of the electronic device serves as the handwriting device. The electronic device receives the geometric figure input by the user through the display screen and determines that figure as the target geometric figure.
Illustratively, the user may draw geometric figures on the display screen.
The second way to determine the target geometry: the electronic device is connected to a handwriting device. The electronic device receives the geometric figure input by the user through the connected handwriting device and determines that figure as the target geometric figure.
Illustratively, the handwriting device may be a drawing tablet.
Illustratively, the user may draw geometric figures on the handwriting device.
In an alternative implementation, in the first or second way of determining the target geometry, the process of receiving the user-input target geometry includes: receiving a plurality of input line segments; and determining the geometric figure formed by the line segments as a target geometric figure.
The third way to determine the target geometry: capture an image containing a geometric figure with the camera of the electronic device, and determine the figure contained in the image as the target geometric figure.
The fourth way to determine the target geometry: receive a geometric figure sent by another terminal device, and determine the received figure as the target geometric figure.
Illustratively, the target geometry includes one or more geometric figures.
In an alternative implementation, if, for any two of the plurality of geometric figures, at least one line or at least one plane of one intersects the other, the plurality of geometric figures together are determined to be the target geometric figure. That is, the auxiliary information of the vertices of all these figures follows the same composition rule.
In an alternative implementation, if no line and no plane of any two of the plurality of geometric figures intersect, each geometric figure is determined to be a separate target geometric figure. That is, the composition rules of the auxiliary information of the vertices of the different figures may differ (each rule depends on the first auxiliary information input for that figure's first vertex).
In an alternative implementation, even if at least one line or at least one plane of two of the geometric figures intersect, each geometric figure may still be determined to be a separate target geometric figure. That is, the composition rules of the auxiliary information of the vertices of the figures may differ (each rule depends on the first auxiliary information input for that figure's first vertex).
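Whether two figures share a line, and hence whether they should be grouped under one composition rule, can be decided with a standard segment-intersection test. The sketch below treats each planar figure as a list of straight segments; this simplification and the function names are assumptions for illustration (the patent text also covers curves and intersecting planes).

```python
def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p):
    1 = left turn, -1 = right turn, 0 = collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if segment ab touches or crosses segment cd
    (shared endpoints count as intersection)."""
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    if o1 != o2 and o3 != o4:
        return True
    def on(p, q, r):  # r is collinear with pq and inside its bounding box
        return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
                and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))
    return ((o1 == 0 and on(a, b, c)) or (o2 == 0 and on(a, b, d))
            or (o3 == 0 and on(c, d, a)) or (o4 == 0 and on(c, d, b)))

def figures_share_line(fig1, fig2):
    """fig1, fig2: lists of segments ((x1, y1), (x2, y2)).
    True if any segment of one figure intersects any segment of the other."""
    return any(segments_intersect(p1, p2, q1, q2)
               for p1, p2 in fig1 for q1, q2 in fig2)
```

Figures for which `figures_share_line` returns True would be grouped into one target geometric figure under the first implementation above; otherwise each figure is treated separately.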
Step S22: first auxiliary information input for a first vertex of the target geometry is obtained.
Illustratively, the vertices of the target geometry include one or more first points, and/or one or more second points.
The first point includes, but is not limited to, at least one of: the intersection point of the two sides of a corner in the target geometric figure, the highest point of a curve in the target geometric figure, an endpoint of a line segment (straight or curved) in the target geometric figure, and the intersection point of two line segments in the target geometric figure.
The second point includes, but is not limited to: a point in the target geometric figure marked with a preset marker symbol.
Illustratively, the preset marker symbol may be any one, or a combination of at least two, of the following: an arrow, a solid dot, a hollow dot.
Illustratively, the first point is different from the second point.
If the user needs to label a point other than a first point in the target geometric figure, for example the center of a cube, the user may draw a preset marker symbol at the coordinates of the cube's center. The electronic device recognizes that the preset marker symbol is located at the center of the cube, and the center of the cube can then be used as a vertex.
Illustratively, the first vertex may comprise any one or more vertices in the target geometry.
The first vertex mentioned in step S22 is explained below, and two application scenarios are involved here.
First application scenario: the target geometry determined in step S21 already contains vertices labeled with auxiliary information.
The number of vertices that have been marked with auxiliary information may be one or more.
If the number of the vertexes marked with the auxiliary information is 1, the first vertex is the vertex marked with the auxiliary information; if the number of vertices already labeled with auxiliary information is greater than 1, the first vertex may be any one of the vertices already labeled with auxiliary information, or the first vertex includes vertices already labeled with auxiliary information.
Second application scenario: no auxiliary information is labeled on each vertex of the target geometry determined in step S21.
For example, when auxiliary information is input for a vertex of the target geometry, the vertex is a first vertex, and the auxiliary information of the vertex is the first auxiliary information.
Illustratively, if auxiliary information is input for each of multiple vertices of the target geometric figure, all of those vertices are first vertices, and the auxiliary information corresponding to each of them is first auxiliary information.
Illustratively, the auxiliary information of a vertex is a marking symbol for marking the vertex, and the auxiliary information of different vertices of the same target geometry is different.
In an alternative implementation, the composition rules of the auxiliary information corresponding to the vertices of the same target geometry are the same.
In an alternative implementation, the composition rule of the auxiliary information corresponding to each vertex of the same target geometry may be different.
For example, each vertex of the target geometry may be divided into at least two vertex sets, each vertex set includes a plurality of vertices, the auxiliary information of the vertices included in the same vertex set has the same composition rule, and the auxiliary information of the vertices included in different vertex sets has different composition rules.
For example, the first vertex corresponding to different vertex sets is different, and for each vertex set, any one or more vertices in the vertex set may be the first vertex. Illustratively, for each vertex set, first auxiliary information input for a first vertex of the vertex set needs to be obtained.
Step S23: and analyzing the first auxiliary information to determine a composition rule of the first auxiliary information.
It is understood that for simple auxiliary information, the composition rule may be derived from the first auxiliary information of a single first vertex; for complex auxiliary information, the composition rule may need to be derived from the first auxiliary information of each of multiple first vertices.
For example, if the composition rule the user desires is: a capital letter and the character ', with the character ' placed as a superscript of the capital letter (for example, the auxiliary information A'), then one first vertex may suffice.
For example, if the composition rule the user desires is: multiple capital letters ordered consecutively according to the 26 letters of the English alphabet (for example, the auxiliary information of two vertices being BCD and CDE), then multiple first vertices are needed.
In summary, the composition rules include, but are not limited to, at least one of the following: the composition components constituting the auxiliary information, the relative positional relationship between the composition components constituting the auxiliary information, and the sequential relationship between the composition components constituting the auxiliary information.
Exemplary, compositional components include, but are not limited to: capital letters, lowercase letters, symbols, upper case numbers, arabic numerals, and emoticons.
Illustratively, the category to which the composition belongs is different from the category to which the preset markup symbol belongs.
Illustratively, the relative positional relationship between the components constituting the auxiliary information includes, but is not limited to, at least one of: superscript, subscript, and normal (inline) position.
Illustratively, if the auxiliary information is A with a superscript B, the relative positional relationship between the component in the second position and the component in the first position is: component B in the second position is the superscript of component A in the first position. If the auxiliary information is C with a subscript D, the relative positional relationship is: component D in the second position is the subscript of component C in the first position. If the auxiliary information is EF, the relative positional relationship is: component E in the first position and component F in the second position are in normal (inline) position.
For example, if the components constituting the auxiliary information belong to multiple categories, the sequential relationship between the components includes the order of those categories, for example: a capital letter in the first position, a lowercase letter in the second position, and the character ' in the third position.
For example, if the components constituting the auxiliary information belong to multiple categories, the sequential relationship may further include: the order between the components at the same position in the auxiliary information of successively labeled vertices. For example, if the component in the first position of the previously labeled vertex's auxiliary information is A, and components at the same position follow the order of the 26 English letters, then the component in the first position of the next labeled vertex's auxiliary information is B.
For example, if the components constituting the auxiliary information include several components of the same category, the sequential relationship further includes the order among those components. For example, if components of the same category follow the order of the 26 English letters, and a vertex's auxiliary information includes three capital-letter components of which the first is B, then the components in the second and third positions are C and D respectively.
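The sequential relationships described above can be made concrete in code. The sketch below assumes labels made only of capital letters; it infers the per-position alphabetical step from the first auxiliary information of two labeled first vertices (e.g. BCD followed by CDE) and then generates the label for the next vertex. The function names and the wrap-around at Z are illustrative assumptions, not part of the patent.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def infer_step(label1, label2):
    """Infer the per-position shift between two successively labeled
    vertices, assuming every component is a capital letter."""
    steps = {(ALPHABET.index(b) - ALPHABET.index(a)) % 26
             for a, b in zip(label1, label2)}
    if len(steps) != 1:
        raise ValueError("no single consistent step between labels")
    return steps.pop()

def next_label(label, step):
    """Shift every letter of the label forward by `step`, wrapping at Z."""
    return "".join(ALPHABET[(ALPHABET.index(c) + step) % 26] for c in label)

step = infer_step("BCD", "CDE")  # two first vertices establish the rule
```

Given the two sample labels BCD and CDE, the inferred step is 1, so the vertex after CDE would be labeled DEF.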
The above description of the composition rule is only an example; the embodiments of the present application do not limit the composition rule. For example, the composition rule may further include the format of the components constituting the auxiliary information.
Exemplary formats of the components constituting the auxiliary information include, but are not limited to: at least one of underlining, shading, bolding, font size, glyph, font.
Step S24: input content for a second vertex of the target geometry is obtained.
Illustratively, the input content is partial information of the second auxiliary information of the second vertex (herein, the auxiliary information of the second vertex is referred to as second auxiliary information).
There are various implementations of step S24, and the embodiments of the present application provide, but are not limited to, the following two.
The first implementation of step S24 includes the following steps A11 to A13.
Step A11: the first number of components included in the auxiliary information of a vertex is determined based on the composition rule.
Step A12: if a touch instruction is detected in the display area where the auxiliary information of a vertex is to be input, that vertex is determined to be the second vertex, and the first number of input indicators are displayed.
Illustratively, the first number of input indicators may be displayed in that display area or, illustratively, at any other location.
Illustratively, the instruction is obtained when a touch on the display area corresponding to the vertex is detected; alternatively, the instruction is obtained when a received voice command contains input information for the vertex.
For example, the input indicator may be either a box or an underline.
Step A13: input content is received at the location corresponding to any one of the first number of input indicators.
To help those skilled in the art better understand the first implementation of step S24 provided in the embodiments of the present application, the following example is given.
Fig. 3a to 3c are schematic diagrams illustrating an implementation manner of obtaining input content of a second vertex according to an embodiment of the present application.
Fig. 3a illustrates an example in which the target geometry is a rectangle. It is assumed that the first number of components included in the auxiliary information of the vertices of the target geometry is 2, and the composition rule of the auxiliary information includes: the first position is a capital letter, the second position is the character ', and the character ' is positioned as a superscript of the capital letter at the first position.
Since the composition rule of the auxiliary information is relatively simple, one first vertex may suffice; for example, the first auxiliary information of the first vertex is A'. If the user clicks the display area of the second vertex of the rectangle, as shown in fig. 3a, 2 input indicators may be displayed at the display area corresponding to the second vertex, as shown in fig. 3b.
Figs. 3b and 3c take an underline _ as an example of the input indicator.
Illustratively, the user may input the corresponding component at an arbitrary position; illustratively, the corresponding component may be input at least one of the 2 input indicators to obtain the input content. For example, as shown in fig. 3c, the letter B is filled in at the first position, and the letter B is the input content.
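The Fig. 3a to 3c case can be sketched as follows: one entered letter plus the rule's fixed character yields the full label. This is a hedged sketch, not the embodiment's code; the rule encoding and the function name are assumptions.

```python
# Sketch of completing a label from a single entered component (Figs. 3a-3c).
# Rule: position 0 is a capital letter, position 1 is the fixed character ',
# rendered as a superscript of the letter.
def complete_label(rule, inputs):
    """inputs maps position -> entered character; fixed characters from
    the rule fill every position the user left blank."""
    out = []
    for pos, (kind, fixed) in enumerate(rule):
        if pos in inputs:
            out.append(inputs[pos])    # user-entered component
        elif kind == "char":
            out.append(fixed)          # the rule supplies the character '
        else:
            raise ValueError("letter positions need input or an order rule")
    return "".join(out)

rule = [("capital", None), ("char", "'")]
print(complete_label(rule, {0: "B"}))  # B'
```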
Fig. 3d to fig. 3f are schematic diagrams illustrating another implementation manner of obtaining input content of a second vertex according to an embodiment of the present application.
Fig. 3d illustrates an example in which the target geometry is a rectangle. It is assumed that the first number of components included in the auxiliary information of the vertices of the target geometry is 4, and the composition rule of the auxiliary information includes: the first position is a capital letter, the second position is a capital letter, the third position is the character ', the character ' is positioned as a superscript of the capital letter at the second position, the fourth position is a capital letter, and the capital letters at the first, second, and fourth positions follow the order of the 26 English letters.
Since this composition rule is more complex, the number of first vertices may be more than one; this example is described with two first vertices. As shown in fig. 3d, it is assumed that the first auxiliary information of the two first vertices is AB'C and BC'D. If the user clicks the display area of the second vertex of the rectangle, as shown in fig. 3d, 4 input indicators may be displayed at the display area corresponding to the second vertex, as shown in fig. 3e.
Figs. 3e and 3f take an underline _ as an example of the input indicator.
Illustratively, the user may input the corresponding component at an arbitrary position; illustratively, the corresponding component may be input at least one of the 4 input indicators to obtain the input content. For example, as shown in fig. 3f, the letter D is filled in at the second position, and the letter D is the input content.
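In the Fig. 3d to 3f case, the letter components within one label are consecutive in the 26-letter order, so a single entered letter determines the others. The sketch below illustrates that inference under assumed encodings; it is not the embodiment's implementation.

```python
# Sketch of the Fig. 3d-3f case: letters within a label are consecutive,
# so one entered letter anchors the whole sequence.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def complete_from_one_letter(letter_slots, entered_slot, letter):
    """letter_slots: indices (in letter order) of the label's letter
    positions; entered_slot: which of those slots the user filled."""
    base = ALPHABET.index(letter) - entered_slot
    return [ALPHABET[base + i] for i in range(len(letter_slots))]

# Label shape X Y' Z with consecutive letters; the user enters D as the
# second letter, so the letters are C, D, E, i.e. the label CD'E.
letters = complete_from_one_letter([0, 1, 2], 1, "D")
label = letters[0] + letters[1] + "'" + letters[2]
print(label)  # CD'E
```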
Illustratively, the input content includes the input characters and the positions of those characters within the second auxiliary information.
The second implementation of step S24 includes the following steps A21 to A23.
Step A21: if a touch instruction is detected in the display area where the auxiliary information of a vertex is to be input, that vertex is determined to be the second vertex.
Step A22: a preset position corresponding to the input content to be input is acquired, where the preset position is the position of the input content within the auxiliary information of the second vertex.
For example, the preset position may be set by the user and may be any position within the auxiliary information of the second vertex.
Step A23: input content input by a user is obtained.
Illustratively, prompt information may also be presented to inform the user of the position in the second auxiliary information at which the input content will be placed.
Illustratively, the input content includes the input characters and the positions of those characters within the second auxiliary information.
Step S25: and generating second auxiliary information aiming at the second vertex according to the input content and the composition rule.
For example, taking figs. 3a to 3c as an example, the generated second auxiliary information of the second vertex is B', as shown in fig. 3g. Taking figs. 3d to 3f as an example, the second auxiliary information of the second vertex is CD'E, as shown in fig. 3h.
The application provides an auxiliary information generation method and apparatus, an electronic device, and a storage medium: a target geometric figure is determined; first auxiliary information input for a first vertex of the target geometric figure is obtained; the first auxiliary information is analyzed to determine its composition rule; input content for a second vertex of the target geometric figure is obtained; and second auxiliary information for the second vertex is generated according to the input content and the composition rule. In the method and apparatus, after auxiliary information has been labeled on one vertex of the determined target geometric figure, the user only needs to input partial information at each of the other vertices when labeling them, and the remaining information can be supplemented automatically; the user does not need to input complete auxiliary information at every other vertex, which improves the speed of labeling the vertices of the target geometric figure.
In an optional implementation manner, there are various implementations of step S23; the embodiment of the present application provides, but is not limited to, the following two.
The first implementation of step S23 includes the following steps B11 through B12.
Step B11: and acquiring an image of a display area of the first auxiliary information.
Illustratively, the display area of the auxiliary information of the vertices of the target geometry is set in advance. The display area in which the first auxiliary information is displayed can be determined, resulting in an image of the display area, i.e. an image containing the first auxiliary information.
Step B12: and processing the image to obtain a composition rule of the first auxiliary information.
There are various ways to process the image, that is, the implementation manner of step B12, and the embodiment of the present application provides, but is not limited to, the following two ways.
The first implementation manner of step B12 includes: performing character recognition on the image to obtain a recognition result; and determining a composition rule of the first auxiliary information according to the identification result.
Illustratively, the image may be character-recognized using OCR (Optical Character Recognition) to obtain the components constituting the first auxiliary information.
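Once character recognition yields the text of the first auxiliary information, a simple composition rule can be derived from it. The sketch below illustrates this derivation only; the OCR step itself is omitted, and the category names and rule encoding are assumptions, not the embodiment's format.

```python
# Sketch of turning a recognized string (e.g. an OCR result such as "A'")
# into a simple composition rule: one (category, value) entry per character.
def infer_rule(recognized: str):
    rule = []
    for ch in recognized:
        if ch.isupper():
            rule.append(("capital", None))  # ordered by the alphabet
        elif ch.isdigit():
            rule.append(("digit", None))
        else:
            rule.append(("char", ch))       # fixed character, e.g. '
    return rule

print(infer_rule("A'"))  # [('capital', None), ('char', "'")]
```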
The second implementation manner of step B12 includes: and processing the image based on an analysis engine to obtain a composition rule of the first auxiliary information output by the analysis engine.
Illustratively, the parsing engine may be a pre-built character recognition model.
For example, the character recognition model is obtained by training a machine learning model with a large number of sample images as input and the composition rules corresponding to those sample images as training targets. Each sample image contains auxiliary information.
The process of training the character recognition model may involve at least one of the following machine learning techniques: artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
For example, the character recognition model may be any one of a neural network model, a logistic regression model, a linear regression model, a Support Vector Machine (SVM), AdaBoost, XGBoost, and a Transformer-encoder model.
Illustratively, the neural network model may be any one of a recurrent-neural-network-based model, a convolutional-neural-network-based model, and a Transformer-encoder-based classification model.
For example, the character recognition model may be a deep hybrid of a recurrent-neural-network-based model, a convolutional-neural-network-based model, and a Transformer-encoder-based classification model.
Illustratively, the character recognition model can be any one of an attention-based depth model, a memory network-based depth model, and a deep learning-based short text classification model.
The deep-learning-based short text classification model is a Recurrent Neural Network (RNN) or a Convolutional Neural Network (CNN), or is based on a variant of either.
For example, some simple domain adaptation may be performed on a pre-trained model to obtain a character recognition model.
Exemplary "simple domain adaptation" includes, but is not limited to, performing secondary pre-training of the pre-trained model on a large-scale unsupervised in-domain corpus, and/or compressing the pre-trained model by model distillation.
The second implementation of step S23 includes the following steps B21 through B22.
Step B21: first auxiliary information input by a user is acquired.
For example, if the first auxiliary information is input into the electronic device by the user, the electronic device may directly obtain the first auxiliary information without analyzing the image containing the first auxiliary information to obtain the first auxiliary information.
Step B22: and determining a composition rule according to the first auxiliary information.
In an optional implementation manner, there are various implementations of step S25; the embodiment of the present application provides, but is not limited to, the following, which includes steps C1 and C2.
Step C1: and determining the content to be supplemented based on the composition rule.
Still taking the composition rule of figs. 3d to 3f as an example, if the input content is the component C at the second position of the second auxiliary information, the content to be supplemented is: the component B at the first position, the character ' at the third position, and the component D at the fourth position.
Step C2: and combining the content to be supplemented with the input content according to the composition rule to obtain second auxiliary information aiming at the second vertex.
Still taking the composition rule of figs. 3d to 3f as an example, since the character ' at the third position is a superscript of the C at the second position, the second auxiliary information BC'D is obtained.
For example, the preset position in the second implementation of step S24 may be the first position of the second auxiliary information. That is, the input content corresponds to the first-input component of the first auxiliary information and is located at the first position of the second auxiliary information; the content to be supplemented corresponds to the components of the first auxiliary information other than the first-input one, and occupies the positions of the second auxiliary information other than the first position.
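Steps C1 and C2 can be sketched end to end: anchor the letter sequence on the entered component (step C1 determines the content to be supplemented), then merge input and supplemented components in rule order (step C2). The rule and offset encodings below are illustrative assumptions, not the embodiment's data structures.

```python
# Sketch of steps C1-C2: from the composition rule and one entered
# component, determine the content to be supplemented and combine it
# with the input in rule order.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def generate_second_info(rule, letter_offsets, entered):
    """rule: list of (kind, fixed); letter_offsets: offset of each capital
    letter relative to the label's first letter; entered: {position: char}."""
    pos, ch = next(iter(entered.items()))
    base = ALPHABET.index(ch) - letter_offsets[pos]       # step C1: anchor
    out = []
    for i, (kind, fixed) in enumerate(rule):
        if i in entered:
            out.append(entered[i])                        # user input
        elif kind == "capital":
            out.append(ALPHABET[base + letter_offsets[i]])  # supplemented
        else:
            out.append(fixed)                             # fixed char, e.g. '
    return "".join(out)                                   # step C2: combine

rule = [("capital", None), ("capital", None), ("char", "'"), ("capital", None)]
offsets = {0: 0, 1: 1, 3: 2}   # letters are consecutive: X, X+1, X+2
print(generate_second_info(rule, offsets, {1: "C"}))  # BC'D
print(generate_second_info(rule, offsets, {1: "D"}))  # CD'E (Figs. 3d-3f)
```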
Corresponding to the method embodiment, an embodiment of the present application further provides an auxiliary information generating apparatus, where a schematic structural diagram of the apparatus is shown in fig. 4, and the apparatus may include: a determining module 41, a first obtaining module 42, an analyzing module 43, a second obtaining module 44, and a generating module 45, wherein:
a determination module 41 for determining a target geometry;
a first obtaining module 42, configured to obtain first auxiliary information input for a first vertex of the target geometry;
an analysis module 43, configured to analyze the first auxiliary information to determine a composition rule of the first auxiliary information;
a second obtaining module 44, configured to obtain input content for a second vertex of the target geometry;
a generating module 45, configured to generate second auxiliary information for the second vertex according to the input content and the composition rule.
In an optional implementation, the determining module includes:
a receiving unit for receiving a plurality of input line segments;
and the figure determining unit is used for determining the geometric figure formed by the line segments as the target geometric figure.
In an optional implementation manner, the parsing module includes:
an image acquisition unit configured to acquire an image of a display area of the first auxiliary information;
and the obtaining rule unit is used for processing the image to obtain a composition rule of the first auxiliary information.
In an optional implementation manner, the obtaining rule unit includes:
the identification subunit is used for carrying out character identification on the image to obtain an identification result; determining a composition rule of the first auxiliary information according to the identification result;
or,
and the analysis subunit is used for processing the image based on an analysis engine to obtain a composition rule of the first auxiliary information output by the analysis engine.
In an optional implementation manner, the forming rule of the first auxiliary information includes:
the composition components of the first auxiliary information, and the relative position relationship between the composition components.
In an alternative implementation, the generating module includes:
a content determining unit for determining content to be supplemented based on the composition rule;
and the composition unit is used for combining the content to be supplemented with the input content according to the composition rule to obtain second auxiliary information aiming at the second vertex.
In an optional implementation manner, the input content corresponds to a first input component in the first auxiliary information, and the content to be supplemented corresponds to a non-first input component in the first auxiliary information.
Corresponding to the embodiment of the method, the present application further provides an electronic device, a schematic structural diagram of which is shown in fig. 5, and the electronic device may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
in the embodiment of the present application, the number of the processor 1, the communication interface 2, the memory 3, and the communication bus 4 is at least one, and the processor 1, the communication interface 2, and the memory 3 complete mutual communication through the communication bus 4.
The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The memory 3 may comprise a high-speed RAM memory and may further comprise non-volatile memory, such as at least one disk memory.
Wherein the memory 3 stores a program, and the processor 1 may call the program stored in the memory 3, the program being configured to:
determining a target geometric figure;
obtaining first auxiliary information input for a first vertex of the target geometry;
analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
obtaining input content for a second vertex of the target geometry;
and generating second auxiliary information aiming at the second vertex according to the input content and the composition rule.
Alternatively, the detailed function and the extended function of the program may be as described above.
Embodiments of the present application further provide a readable storage medium, where the storage medium may store a program adapted to be executed by a processor, where the program is configured to:
determining a target geometric figure;
obtaining first auxiliary information input for a first vertex of the target geometry;
analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
obtaining input content for a second vertex of the target geometry;
and generating second auxiliary information aiming at the second vertex according to the input content and the composition rule.
Alternatively, the detailed function and the extended function of the program may be as described above.
In an exemplary embodiment, there is also provided a computer program product directly loadable into an internal memory of a computer, for example a memory comprised by said server, and containing software code enabling, when loaded and executed by the computer, to:
determining a target geometric figure;
obtaining first auxiliary information input for a first vertex of the target geometry;
analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
obtaining input content for a second vertex of the target geometry;
and generating second auxiliary information aiming at the second vertex according to the input content and the composition rule.
Alternatively, the detailed function and the extended function of the program may be as described above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that the features of the embodiments and of the claims may be combined with one another to solve the technical problems.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of auxiliary information generation, the method comprising:
determining a target geometric figure;
obtaining first auxiliary information input for a first vertex of the target geometry;
analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
obtaining input content for a second vertex of the target geometry;
and generating second auxiliary information aiming at the second vertex according to the input content and the composition rule.
2. The method of claim 1, the determining a target geometry, comprising:
receiving a plurality of input line segments;
and determining the geometric figure formed by the line segments as a target geometric figure.
3. The method of claim 1, the parsing the first assistance information, comprising:
acquiring an image of a display area of the first auxiliary information;
and processing the image to obtain a composition rule of the first auxiliary information.
4. The method according to claim 3, wherein the processing the image to obtain the composition rule of the first auxiliary information comprises:
performing character recognition on the image to obtain a recognition result; determining a composition rule of the first auxiliary information according to the identification result;
or,
and processing the image based on an analysis engine to obtain a composition rule of the first auxiliary information output by the analysis engine.
5. The method according to any one of claims 1 to 4, wherein the composition rule of the first auxiliary information comprises:
the composition components of the first auxiliary information, and the relative position relationship between the composition components.
6. The method of claim 5, the generating second auxiliary information for the second vertex from the input content and the composition rule, comprising:
determining content to be supplemented based on the composition rule;
and combining the content to be supplemented with the input content according to the composition rule to obtain second auxiliary information aiming at the second vertex.
7. The method of claim 6, wherein the input content corresponds to a first input component of the first auxiliary information, and the content to be supplemented corresponds to a non-first input component of the first auxiliary information.
8. An assistance information generating apparatus, the apparatus comprising:
a determination module for determining a target geometry;
the first acquisition module is used for acquiring first auxiliary information input aiming at a first vertex of the target geometric figure;
the analysis module is used for analyzing the first auxiliary information to determine a constitution rule of the first auxiliary information;
a second obtaining module, configured to obtain input content for a second vertex of the target geometry;
a generating module, configured to generate second auxiliary information for the second vertex according to the input content and the composition rule.
9. An electronic device, comprising:
a memory for storing a program;
a processor for calling and executing the program in the memory, the steps of the auxiliary information generating method according to any one of claims 1 to 7 being implemented by executing the program.
10. A readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the assistance information generation method according to any one of claims 1 to 7.
CN202111086983.1A 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium Active CN113778281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111086983.1A CN113778281B (en) 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113778281A true CN113778281A (en) 2021-12-10
CN113778281B CN113778281B (en) 2024-06-21

Family

ID=78851399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111086983.1A Active CN113778281B (en) 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113778281B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6978230B1 (en) * 2000-10-10 2005-12-20 International Business Machines Corporation Apparatus, system, and method for draping annotations on to a geometric surface
CN102163340A (en) * 2011-04-18 2011-08-24 宁波万里电子科技有限公司 Method for labeling three-dimensional (3D) dynamic geometric figure data information in computer system
CN106504181A (en) * 2015-09-08 2017-03-15 想象技术有限公司 For processing graphic processing method and the system of subgraph unit
CN108345440A (en) * 2017-01-22 2018-07-31 亿度慧达教育科技(北京)有限公司 A kind of method and its device of the geometric figure auxiliary line of display addition
CN109976614A (en) * 2019-03-28 2019-07-05 广州视源电子科技股份有限公司 Method, device, equipment and medium for marking three-dimensional graph
CN112308946A (en) * 2020-11-09 2021-02-02 电子科技大学中山学院 Topic generation method and device, electronic equipment and readable storage medium


Also Published As

Publication number Publication date
CN113778281B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
EP3712812A1 (en) Recognizing typewritten and handwritten characters using end-to-end deep learning
KR101486174B1 (en) Method and apparatus for segmenting strokes of overlapped handwriting into one or more groups
CN109358766B (en) Progress display of handwriting input
EP3522038A1 (en) Method for translating characters and apparatus therefor
CN111695518B (en) Method and device for labeling structured document information and electronic equipment
JP6914260B2 (en) Systems and methods to beautify digital ink
WO2011150415A2 (en) Methods and systems for automated creation, recognition and display of icons
CN109074223A (en) For carrying out the method and system of character insertion in character string
CN103902098A (en) Shaping device and shaping method
CN112686134A (en) Handwriting recognition method and device, electronic equipment and storage medium
CN102750552A (en) Handwriting recognition method and system as well as handwriting recognition terminal
CN108700978B (en) Assigning textures to graphical keyboards based on subject textures of an application
CN106650720A (en) Method, device and system for network marking based on character recognition technology
US7911452B2 (en) Pen input method and device for pen computing system
CN114730241A (en) Gesture stroke recognition in touch user interface input
CN115393872A (en) Method, device and equipment for training text classification model and storage medium
JP2017090998A (en) Character recognizing program, and character recognizing device
CN111783393B (en) Handwritten note synchronization method, equipment and storage medium during bilingual comparison reading
CN113687724A (en) Candidate character display method and device and electronic equipment
CN113220125A (en) Finger interaction method and device, electronic equipment and computer storage medium
CN109977873B (en) Handwriting-based note generation method, electronic equipment and storage medium
CN113778281B (en) Auxiliary information generation method and device, electronic equipment and storage medium
CN116311300A (en) Table generation method, apparatus, electronic device and storage medium
US10127478B2 (en) Electronic apparatus and method
CN102375655A (en) Alphabet input processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant