CN117649669A - Formula calculation method, device, equipment and storage medium - Google Patents

Formula calculation method, device, equipment and storage medium

Info

Publication number: CN117649669A
Authority: CN (China)
Prior art keywords: text, target, result, character, calculation
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202311684685.1A
Other languages: Chinese (zh)
Inventors: 王鹏, 袁野, 白锦峰
Current Assignee: Beijing Century TAL Education Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Century TAL Education Technology Co Ltd
Application filed by Beijing Century TAL Education Technology Co Ltd
Priority to CN202311684685.1A
Publication of CN117649669A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Machine Translation (AREA)

Abstract

The present disclosure relates to a formula calculation method, apparatus, device, and storage medium. The formula calculation method includes: obtaining a target expression to be calculated, the target expression including at least one operation symbol; and calculating a target result of the target expression with a pre-trained calculation model corresponding to the operation symbol to obtain an output result. The output result includes at least one expression and the target result, the at least one expression characterizes the calculation steps by which the target result is obtained, and the target result is obtained by combining target characters from the at least one expression. The method provided by the disclosure can improve the calculation accuracy of expressions.

Description

Formula calculation method, device, equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method, a device, equipment and a storage medium for calculating an arithmetic expression.
Background
With the rapid development of computer technology, artificial intelligence is widely used in the education field, and arithmetic expressions are often calculated with a large language model. A large language model is a natural language processing (NLP) model with strong understanding and generation capabilities that can produce fluent, coherent, and creative text.
At present, large language models mostly handle simple arithmetic tasks learned from directly given expressions and calculation results. For complex arithmetic tasks over high-order data, directly feeding the expression and its calculation result to the large language model is difficult to learn from, so the calculation accuracy is low.
Disclosure of Invention
In order to solve the above technical problems, the disclosure provides a formula calculation method, device, equipment, and storage medium, which improve the calculation accuracy of arithmetic expressions.
According to an aspect of the present disclosure, there is provided a formula calculation method including:
obtaining a target arithmetic expression to be calculated, wherein the target arithmetic expression comprises at least one operation symbol;
calculating a target result of the target calculation formula through a pre-trained calculation model corresponding to the operation symbol to obtain an output result;
the output result comprises at least one expression and the target result, the at least one expression characterizes a calculation step of obtaining the target result, and the target result is obtained according to a target character combination in the at least one expression.
According to another aspect of the present disclosure, there is provided an arithmetic calculation device including:
an acquisition unit and a calculation unit, wherein the acquisition unit is configured to acquire a target calculation formula to be calculated, and the target calculation formula comprises at least one operation symbol;
The calculation unit is used for calculating a target result of the target calculation formula through a pre-trained calculation model corresponding to the operation symbol to obtain an output result;
the output result comprises at least one expression and the target result, the at least one expression characterizes a calculation step of obtaining the target result, and the target result is obtained according to a target character combination in the at least one expression.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory storing a program, wherein the program comprises instructions that, when executed by the processor, cause the processor to perform the formula calculation method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the formula calculation method described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the formula calculation method described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
obtaining a target expression to be calculated, the target expression including at least one operation symbol; and calculating a target result of the target expression with a pre-trained calculation model corresponding to the operation symbol to obtain an output result, wherein the output result includes at least one expression and the target result, the at least one expression characterizes the calculation steps by which the target result is obtained, and the target result is obtained by combining target characters from the at least one expression. The method provided by the disclosure improves the calculation accuracy of expressions.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flowchart of a training method for a computing model provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of model calculation accuracy provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method of calculating an equation according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another method for computing an equation according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a computing device according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of the functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
A large language model is a natural language processing (NLP) model with strong understanding and generation capabilities that can produce fluent, coherent, and creative text. By constructing data suited to different operations so that it can be learned by a large language model, the arithmetic capability of the large language model can be enhanced, which has wide technical and product applications. For example, on an educational technology platform, immediate feedback can be given for personalized exercise questions; in financial technology, complex mathematical calculations can be performed with an efficient, high-accuracy tool; and when a robot interacts with a user, basic calculations can be carried out quickly.
Currently, a large language model can learn simple arithmetic tasks from directly given expressions and calculation results. However, for complex arithmetic tasks over high-order data, this way of constructing data, i.e., directly giving the expression and its calculation result, makes learning very difficult; in other words, large language models learn complex arithmetic tasks poorly from such data.
In the prior art, to endow large language models with arithmetic capability, many studies have proposed new data construction methods, showing that a complex arithmetic task can be broken down into a plurality of learnable steps using a chain of thought. However, current methods have the following problem: for training data involving high-order division, proper chain-of-thought decomposition steps are lacking, so model training cost is high, training efficiency is low, and the calculation accuracy of the model is low.
Specifically, the formula calculation method may be performed by a terminal or a server. Specifically, the terminal or the server may estimate the result of the target expression through a calculation model. The execution subject of the training method of the calculation model and the execution subject of the calculation method of the expression may be the same or different.
For example, in one application scenario, a server trains a computing model. The terminal acquires a trained calculation model from the server, and estimates the result of the target expression through the trained calculation model. The target expression may be expression text in a target image obtained by the terminal photographing. Alternatively, the target expression is obtained by the terminal from other devices. Still alternatively, the target image is an image obtained after the terminal performs image processing on a preset image, where the preset image may be obtained by shooting the terminal, or the preset image may be obtained by the terminal from another device. Here, the other devices are not particularly limited.
In another application scenario, a server trains a computing model. Further, the server estimates the result of the target expression by training the completed calculation model. The manner in which the server obtains the target expression may be similar to the manner in which the terminal obtains the target expression as described above, and will not be described here again.
In yet another application scenario, the terminal trains the computational model. Further, the terminal estimates the result of the target expression through the trained calculation model.
It will be appreciated that the training method of the calculation model and the formula calculation method provided by the embodiments of the present disclosure are not limited to the several possible scenarios described above. Since the trained calculation model is applied in the formula calculation method, the training method of the calculation model is described first, before the formula calculation method itself.
Taking a server training calculation model as an example, a training method of the calculation model, namely a training process of the calculation model, is introduced. It can be appreciated that the method for training the computing model is also applicable to the scenario of training the computing model by the terminal.
Fig. 1 is a flowchart of a training method of a computing model according to an embodiment of the present disclosure, which specifically includes the following steps S101 to S102 shown in fig. 1:
s101, acquiring a plurality of training samples, analyzing the plurality of training samples, and generating an analysis sample corresponding to each training sample.
The analysis sample comprises at least one calculation formula sample and a calculation result of the training sample.
It can be understood that a plurality of training samples are obtained. A training sample is an expression to be calculated and may be a high-order division expression, for example 569/50. Specifically, a plurality of target images may be acquired and the expressions in the target images recognized as training samples; the training samples may also be obtained directly from a database or generated randomly, and the specific way of obtaining the training samples is not limited here. Each training sample is then parsed to generate a parsed sample of that training sample, that is, the training sample is decomposed to construct data suitable for the division operation. The parsed sample includes at least one expression sample and the calculation result of the training sample, and the at least one expression sample characterizes the entire calculation process of the expression to be calculated; in other words, during data construction the quotient and remainder of every step are given explicitly in order to preserve all information of the calculation process. Specifically, each expression sample is constructed as D_i - i×D (cal(i×D)), where D_i is the temporary dividend of the step, D is the divisor, i is the quotient of the step and a positive integer, and cal(·) denotes the calculated value of the product in brackets. For example, for the training sample 569/50 the parsed sample is 56-1×50(50)=69-1×50(50)=19=11..19, where 56-1×50(50) is the first calculation step, (50) is the value of 1×50, 69-1×50(50) is the second calculation step, 19 is the third calculation step, i.e., the final remainder, and 11..19 is the calculation result (quotient 11, remainder 19). Each expression sample is composed of at least one text, and each text includes at least one character, e.g., 56, 1, and 50 in the first calculation step; the calculation result is composed of at least part of the texts in the expression samples, namely the quotient of each step and the final remainder.
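As a rough illustration of the data construction described above, the following Python sketch (not part of the original disclosure) generates a parsed-sample string in the format of the 569/50 example; the requirement that the dividend be at least the divisor and the handling of zero quotient digits are assumptions not spelled out in the description.

    def build_parsed_sample(dividend: int, divisor: int) -> str:
        """Builds a chain-of-thought string such as
        '569/50=56-1×50(50)=69-1×50(50)=19=11..19' for integer long division."""
        assert dividend >= divisor > 0, "sketch assumes dividend >= divisor > 0"
        digits = str(dividend)
        m = len(str(divisor))
        temp = int(digits[:m])                 # temporary dividend: first m characters
        rest = digits[m:]                      # characters not yet brought down
        steps, quotient_digits = [], []
        while True:
            if temp >= divisor:
                q = temp // divisor                          # quotient of this step
                product = q * divisor                        # cal(q × D)
                steps.append(f"{temp}-{q}×{divisor}({product})")
                quotient_digits.append(str(q))
                temp -= product                              # remainder of this step
            elif quotient_digits:
                quotient_digits.append("0")                  # assumed: zero quotient digit mid-stream
            if not rest:                                     # all characters consumed
                break
            temp = temp * 10 + int(rest[0])                  # bring down the next character
            rest = rest[1:]
        target_result = "".join(quotient_digits) + ".." + str(temp)
        return f"{dividend}/{divisor}=" + "=".join(steps + [str(temp), target_result])

    print(build_parsed_sample(569, 50))   # 569/50=56-1×50(50)=69-1×50(50)=19=11..19

For 102/50 the same sketch yields 102/50=102-2×50(100)=2=2..2, which matches the example given later in the description.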
S102, training a pre-constructed calculation model through the training sample and the analysis sample to obtain a trained calculation model.
It can be understood that, on the basis of S101, the parsed sample and the training sample obtained by the above decomposition are used as model input, and the model is trained. For example, the model input information is 569/50=56-1×50(50)=69-1×50(50)=19=11..19. The pre-built model may be a large language model or another network model that can be used for expression calculation, which is not limited here. In addition, the output result of the calculation model has the same data structure as the input information, that is, the output result includes the expression to be calculated, an equal sign, at least one expression, and the calculation result.
Preferably, in terms of data construction, for the division calculation method, random operands within 1,000,000 and 10,000 can be constructed, yielding 3,000,000 pieces of training data. The training data is input into an untrained GLM model for 10,000 training steps. After training, 1,000 pieces of test data are constructed, and the model calculation accuracy (ACC) is tested every 200 training steps. Referring to FIG. 2, a schematic diagram of model calculation accuracy provided by an embodiment of the disclosure, the model calculation accuracy increases as the number of training steps increases. Based on FIG. 2, it can be determined that the model learns the division calculation method within a small number of training steps, which also illustrates the effectiveness and efficiency of constructing training data that records the entire calculation process for model training.
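Purely to illustrate the scale described above, a corpus of such samples could be generated along the following lines, reusing build_parsed_sample from the earlier sketch; the exact operand ranges, the output file format, and the training pipeline are not specified in the description and are assumed here.

    import random

    def build_division_corpus(n_samples: int = 3_000_000,
                              max_dividend: int = 1_000_000,
                              max_divisor: int = 10_000,
                              path: str = "division_train.txt") -> None:
        """Writes one chain-of-thought training line per randomly sampled division."""
        with open(path, "w", encoding="utf-8") as f:
            for _ in range(n_samples):
                divisor = random.randint(1, max_divisor)
                dividend = random.randint(divisor, max_dividend)   # keep dividend >= divisor
                f.write(build_parsed_sample(dividend, divisor) + "\n")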
It can be understood that, for each calculation method, a model applicable only to that calculation method may be constructed, or a model applicable to multiple calculation methods may be constructed; the number of calculation methods a constructed model supports is not limited and may be determined according to requirements.
According to the training method for the calculation model provided by the embodiments of the disclosure, training data that retains all information of the calculation process is constructed. Training data constructed in this way preserves the complexity of the division calculation and is fed to the model as text, so the model can learn the arithmetic task quickly. Meanwhile, the length of the training data does not grow with the square of the number of digits, which guarantees learning efficiency. A model trained with data constructed in this way is therefore applicable to complex arithmetic tasks, i.e., it is not affected by the difficulty of the arithmetic task, and the trained model has high calculation accuracy.
On the basis of the above embodiment, fig. 3 is a flowchart of a calculation method of an expression provided in an embodiment of the disclosure, specifically including the following steps S301 to S302 shown in fig. 3;
s301, obtaining a target expression to be calculated, wherein the target expression comprises at least one operation symbol.
It can be understood that the target expression to be calculated is obtained, and the target expression includes at least one operation symbol and at least two texts, each text including at least one character; when a text includes a plurality of characters, the characters are contiguous and uninterrupted. Specifically, a target image may be acquired and text recognition performed on the target image to obtain the target expression. The following embodiments are described in detail by taking a target expression that includes a division symbol as an example.
S302, calculating a target result of the target calculation formula through a pre-trained calculation model corresponding to the operation symbol to obtain an output result.
The output result comprises the target expression, at least one expression and the target result, wherein the at least one expression represents a calculation step for obtaining the target result, and the target result is obtained according to a target character combination in the at least one expression.
It can be understood that, on the basis of S301 above, a calculation model corresponding to the operation symbol is selected from the pre-trained models, and the target result of the target expression is then calculated by the calculation model to obtain an output result. The structure of the output result is the same as the structure of the training data constructed during model training: the output result includes the target expression, at least one expression, and the target result; the calculation relationship is represented by equal signs, and the expressions are connected in calculation order by the equal signs; the at least one expression includes at least part of the information of the target expression in the calculation process; and the target result refers to the calculation result of the target expression.
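A minimal sketch of this dispatch step is shown below; it is not part of the original disclosure, and the generate interface and the dictionary of per-operator models are assumptions, since the description does not prescribe an API.

    from typing import Protocol

    class CalcModel(Protocol):
        def generate(self, prompt: str) -> str: ...

    def compute_expression(target_expression: str,
                           models: dict[str, CalcModel]) -> tuple[str, str]:
        """Selects the pre-trained model matching the operation symbol, lets it
        generate the full output, and extracts the target result."""
        symbol = next(ch for ch in target_expression if ch in models)
        output = models[symbol].generate(target_expression)   # e.g. '569/50=...=19=11..19'
        target_result = output.split("=")[-1]                  # final segment, e.g. '11..19'
        return output, target_result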
It is understood that the output result is displayed. By explicitly giving the quotient and remainder of each step, the user can intuitively understand the calculation process, and the learning experience is improved.
Wherein the target expression further includes a plurality of characters.
Optionally, in S302, the target result of the target formula is calculated to obtain an output result, which may be specifically implemented by the following steps:
under the condition that the operation symbol is a preset symbol, taking a section of continuous characters positioned before the preset symbol in the target arithmetic expression as a first text, and taking a section of continuous characters positioned after the preset symbol in the target arithmetic expression as a second text; and under the condition that the first text is greater than or equal to the second text, performing the operation according to the first text and the second text to obtain the output result.
It can be understood that, in the case that the operation symbol is a preset symbol, a calculation model capable of performing the task of the preset symbol is selected. The calculation model calculates the target result as follows: a segment of continuous characters located before and adjacent to the preset symbol in the target expression is taken as the first text, and a segment of continuous characters located after and adjacent to the preset symbol is taken as the second text; for example, in 569/50 above, '/' is the preset symbol, '569' is the first text, and '50' is the second text. Subsequently, when the first text and the second text satisfy a first condition, i.e., the value represented by the first text is greater than or equal to the value represented by the second text (for example, the first text '569' is greater than the second text '50'), the calculation is performed according to the first text and the second text to obtain the output result. When the first text and the second text satisfy the first condition, the number of characters included in the first text is greater than or equal to the number of characters included in the second text, where the number of characters refers to the number of significant digits.
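A simple sketch of this splitting and of the first-condition check follows; it is an illustration only, and the operator symbol '/' is taken from the running example.

    def split_expression(target_expression: str, preset_symbol: str = "/") -> tuple[str, str]:
        """Splits e.g. '569/50' into the first text '569' and the second text '50'."""
        first_text, second_text = target_expression.split(preset_symbol, 1)
        return first_text.strip(), second_text.strip()

    first_text, second_text = split_expression("569/50")
    assert int(first_text) >= int(second_text)   # first condition: 569 >= 50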
Optionally, in a case where the first text is smaller than the second text, the method includes:
adding a second preset character into the first text until the obtained third text is greater than or equal to the second text; according to the third text and the second text, operation is carried out, and an initial result is obtained; and obtaining the output result according to the initial result and the third preset character.
It will be appreciated that, in the case where the first text and the second text satisfy a second condition, i.e., the value represented by the first text is smaller than the value represented by the second text, a second preset character is added after the last character of the first text until the resulting third text and the second text satisfy the first condition. The second preset character may be 0. For example, if the first text is 25 and the second text is 50, the first text 25 is smaller than the second text 50, so 0 is appended after the first text 25 to obtain the third text 250, and the third text 250 is greater than the second text 50. The calculation is then performed according to the third text and the second text to obtain an initial result, and a third preset character is added before the first character of the initial result to obtain the output result; the third preset character is preferably '0.'. Based on the example above, the initial result of 250/50 is 5, and adding '0.' before the initial result 5 gives the final result 0.5. The third preset character is selected according to the number of second preset characters that were added: if one second preset character is added after the last character of the first text, the third preset character is '0.'; if two second preset characters are added after the last character of the first text, the third preset character is '0.0'; and so on, which is not repeated here.
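The zero-appending case can be illustrated with the following sketch; it is an assumption-laden illustration that mirrors the 25/50 example and assumes the division after appending zeros is exact, since remainder handling for this case is not described.

    def divide_smaller(first_text: str, second_text: str) -> str:
        """Mirrors the example 25/50 -> 0.5: append zeros to the smaller first text,
        divide, then prefix the initial result with the third preset character."""
        zeros = 0
        third_text = first_text
        while int(third_text) < int(second_text):      # first condition not yet satisfied
            third_text += "0"                          # second preset character: 0
            zeros += 1
        initial_result = int(third_text) // int(second_text)   # e.g. 250 // 50 = 5
        third_preset = "0." + "0" * (zeros - 1)                 # e.g. '0.' for one appended zero
        return third_preset + str(initial_result)

    print(divide_smaller("25", "50"))   # 0.5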
According to the formula calculation method provided by the embodiments of the disclosure, a chain-of-thought decomposition step is provided for high-order data and training data is constructed accordingly, so that the calculation model trained on this data is efficient and accurate and can calculate the result of the target expression quickly and accurately. Meanwhile, all information of the calculation process is given explicitly, which allows the user to intuitively follow each calculation step and improves user experience.
On the basis of the foregoing embodiment, FIG. 4 is a flowchart of another formula calculation method according to an embodiment of the present disclosure. Optionally, the calculating according to the first text and the second text to obtain the output result specifically includes the following steps S401 to S403 shown in FIG. 4:
s401, counting the number of characters included in the second text, and taking the number of characters in the first text as a target text.
It can be understood that the number of characters included in the first text is counted and denoted n, and the number of characters included in the second text is counted and denoted m. Considering the common vertical (long-division) calculation scheme, i.e., the general case where the dividend (the first text, denoted S) is larger than the divisor (the second text, denoted D), n is not smaller than m. The first m characters of the first text are selected as the target text, which can be understood as the temporary dividend; that is, the first m digits of the dividend S are selected as the temporary dividend D_i.
S402, determining a first character, so that a first operation result obtained after the second text and the first character are subjected to a first operation is smaller than or equal to the target text.
Wherein the first character is a positive integer.
It can be understood that, on the basis of S401, the first character is determined such that the first operation result obtained after the second text and the first character undergo the first operation is less than or equal to the target text. The first operation is multiplication, that is, the largest positive integer i is found such that D×i is less than or equal to D_i; i is the first character, and i is also the quotient of this step.
S403, calculating a first difference value between the target text and the first operation result, and obtaining the output result when the first difference value is smaller than the second text and the first text does not include other texts except the target text.
Wherein the calculation in which the second text and the first character undergo the first operation to obtain the first operation result, and the difference between the target text and the first operation result is calculated, forms a first operation formula; the first character and the first difference constitute the target result, and the at least one expression includes the first operation formula.
It can be appreciated that, on the basis of step S402, the difference between the target text and the first operation result is calculated to obtain a first difference, which is the remainder, i.e., D_i - D×i is calculated to obtain the remainder of this step. When the first difference is smaller than the second text and the first text does not include any text other than the target text, that is, n=m, the target result is obtained after a single calculation step. The first character and the first difference are then combined to obtain the target result; the calculation process in which the second text and the first character undergo the first operation to obtain the first operation result, together with the calculation of the difference between the target text and the first operation result, forms the first operation formula; and the output result is composed of the target expression, the first operation formula, and the target result. For example, the output result of the target expression 56/50 is: 56/50=56-1×50(50)=6=1..6, where 56-1×50(50) is the first operation formula, (50) is the first operation result, the final remainder is 6, and 1..6 is the target result.
Optionally, after calculating the first difference between the target text and the first operation result, in a case where the first text includes the rest of texts other than the target text, the method further includes:
calculating the sum of a second character and the result of performing the first operation on the first difference and a first preset character, to obtain a second operation result, wherein the second character is the first character of the remaining text; determining a third character such that a third operation result obtained by applying the first operation to the second text and the third character is less than or equal to the second operation result; and calculating a second difference between the second operation result and the third operation result, until the second difference is smaller than the second text and the remaining text includes no characters other than the second character, to obtain the output result; wherein the calculation in which the second text and the third character undergo the first operation to obtain the third operation result, and the difference between the second operation result and the third operation result is calculated, forms a second operation formula, the at least one expression further includes the second operation formula, and accordingly the first character, the third character, and the second difference form the target result.
It will be appreciated that this covers the case where the first text includes remaining text other than the target text, that is, n is greater than m, and the first difference is smaller than the second text. In this case, the first operation is applied to the first difference and a first preset character, where the first preset character may be 10, and the second character is added to the resulting value to obtain a second operation result, the second character being the first character of the remaining text; in other words, the remainder is multiplied by 10 and the (m+1)-th digit of S is added, so that the temporary dividend is reconstructed. For example, in the example above, the first difference obtained in the first step of 569/50 is 6, the target text is 56, the remaining text is 9, and the first character of the remaining text is the (m+1)-th character of 569, i.e., 9, so the second character is 9; the remainder 6 is multiplied by 10 and 9 is added to obtain the second operation result 69. Then, a third character is determined such that the product of the third character and the second text, i.e., the third operation result, is less than or equal to the second operation result; that is, the step of determining the quotient is repeated. Based on the example above, 1×50 is less than or equal to 69, so the third character is 1 and the third operation result is 50. The difference between the second operation result and the third operation result is then calculated to obtain a second difference, i.e., the step of determining the remainder is repeated: 69-50=19, so the second difference is 19. When the remaining text still includes characters other than the second character, this calculation process is repeated until the difference is smaller than the second text and the last character of the first text has been calculated, giving the final output result; in the example above, the second character 9 is already the last character of the first text. Specifically, the calculation in which the second text and the third character undergo the first operation to obtain the third operation result, together with the calculation of the difference between the second operation result and the third operation result, forms the second operation formula; based on the example above, the second operation formula is 69-1×50(50) and the final remainder is 19. In this case, the target expression takes two calculation steps to obtain the target result, and the target result is composed of the quotients of the first and second operation formulas and the final remainder: the quotient of the first operation formula is 1, the quotient of the second operation formula is also 1, and the final remainder is 19. That is, the quotients of the operation formulas are spliced in calculation order to obtain a first result, and the fourth preset character and the final remainder are appended to the first result in sequence to obtain the target result, which is 11..19; the fourth preset character is '..'.
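The splicing of the per-step quotients with the final remainder described above can be sketched as follows; the parsing of the step expressions and the '..' separator follow the worked example and are otherwise assumptions.

    def assemble_target_result(expressions: list[str], final_remainder: int) -> str:
        """Splices the quotient of each step expression in calculation order and
        appends the separator and the final remainder, e.g.
        ['56-1×50(50)', '69-1×50(50)'] with remainder 19 -> '11..19'."""
        quotients = [expr.split("-")[1].split("×")[0] for expr in expressions]   # '1', '1'
        return "".join(quotients) + ".." + str(final_remainder)

    print(assemble_target_result(["56-1×50(50)", "69-1×50(50)"], 19))   # 11..19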
Optionally, in a case where the target text is smaller than the second text, the method further includes:
calculating the sum of a fourth character and the result of performing the first operation on the target text and the first preset character, to obtain a fourth operation result, and continuing to determine the first character by taking the fourth operation result as the target text; wherein the fourth character refers to the remaining characters of the first text other than the target text.
It will be appreciated that this covers the case where the first character does not exist and/or the target text is smaller than the second text. The first character not existing means that there is no positive integer i such that the product of the second text and i is less than or equal to the target text, i.e., the target text is smaller than the second text. The target text being smaller than the second text can also occur when the first text is greater than the second text and the number of characters in the first text is greater than the number of characters in the second text, but the selected target text is still smaller than the second text. For example, for 102/50, the first text 102 is greater than the second text 50, the first text includes 3 characters and the second text includes 2 characters, yet the target text 10, obtained by selecting the first 2 characters of the first text 102, is smaller than the second text 50. In this case, the first operation is performed directly on the target text and the first preset character, the fourth character is added to obtain the fourth operation result, i.e., 10×10+2=102, the fourth operation result is then taken as the target text, and the calculation steps of determining the quotient and the remainder are repeated. The output result of the target expression 102/50 is 102/50=102-2×50(100)=2=2..2.
According to the formula calculation method provided by the embodiments of the disclosure, when the first text is greater than the second text, a temporary text with the same number of digits as the second text is determined in the first text, and the quotient and remainder of the temporary text and the second text are calculated. When the first text still contains uncalculated characters, the remainder is multiplied by the first preset character and the first uncalculated character is added to obtain a new temporary text; these calculation steps are repeated until the final remainder is smaller than the second text and the last character of the first text has been processed, yielding the calculation result. This step-by-step scheme of gradually decomposing high-order data effectively improves calculation efficiency and calculation precision.
Fig. 5 is a schematic structural diagram of an arithmetic computing device according to an embodiment of the present disclosure, where the arithmetic computing device 500 is applied to a server, and the device 500 is configured to execute the above arithmetic computing method, and the device 500 specifically includes an obtaining unit 501 and a computing unit 502, where:
an obtaining unit 501, configured to obtain a target expression to be calculated, where the target expression includes at least one operation symbol;
the calculating unit 502 is configured to calculate, according to a pre-trained calculation model corresponding to the operation symbol, a target result of the target expression, so as to obtain an output result;
The output result comprises at least one expression and the target result, the at least one expression characterizes a calculation step of obtaining the target result, and the target result is obtained according to a target character combination in the at least one expression.
Wherein the target expression further includes a plurality of characters.
Optionally, the computing unit 502 is configured to:
under the condition that the operation symbol is a preset symbol, taking a section of continuous characters positioned before the preset symbol in the target arithmetic expression as a first text, and taking a section of continuous characters positioned after the preset symbol in the target arithmetic expression as a second text;
and under the condition that the first text is larger than or equal to the second text, carrying out operation according to the first text and the second text to obtain the output result.
Optionally, the computing unit 502 is configured to:
counting the number of characters included in the second text, and taking that number of leading characters of the first text as a target text;
determining a first character, so that a first operation result obtained after the second text and the first character are subjected to a first operation is smaller than or equal to the target text, wherein the first character is a positive integer;
Calculating a first difference value between the target text and the first operation result, and obtaining the output result when the first difference value is smaller than the second text and the first text does not include other texts except the target text;
wherein the calculation in which the second text and the first character undergo the first operation to obtain the first operation result, and the difference between the target text and the first operation result is calculated, forms a first operation formula; the first character and the first difference form the target result, and the at least one expression includes the first operation formula.
Optionally, the computing unit 502 is configured to:
calculating the sum of a second character and the result of performing the first operation on the first difference and a first preset character, to obtain a second operation result, wherein the second character is the first character of the remaining text;
determining a third character, so that a third operation result obtained after the second text and the third character pass through the first operation is smaller than or equal to the second operation result;
calculating a second difference value of the second operation result and the third operation result until the second difference value is smaller than the second text and the rest texts do not include other characters except the second character, so as to obtain the output result;
The calculation in which the second text and the third character undergo the first operation to obtain the third operation result, and the difference between the second operation result and the third operation result is calculated, forms a second operation formula; the at least one expression further comprises the second operation formula, and accordingly, the first character, the third character, and the second difference form the target result.
Optionally, the computing unit 502 is configured to:
if the first character does not exist and/or the target text is smaller than the second text, calculating the sum of a fourth character and the result of performing the first operation on the target text and a first preset character, to obtain a fourth operation result, and continuing to determine the first character by taking the fourth operation result as the target text;
wherein the fourth character refers to the remaining characters of the first text other than the target text.
Optionally, the computing unit 502 is configured to:
adding a second preset character into the first text until the obtained third text is greater than or equal to the second text;
according to the third text and the second text, operation is carried out, and an initial result is obtained;
And obtaining the output result according to the initial result and the third preset character.
Optionally, the apparatus 500 is further configured to:
obtaining a plurality of training samples, analyzing the plurality of training samples, and generating an analysis sample corresponding to each training sample, wherein the analysis sample comprises at least one arithmetic sample and a calculation result of the training sample;
and training a pre-constructed calculation model through the training sample and the analysis sample to obtain a trained calculation model.
The device provided in this embodiment has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, for parts of the device embodiment not mentioned here, reference may be made to the corresponding content of the foregoing method embodiments.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
Referring to fig. 6, a block diagram of an electronic device 600 that may be a server or client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the electronic device 600; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 608 may include, but is not limited to, magnetic disks and optical disks. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network, such as the Internet, and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above. For example, in some embodiments, the computational methods or training methods of the computational model may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. In some embodiments, the computing unit 601 may be configured to perform a computational method or a training method of a computational model by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data service), or that includes a middleware component (e.g., an application service), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and the server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of calculating an expression, comprising:
obtaining a target arithmetic expression to be calculated, wherein the target arithmetic expression comprises at least one operation symbol;
calculating a target result of the target calculation formula through a pre-trained calculation model corresponding to the operation symbol to obtain an output result;
the output result comprises at least one expression and the target result, the at least one expression characterizes a calculation step of obtaining the target result, and the target result is obtained according to a target character combination in the at least one expression.
2. The method of claim 1, wherein the target expression further includes a plurality of characters, and the calculating, by using a pre-trained calculation model corresponding to the operation symbol, the target result of the target expression to obtain an output result includes:
under the condition that the operation symbol is a preset symbol, taking a section of continuous characters positioned before the preset symbol in the target arithmetic expression as a first text, and taking a section of continuous characters positioned after the preset symbol in the target arithmetic expression as a second text;
and under the condition that the first text is larger than or equal to the second text, carrying out operation according to the first text and the second text to obtain the output result.
3. The method according to claim 2, wherein said calculating according to the first text and the second text to obtain the output result includes:
counting the number of characters included in the second text, and taking that number of leading characters of the first text as a target text;
determining a first character, so that a first operation result obtained after the second text and the first character are subjected to a first operation is smaller than or equal to the target text, wherein the first character is a positive integer;
calculating a first difference value between the target text and the first operation result, and obtaining the output result when the first difference value is smaller than the second text and the first text does not include other texts except the target text;
wherein the calculation in which the second text and the first character undergo the first operation to obtain the first operation result, and the difference between the target text and the first operation result is calculated, forms a first operation formula; the first character and the first difference constitute the target result, and the at least one expression includes the first operation formula.
4. The method according to claim 3, wherein, after the calculating of the first difference value between the target text and the first operation result, in the case that the first text comprises remaining text other than the target text, the method further comprises:
calculating the sum of a second character and the first difference value subjected to the first operation with a first preset character, to obtain a second operation result, wherein the second character is the first character of the remaining text;
determining a third character such that a third operation result, obtained after the second text and the third character are subjected to the first operation, is smaller than or equal to the second operation result; and
calculating a second difference value between the second operation result and the third operation result, until the second difference value is smaller than the second text and the remaining text comprises no characters other than the second character, so as to obtain the output result;
wherein subjecting the second text and the third character to the first operation to obtain the third operation result and calculating the difference value between the second operation result and the third operation result form a second expression, and the at least one expression further comprises the second expression; accordingly, the first character, the third character and the second difference value constitute the target result.
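Claim 4 then consumes the remaining text one character at a time. Under the same multiplication assumption, taking the first difference "after the first operation with the first preset character" is modelled below as multiplying by ten before adding the next character (the usual carry-down); this is an interpretation rather than wording fixed by the claim, and `next_steps` is an illustrative name.

```python
# Sketch of claim 4: bring down each remaining character onto the previous difference.
def next_steps(first_difference: int, remaining_text: str, second_text: str):
    divisor = int(second_text)
    difference = first_difference
    characters, expressions = [], []
    for second_character in remaining_text:
        second_operation_result = difference * 10 + int(second_character)
        third_character = max(q for q in range(10) if divisor * q <= second_operation_result)
        third_operation_result = divisor * third_character
        difference = second_operation_result - third_operation_result   # second difference value
        characters.append(third_character)
        expressions.append(
            f"{second_text} × {third_character} = {third_operation_result}; "
            f"{second_operation_result} - {third_operation_result} = {difference}")
    return characters, difference, expressions

# next_steps(23, "7", "25") -> ([9], 12, ['25 × 9 = 225; 237 - 225 = 12']),
# so 987 ÷ 25 yields the target result 39 remainder 12.
```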
5. The method according to claim 3, further comprising:
in the case that the first character does not exist and/or the target text is smaller than the second text, calculating the sum of a fourth character and the target text subjected to the first operation with the first preset character, to obtain a fourth operation result, and continuing to determine the first character by taking the fourth operation result as the target text;
wherein the fourth character refers to the remaining characters of the first text other than the target text.
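The fallback of claim 5, where no positive first character exists, simply grows the target text before retrying. The sketch below models "the first operation with the first preset character" as a shift by one decimal place and takes the fourth character to be the next of the remaining characters; both are assumptions, and `extend_target` is an illustrative name.

```python
# Sketch of claim 5: extend the target text when it is smaller than the second text.
def extend_target(target_text: str, remaining_text: str) -> tuple[str, str]:
    fourth_character, rest = remaining_text[0], remaining_text[1:]
    fourth_operation_result = str(int(target_text) * 10 + int(fourth_character))
    return fourth_operation_result, rest        # retry claim 3 with the extended target text

# extend_target("16", "52") -> ("165", "2")     # 16 < 25, so the next character is appended
```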
6. The method according to claim 2, wherein, in the case that the first text is smaller than the second text, the method comprises:
adding a second preset character to the first text until a resulting third text is greater than or equal to the second text;
performing an operation according to the third text and the second text to obtain an initial result; and
obtaining the output result according to the initial result and a third preset character.
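Claim 6 leaves the second and third preset characters unspecified; the sketch below assumes they are the digit '0' and the decimal point respectively, so that a first text smaller than the second text is padded with zeros, the leading quotient digit is produced, and the remainder is left for the steps of claims 3 to 5. The helper name `leading_decimal_step` is hypothetical.

```python
# Sketch of claim 6 assuming second preset character = '0', third preset character = '.'.
def leading_decimal_step(first_text: str, second_text: str):
    third_text, zeros = first_text, 0
    while int(third_text) < int(second_text):   # keep adding the second preset character
        third_text += "0"
        zeros += 1
    initial_result = int(third_text) // int(second_text)
    remainder = int(third_text) % int(second_text)
    # The third preset character places the initial result `zeros` positions after the point;
    # the remainder would be processed further exactly as in claims 3 to 5.
    output = f"0.{'0' * (zeros - 1)}{initial_result}"
    return output, remainder

# leading_decimal_step("7", "25") -> ('0.2', 20)   # 7 ÷ 25 starts 0.2…, with 20 carried on
```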
7. The method according to claim 1, wherein the calculation model is trained by:
obtaining a plurality of training samples, analyzing the plurality of training samples, and generating an analysis sample corresponding to each training sample, wherein the analysis sample comprises at least one expression sample and a calculation result of the training sample; and
training a pre-constructed calculation model with the training samples and the analysis samples to obtain the trained calculation model.
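The data side of claim 7 can be sketched as below: each training formula is solved step by step by an ordinary long-division routine standing in for the analysis, and the resulting analysis samples (expressions plus calculation result) would then be paired with the training samples to fine-tune the calculation model. Division formulas and the routine name `build_analysis_sample` are assumptions; the fine-tuning framework itself is not specified by the claim and is omitted here.

```python
# Sketch of analysis-sample generation for claim 7 (division formulas assumed).
def build_analysis_sample(first_text: str, second_text: str) -> str:
    divisor = int(second_text)
    expressions, quotient_digits, remainder = [], [], 0
    for ch in first_text:                       # plain digit-wise long division
        current = remainder * 10 + int(ch)
        digit = current // divisor
        product = divisor * digit
        remainder = current - product
        quotient_digits.append(str(digit))
        expressions.append(f"{second_text} × {digit} = {product}; {current} - {product} = {remainder}")
    target_result = f"{int(''.join(quotient_digits))} remainder {remainder}"
    return "\n".join(expressions + [target_result])

training_samples = ["987 ÷ 25", "1652 ÷ 25"]
analysis_samples = [build_analysis_sample(*s.split(" ÷ ")) for s in training_samples]
# Each (training sample, analysis sample) pair would then feed the pre-constructed model.
```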
8. A formula calculation device, comprising:
an acquisition unit, configured to acquire a target formula to be calculated, wherein the target formula comprises at least one operation symbol; and
a calculation unit, configured to calculate a target result of the target formula through a pre-trained calculation model corresponding to the operation symbol to obtain an output result;
wherein the output result comprises at least one expression and the target result, the at least one expression characterizes a calculation step of obtaining the target result, and the target result is obtained according to a target character combination in the at least one expression.
9. An electronic device, the electronic device comprising:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the formula calculation method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the formula calculation method according to any one of claims 1 to 7.
CN202311684685.1A 2023-12-08 2023-12-08 Formula calculation method, device, equipment and storage medium Pending CN117649669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311684685.1A CN117649669A (en) 2023-12-08 2023-12-08 Formula calculation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117649669A (en) 2024-03-05

Family

ID=90047618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311684685.1A Pending CN117649669A (en) 2023-12-08 2023-12-08 Formula calculation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117649669A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination