CN117994612A - Evaluation method, device, equipment and storage medium for algorithm accuracy - Google Patents


Info

Publication number
CN117994612A
CN117994612A · Application CN202410166814.6A
Authority
CN
China
Prior art keywords
recognition
virtual object
virtual
scene
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410166814.6A
Other languages
Chinese (zh)
Inventor
孔德权
赖祝平
李敏
郭小溪
罗军
尚进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202410166814.6A priority Critical patent/CN117994612A/en
Publication of CN117994612A publication Critical patent/CN117994612A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

One or more embodiments of the present specification provide a method, apparatus, device, and storage medium for evaluating the accuracy of an algorithm. In the method, a test animation is generated in advance from a first virtual scene and at least one virtual object in that scene; the test animation is acquired and input into a recognition algorithm so that the at least one virtual object can be recognized. After the recognition result that the recognition algorithm outputs for the test animation is received, a corresponding evaluation result is generated from the recognition result.

Description

Evaluation method, device, equipment and storage medium for algorithm accuracy
Technical Field
One or more embodiments of the present disclosure relate to the field of internet of things, and in particular, to an algorithm accuracy evaluation method, apparatus, device, and storage medium.
Background
In shopping scenarios such as unmanned supermarkets and unmanned vending machines, the goods a user purchases are recognized by a dedicated recognition algorithm so that self-service shopping and settlement can be completed. To guarantee accurate results, the recognition accuracy of the algorithm must be tested before it is put into use, so that only an algorithm that passes the test goes into service.
To carry out such a test, the shopping process for a wide range of goods is usually simulated in a real shopping scene to verify whether the algorithm's recognition results are accurate. This testing process is not only time-consuming and laborious but also requires a large number of real goods, which makes it costly.
Disclosure of Invention
In order to reduce test cost and improve test efficiency, one or more embodiments of the present disclosure provide an algorithm accuracy evaluation method, apparatus, device, and storage medium.
In a first aspect, one or more embodiments of the present disclosure provide a method for evaluating the accuracy of an algorithm, including: acquiring a test animation corresponding to a first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene; inputting the test animation into a recognition algorithm to recognize the at least one virtual object; receiving a recognition result output by the recognition algorithm for the test animation; and generating a corresponding evaluation result according to the recognition result.
In a possible manner, obtaining a test animation corresponding to a first virtual scene includes: constructing at least one corresponding virtual object according to at least one entity object in the first physical scene; rendering to obtain a corresponding first virtual scene according to the first physical scene; rendering the at least one virtual object into the first virtual scene; and generating a corresponding test animation aiming at the rendered first virtual scene.
In one possible manner, rendering a first virtual object into the first virtual scene includes: rendering, in the first virtual scene, at least one appearance attribute and at least one spatial attribute for a first virtual object; the first virtual object is any one of the at least one virtual object, and the at least one spatial attribute is used for reflecting the spatial position of the first virtual object in the first virtual scene.
In a possible manner, rendering at least one appearance attribute and at least one spatial attribute on a first virtual object in the first virtual scene includes: determining a plurality of key positions corresponding to the first virtual object in the first virtual scene; and, at each key position, rendering at least one appearance attribute and at least one spatial attribute on the first virtual object.
In a possible manner, for the rendered first virtual scene, generating a corresponding test animation includes: and generating a test animation corresponding to each virtual object in the first virtual scene according to the multiple key positions of each virtual object after rendering.
In a possible manner, generating a corresponding evaluation result according to the identification result includes: determining the recognition accuracy of the recognition algorithm according to the recognition result; and generating a corresponding evaluation result according to the identification accuracy.
In a possible manner, determining the recognition accuracy of the recognition algorithm according to the recognition result includes: according to the identification result, determining the identification type corresponding to the at least one virtual object after being identified; and determining the recognition accuracy of the recognition algorithm according to the real type and the recognition type respectively corresponding to the at least one virtual object.
In a possible manner, generating a corresponding evaluation result according to the identification accuracy includes: if the recognition algorithm is to recognize the first virtual scene for the first time, generating a corresponding evaluation result according to the recognition accuracy; and if the recognition algorithm does not recognize the first virtual scene for the first time, generating a corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the recognition process.
In a possible manner, according to the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process, a corresponding evaluation result is generated, including: determining an error between the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the recognition process; if the error is greater than a preset difference value, determining whether the at least one virtual object comprises a second virtual object of a new type; and if the at least one virtual object comprises the second virtual object, generating an evaluation result of the recognition algorithm for recognizing the second virtual object according to the error and at least one appearance attribute and at least one spatial attribute of the second virtual object at each key position.
In one possible manner, the method further comprises: if the second virtual object is not included in the at least one virtual object, determining a third virtual object which is wrong in recognition of the at least one virtual object according to the recognition result; and generating an evaluation result of the recognition algorithm for recognizing the third virtual object according to the error, at least one appearance attribute and at least one spatial attribute of the third virtual object at each key position.
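The branching logic of the two claims above (drift exceeding a preset difference, then checking for a new-type "second virtual object" before falling back to misrecognized "third virtual objects") can be sketched as follows. This is an illustrative reading of the claims, not an implementation from the patent; all names are assumptions.

```python
def diagnose_drift(objects, recognized, known_types, error, max_error):
    """Attribute an accuracy drift to its likely cause.

    objects:     {obj_id: real_type} for the virtual objects in the scene
    recognized:  {obj_id: recognized_type} from the recognition algorithm
    known_types: set of types seen in previous recognitions of this scene
    error:       |historical accuracy - current accuracy|
    max_error:   the preset difference value from the claims
    """
    if error <= max_error:
        return {"ok": True}
    # A "second virtual object": an object whose type is new to this scene.
    new_types = sorted({t for t in objects.values() if t not in known_types})
    if new_types:
        return {"ok": False, "cause": "new_type", "types": new_types}
    # Otherwise locate the "third virtual objects" that were misrecognized.
    wrong = sorted(o for o, t in objects.items() if recognized.get(o) != t)
    return {"ok": False, "cause": "misrecognized", "objects": wrong}
```

In a full evaluation, the returned cause would be combined with the appearance and spatial attributes of the flagged objects at each key position, as the claims describe.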
In a second aspect, one or more embodiments of the present disclosure further provide an apparatus for evaluating the accuracy of an algorithm, including: an acquisition module, configured to acquire a test animation corresponding to a first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene; an input module, configured to input the test animation into a recognition algorithm to recognize the at least one virtual object; a receiving module, configured to receive a recognition result output by the recognition algorithm for the test animation; and a processing module, configured to generate a corresponding evaluation result according to the recognition result.
In a possible manner, the obtaining module obtains a test animation corresponding to the first virtual scene, for: acquiring the generated test animation from the processing module; the processing module is further configured to construct at least one corresponding virtual object according to at least one physical object in the first physical scene; rendering to obtain a corresponding first virtual scene according to the first physical scene; rendering the at least one virtual object into the first virtual scene; and generating a corresponding test animation aiming at the rendered first virtual scene.
In a possible manner, the processing module renders a first virtual object into the first virtual scene for: rendering, in the first virtual scene, at least one appearance attribute and at least one spatial attribute for a first virtual object; the first virtual object is any one of the at least one virtual object, and the at least one spatial attribute is used for reflecting the spatial position of the first virtual object in the first virtual scene.
In a possible manner, when the processing module renders at least one appearance attribute and at least one spatial attribute on a first virtual object in the first virtual scene, it is configured to: determine a plurality of key positions corresponding to the first virtual object in the first virtual scene; and, at each key position, render at least one appearance attribute and at least one spatial attribute on the first virtual object.
In a possible manner, the processing module generates a corresponding test animation for the rendered first virtual scene, including: and generating a test animation corresponding to each virtual object in the first virtual scene according to the multiple key positions of each virtual object after rendering.
In a possible manner, the processing module generates a corresponding evaluation result according to the identification result, and is used for: determining the recognition accuracy of the recognition algorithm according to the recognition result; and generating a corresponding evaluation result according to the identification accuracy.
In a possible manner, the processing module determines, according to the identification result, an identification accuracy of the identification algorithm, and is configured to: according to the identification result, determining the identification type corresponding to the at least one virtual object after being identified; and determining the recognition accuracy of the recognition algorithm according to the real type and the recognition type respectively corresponding to the at least one virtual object.
In a possible manner, the processing module generates a corresponding evaluation result according to the identification accuracy, and the evaluation result is used for: if the recognition algorithm is to recognize the first virtual scene for the first time, generating a corresponding evaluation result according to the recognition accuracy; and if the recognition algorithm does not recognize the first virtual scene for the first time, generating a corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the recognition process.
In a possible manner, the processing module generates a corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the recognition process, and the evaluation result is used for: determining an error between the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the recognition process; if the error is greater than a preset difference value, determining whether the at least one virtual object comprises a second virtual object of a new type; and if the at least one virtual object comprises the second virtual object, generating an evaluation result of the recognition algorithm for recognizing the second virtual object according to the error and at least one appearance attribute and at least one spatial attribute of the second virtual object at each key position.
In a possible manner, the processing module is further configured to: if the second virtual object is not included in the at least one virtual object, determining a third virtual object which is wrong in recognition of the at least one virtual object according to the recognition result; and generating an evaluation result of the recognition algorithm for recognizing the third virtual object according to the error, at least one appearance attribute and at least one spatial attribute of the third virtual object at each key position.
In a third aspect, one or more embodiments of the present specification also provide an electronic device including a memory and a processor; the memory is used for storing a computer program product; the processor is configured to execute the computer program product stored in the memory, and when the computer program product is executed, implement the method for evaluating the accuracy of the algorithm of the first aspect.
In a fourth aspect, one or more embodiments of the present specification further provide a computer readable storage medium storing computer program instructions that, when executed, implement the method of evaluating algorithm accuracy of the first aspect described above.
In summary, in the method for evaluating the accuracy of an algorithm provided in one or more embodiments of the present disclosure, a first virtual scene is rendered with the same environmental information as a first physical scene, at least one virtual object is constructed to correspond to at least one physical object in the first physical scene, and the virtual objects are rendered into the first virtual scene with different appearance attributes and different spatial attributes, so that the motion of each physical object in the first physical scene can be simulated and a corresponding test animation generated. On this basis, inputting the test animation into the recognition algorithm simulates the process by which the algorithm recognizes each physical object in the first physical scene, so the recognition accuracy the algorithm would achieve in the first physical scene can be evaluated from the result it outputs. In this way, a large number of input objects can easily be provided to the recognition algorithm without resorting to real physical scenes or physical objects, the recognition accuracy can be evaluated from the algorithm's output, and the test cost is effectively reduced while test efficiency is improved.
Drawings
To illustrate the technical solutions of one or more embodiments of the present description more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are merely some embodiments of the present description; other drawings can be derived from them by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a flow diagram of a method for evaluating algorithm accuracy provided by one or more embodiments of the present disclosure;
FIG. 2 is a flow diagram of another method of evaluating algorithm accuracy provided by one or more embodiments of the present disclosure;
FIG. 3 is a block diagram of an apparatus for evaluating algorithm accuracy provided by one or more embodiments of the present disclosure;
fig. 4 is a block diagram of an electronic device according to one or more embodiments of the present disclosure.
Detailed Description
One or more embodiments of the present specification are described in further detail below with reference to the drawings and examples. Features and advantages of one or more embodiments of the present description will become apparent from the description.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, the technical features mentioned in the different implementations of one or more embodiments of the present specification described below may be combined with each other as long as they do not conflict with each other.
In order to facilitate understanding, an application scenario of the technical solution provided in one or more embodiments of the present disclosure is first described below.
The field of internet of things (Internet of Things, IoT) technology often involves intelligent recognition, positioning, tracking, and supervision. For example, in an unmanned supermarket, or in a shopping scenario that uses an intelligent container, a dedicated recognition algorithm is needed to recognize the goods a customer purchases so that settlement can take place. In practice, if an algorithm with poor recognition capability is put into use in these scenarios, the normal vending flow is disrupted and customers may suffer economic loss. The accuracy of the algorithm's recognition results is therefore subject to high requirements, and the algorithm must pass strict tests before being put into use, so that only an algorithm whose recognition capability meets the usage requirements goes into service.
In the traditional testing mode, testers exercise the recognition algorithm in the corresponding physical scene by simulating a customer's shopping process. This is inefficient and labor-intensive, and a large number of real goods must be prepared for the real shopping scene during the test stage, which is costly. To improve test efficiency and reduce cost, one or more embodiments of the present disclosure provide a method, apparatus, device, and storage medium for evaluating algorithm accuracy that can supply a large number of recognition objects to the algorithm without using a real shopping scene or real goods, and can evaluate the accuracy of the algorithm's recognition results. The resulting evaluation serves as a basis for adjusting and improving the algorithm, effectively reducing cost and improving test efficiency.
It should be noted that the shopping scenario is used only as an example; the application is not limited thereto. The implementation principles of the method, apparatus, device, and storage medium for evaluating algorithm accuracy provided in one or more embodiments of the present disclosure apply to any internet-of-things test scenario for an algorithm that recognizes physical objects, and the details are not repeated here.
Methods, apparatuses, devices and storage media for evaluating algorithm accuracy provided by one or more embodiments of the present specification are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a method for evaluating algorithm accuracy according to one or more embodiments of the present disclosure, where, as shown in FIG. 1, the method includes:
S102, acquiring a test animation corresponding to a first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene;
S104, inputting the test animation into a recognition algorithm to recognize the at least one virtual object;
S106, receiving a recognition result output by the recognition algorithm for the test animation;
S108, generating a corresponding evaluation result according to the recognition result.
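The four steps above can be sketched as a small pipeline in which the recognition algorithm is treated as an opaque callable, as the patent does. This is an illustrative sketch; the function and parameter names are assumptions, not APIs from the patent.

```python
def run_evaluation(get_test_animation, recognition_algorithm, make_evaluation):
    """Run one evaluation pass over a pre-generated test animation."""
    animation = get_test_animation()            # S102: acquire the test animation
    result = recognition_algorithm(animation)   # S104/S106: recognize and collect the output
    return make_evaluation(result)              # S108: generate the evaluation result

# Toy usage: a one-frame "animation" and a stub recognizer.
report = run_evaluation(
    lambda: ["frame-0"],
    lambda animation: {"obj-1": "canned cola"},
    lambda result: {"n_recognized": len(result)},
)
```

The point of the decomposition is that steps S102 and S104/S106 may run on different devices, as the embodiments below discuss; only S108 has to run on the evaluation device.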
For convenience of explanation, one or more embodiments of the present disclosure take an evaluation device as the execution body of the above method. The specific form and type of the evaluation device are not limited: it may be an independent computer device, a system server, any server in a server cluster, or the like, selected according to actual needs.
To obtain recognition objects for the recognition algorithm without resorting to a real physical scene and the physical objects in it, one or more embodiments of the present disclosure provide a virtual scene corresponding to the real physical scene and virtual objects corresponding to the physical objects in that scene. A test animation is then constructed according to the motion of the physical objects in the physical scene and used as the recognition object of the algorithm. The type of virtual scene constructed, and the kinds and number of virtual objects in it, may vary with the actual test requirements and are not limited here. One or more embodiments of the present disclosure take a first virtual scene and at least one virtual object in it as an example.
On this basis, the evaluation device may first acquire the test animation corresponding to the first virtual scene, where the test animation includes the first virtual scene and at least one virtual object in it. The acquired test animation is then input into the recognition algorithm so that the at least one virtual object can be recognized. Finally, the recognition result output by the algorithm for the test animation is received, and a corresponding evaluation result is generated from it; this evaluation result can serve as a basis for adjusting and improving the recognition algorithm.
It should be noted that one or more embodiments of the present disclosure do not limit the processing logic of the recognition algorithm under test; that logic may differ according to the test requirements for the first virtual scene. Likewise, the relationship between the execution body of the recognition algorithm and the evaluation device is not limited: the algorithm may run on the evaluation device itself or on another device communicatively connected to it, as actual requirements dictate.
It should further be noted that one or more embodiments of the present disclosure do not limit which device generates the test animation: it may be generated directly by the evaluation device or by another device communicatively connected to it, depending on the actual system structure. If the evaluation device generates the test animation itself, it can obtain the generated animation locally; if another device generates it, the evaluation device can obtain the previously generated animation from that device.
One or more embodiments of the present specification take the case in which the evaluation device directly generates the test animation as an example. Optionally, when generating the test animation corresponding to the first virtual scene, the evaluation device may first determine, from the first physical scene, at least one physical object it contains. It then constructs at least one corresponding virtual object from those physical objects and renders a corresponding first virtual scene from the first physical scene using virtual rendering. Finally, the constructed virtual objects are rendered into the first virtual scene, and a corresponding test animation is generated for the rendered scene.
One or more embodiments of the present description do not limit the specific manner in which the evaluation device builds the at least one virtual object. Alternatively, the virtual objects may be built by 3D modeling, for example with tools including but not limited to 3D Studio Max, Maya, Rhinoceros, and ZBrush. Likewise, the specific manner in which the evaluation device renders the first virtual scene and renders the virtual objects into it is not limited; optionally, they may be rendered using development tools including but not limited to Unity, Unreal Engine (UE), and Cocos. On this basis, when the test animation is generated, the development tool's built-in animation mechanism can be used directly to produce the test animation for the rendered first virtual scene.
Optionally, when the first virtual scene is rendered, the environmental layout, illumination intensity, relative positions of the physical objects, occlusion angles, and so on of the first physical scene can be determined from its real scene-environment information, and a first virtual scene with identical virtual environment information is rendered on that basis. Further, when the at least one virtual object is rendered into the first virtual scene, at least one appearance attribute and at least one spatial attribute may be rendered for any one of them (a first virtual object), where the at least one spatial attribute reflects the spatial position of the first virtual object in the first virtual scene.
For example, in a self-service shopping scenario, the first virtual object may correspond to any kind of real commodity. Assuming the first virtual object corresponds to "canned cola", the appearance attributes rendered on it may include shape, size, color, pattern, barcode, and so on. Because a camera is usually installed at a designated location in a real shopping scene to recognize the goods a customer purchases, the first virtual scene can be rendered from the perspective of a virtual camera, and the spatial attributes of the first virtual object can be rendered according to virtual environment information such as its distance, angle, illumination intensity, and degree of occlusion relative to that camera.
Further optionally, when rendering at least one appearance attribute and at least one spatial attribute on the first virtual object in the first virtual scene, a plurality of key positions corresponding to the first virtual object in the first virtual scene may be determined first; at each key position, at least one appearance attribute and at least one spatial attribute are then rendered on the object. Once this rendering has been performed for every virtual object in the first virtual scene, a test animation corresponding to each virtual object can be generated according to the rendered key positions of each object.
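One way to picture this is a data model in which each virtual object carries its appearance attributes plus one spatial attribute per key position, and the test animation is the sequence of key-position snapshots. This is a minimal sketch under assumed names (the patent does not prescribe a data structure); the occlusion field is one illustrative spatial attribute.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialAttribute:
    x: float                    # position relative to the virtual camera
    y: float
    z: float
    occlusion: float = 0.0      # fraction of the object occluded, 0..1

@dataclass
class VirtualObject:
    true_type: str              # real type of the corresponding physical object
    appearance: dict            # shape, size, color, pattern, barcode, ...
    key_positions: list = field(default_factory=list)  # [SpatialAttribute, ...]

def animation_frames(obj: VirtualObject):
    """One frame per key position, pairing appearance with the spatial state."""
    return [{"appearance": obj.appearance, "spatial": sp}
            for sp in obj.key_positions]

cola = VirtualObject("canned cola", {"shape": "cylinder", "color": "red"},
                     [SpatialAttribute(0.1, 0.0, 0.5),
                      SpatialAttribute(0.2, 0.1, 0.4, occlusion=0.3)])
```

Concatenating (or separately exporting) the frames of every object in the scene then yields the test animation(s) discussed next.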
It should be noted that one or more embodiments of the present disclosure do not limit the specific manner of generating the test animation for each virtual object in the first virtual scene. A separate test animation may be generated for each virtual object, i.e., a plurality of test animations for the first virtual scene; or a single test animation may be generated for the first virtual scene that contains, in sequence, an animation segment for each virtual object. The specific manner can be chosen according to actual requirements.
On this basis, once the test animation corresponding to the first virtual scene is obtained, the evaluation device may input it into the recognition algorithm that is to be applied to the first physical scene, so that the algorithm recognizes each virtual object the animation contains. After receiving the recognition result output by the algorithm for the test animation, the evaluation device may generate a corresponding evaluation result, thereby determining whether the algorithm can accurately recognize each virtual object in the animation and, in turn, whether it can be applied to the first physical scene.
It should be noted that one or more embodiments of the present disclosure do not limit the specific content of the evaluation result; it may differ with the test requirements. For example, if the requirement is only to know the recognition accuracy of the algorithm, the evaluation result need only give that accuracy. If the requirement is also to identify the virtual objects that were misrecognized, the evaluation result should identify them. If, in addition, the requirement is to determine the spatial attributes of the correctly and incorrectly recognized virtual objects, the evaluation result should also include those attributes, and so on. The specific content is not enumerated here; an evaluation result with the corresponding content can be generated according to the specific test requirements and used to improve the recognition algorithm in a targeted way.
Optionally, when generating a corresponding evaluation result according to the recognition result, the evaluation device may first determine the recognition accuracy of the recognition algorithm according to the recognition result, and then generate the evaluation result according to that accuracy. Further optionally, in order to distinguish the virtual objects, the real type of each physical object in the first physical scene may be taken as the real type of the virtual object constructed for that physical object. Based on the above, when determining the recognition accuracy, the evaluation device may determine, according to the recognition result, the recognition type corresponding to each of the at least one recognized virtual object, and then determine the recognition accuracy from the real type and the recognition type of each virtual object. For example, if 100 virtual objects are constructed and, after recognition by the recognition algorithm, the recognition types of 65 virtual objects match their real types while those of the other 35 do not, the recognition accuracy of the recognition algorithm over the 100 virtual objects in the first virtual scene is determined to be 65%.
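The accuracy computation described above can be sketched in Python as follows; the function name and the example object types are illustrative, not part of the embodiments:

```python
def recognition_accuracy(real_types, recognized_types):
    """Fraction of virtual objects whose recognition type matches their real type."""
    if len(real_types) != len(recognized_types):
        raise ValueError("each virtual object needs exactly one recognition type")
    correct = sum(1 for real, rec in zip(real_types, recognized_types) if real == rec)
    return correct / len(real_types)

# The 100-object example from the text: 65 matches and 35 mismatches give 65%.
real = ["vehicle"] * 65 + ["pedestrian"] * 35
recognized = ["vehicle"] * 65 + ["bicycle"] * 35
accuracy = recognition_accuracy(real, recognized)  # 0.65
```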
Further optionally, when generating the evaluation result according to the recognition accuracy, the evaluation device may also take into account how many times the recognition algorithm has recognized the first virtual scene. For example, if the recognition algorithm is recognizing the first virtual scene for the first time, a corresponding evaluation result is generated directly according to the recognition accuracy; if it is not the first time, the evaluation result is generated according to both the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process.
One or more embodiments of the present disclosure do not limit the specific manner in which the evaluation device generates the evaluation result in each case. Optionally, if the recognition algorithm is recognizing the first virtual scene for the first time, an accuracy threshold for measuring the recognition capability of the recognition algorithm may be set. Based on the above, the evaluation device may generate the evaluation result according to the relationship between the recognition accuracy of the present recognition process and the accuracy threshold. For example, with the accuracy threshold set to 90%: if the recognition accuracy of the recognition algorithm on the first virtual scene is greater than or equal to 90%, the recognition capability of the recognition algorithm meets the use requirement; if it is less than 90%, the recognition capability does not meet the use requirement. The evaluation device may then generate a corresponding evaluation result, which may give an evaluation suggestion on the recognition capability of the recognition algorithm based on the relationship between the recognition accuracy and the accuracy threshold.
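As a minimal sketch of the first-recognition case, assuming the 90% threshold from the example above (the function and key names are illustrative):

```python
def evaluate_first_recognition(recognition_accuracy, accuracy_threshold=0.90):
    """First recognition of the scene: compare this run's accuracy to a fixed threshold."""
    meets = recognition_accuracy >= accuracy_threshold
    suggestion = ("recognition capability meets the use requirement" if meets
                  else "recognition capability does not meet the use requirement")
    return {"meets_use_requirement": meets, "suggestion": suggestion}
```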
It should be noted that, in an actual test scenario, the specific value of the accuracy threshold is not limited to this. Optionally, an adaptive accuracy threshold may be set according to how demanding the first physical scene is on the recognition capability of the recognition algorithm: if the first physical scene places a higher requirement on the recognition capability, a larger accuracy threshold may be set; if the requirement is lower, a smaller accuracy threshold may be set. The specific value may be determined according to the use requirement of the recognition algorithm.
Accordingly, if the recognition algorithm is not recognizing the first virtual scene for the first time, a preset difference value may be set for the error between the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process. Based on the above, when generating the evaluation result, the evaluation device may first determine this error, and then generate the corresponding evaluation result according to the relationship between the error and the preset difference value. One or more embodiments of the present disclosure do not limit the specific value of the preset difference value; optionally, it may be set to 10%, although in an actual test scenario the value is not limited to this and may be determined according to the use requirement of the recognition algorithm.
It should be noted that the evaluation method provided in one or more embodiments of the present disclosure may be performed only once, or may be performed repeatedly as the recognition algorithm is improved. For example, the evaluation method may be performed once to generate an evaluation result that serves as a basis for improving the recognition algorithm. For another example, if the evaluation result generated by the evaluation device indicates a problem with the recognition capability of the recognition algorithm, the evaluation method may be performed again after the recognition algorithm is improved, so that a new evaluation result is generated and the improvement effect can be judged from it; and so on, until the recognition capability of the recognition algorithm is determined to meet the use requirement.
Based on the above, if the historical recognition accuracy of the recognition algorithm on the first virtual scene meets the use requirement, whether the present recognition process also meets the use requirement can be determined from the relationship between the preset difference value and the error between the historical recognition accuracy and the recognition accuracy of the present recognition process. If the historical recognition accuracy does not meet the use requirement, the same relationship can be used to determine whether the recognition accuracy of the present recognition process improves on the historical recognition accuracy, so as to judge the improvement effect of the recognition algorithm.
One or more embodiments of the present disclosure do not limit which historical recognition accuracy is compared with the recognition accuracy of the present recognition process. Optionally, if the historical recognition accuracy of the recognition algorithm on the first virtual scene meets the use requirement, any one of the historical recognition accuracies may be compared with the present recognition accuracy, or the average of multiple historical recognition accuracies may be used, without limitation. Correspondingly, if the historical recognition accuracy does not meet the use requirement, the historical recognition accuracy of the most recent historical recognition process may be compared with the present recognition accuracy; or the average of multiple historical recognition accuracies may be used; or the maximum of multiple historical recognition accuracies may be used, without limitation.
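One way to realize the baseline choices listed above is sketched below; the text permits several alternatives per case, so the particular strategy picked in each branch is an assumption:

```python
def historical_baseline(history, met_use_requirement):
    """Pick which historical recognition accuracy to compare against the present run.

    history: past recognition accuracies in chronological order (non-empty list).
    met_use_requirement: whether the historical runs met the use requirement.
    """
    if met_use_requirement:
        # Any single value or the average is acceptable; this sketch uses the average.
        return sum(history) / len(history)
    # Otherwise the most recent run, the average, or the maximum may be used;
    # this sketch takes the most recent run.
    return history[-1]
```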
Assume that the historical recognition accuracy of the recognition algorithm on the first virtual scene meets the use requirement and that the preset difference value is set to 10%. If the error between the historical recognition accuracy and the recognition accuracy of the present recognition process is less than or equal to 10%, it is determined that the recognition capability of the recognition algorithm meets the use requirement; if the error is greater than 10%, it is determined that the recognition capability does not meet the use requirement. Further, a corresponding evaluation result can be generated according to the magnitude relationship between the error and the preset difference value, and the evaluation result can give a corresponding evaluation suggestion on the recognition capability of the recognition algorithm based on that relationship.
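The non-first-recognition comparison in this example can be sketched as follows; the 10% preset difference is the example value from the text:

```python
def evaluate_repeat_recognition(historical_accuracy, current_accuracy,
                                preset_difference=0.10):
    """Non-first recognition: judge by the error against the preset difference value."""
    error = abs(historical_accuracy - current_accuracy)
    return {"error": error, "meets_use_requirement": error <= preset_difference}
```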
Further optionally, when generating the evaluation result, the evaluation device may determine the cause of the error between the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process. For example, a new type of virtual object that the recognition algorithm has never recognized before may be recognized inaccurately, so an error larger than the preset difference value may arise because the present recognition process has no historical recognition result to refer to. Based on this, when generating the evaluation result, the evaluation device may determine, from the types of the at least one virtual object rendered in the first virtual scene this time and the types of historically rendered virtual objects, whether the at least one virtual object includes a second virtual object of a new type. If the at least one virtual object includes the second virtual object, an evaluation result on the recognition of the second virtual object by the recognition algorithm is generated according to the error and the at least one appearance attribute and at least one spatial attribute of the second virtual object at each key position. Optionally, the evaluation result may give the type of the second virtual object and its at least one appearance attribute and at least one spatial attribute at each key position, so that when the recognition algorithm is subsequently improved, its capability to recognize the type, appearance attributes, and spatial positions of the second virtual object can be improved.
Accordingly, if it is determined that the at least one virtual object does not include a second virtual object, the error being greater than the preset difference value is not caused by a newly added type of virtual object; it may instead be caused by historically recognized virtual objects whose appearance attributes, spatial positions, and the like have changed in the present rendering of the first virtual scene. Based on this, when generating the evaluation result, the evaluation device may determine, from the recognition result of the recognition algorithm and the types of the at least one virtual object rendered in the first virtual scene, a third virtual object among the at least one virtual object that was recognized incorrectly. Further, an evaluation result on the recognition of the third virtual object by the recognition algorithm is generated according to the error and the at least one appearance attribute and at least one spatial attribute of the third virtual object at each key position. Optionally, the evaluation result may give the type of the third virtual object and its at least one appearance attribute and at least one spatial attribute at each key position, so that when the recognition algorithm is subsequently improved, its capability to recognize the type, appearance attributes, and spatial positions of the third virtual object can be improved.
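The error-attribution step across these two paragraphs — first checking for new-type ("second") virtual objects, otherwise collecting the misrecognized known ("third") ones — can be sketched as follows; all identifiers are hypothetical:

```python
def locate_error_source(rendered, historical_types, recognized):
    """Attribute an enlarged error to new-type objects or misrecognized known objects.

    rendered: {object_id: real_type} for the present rendering pass.
    historical_types: set of types seen in historical renderings.
    recognized: {object_id: recognition_type} returned by the algorithm.
    """
    # Second virtual objects: types never rendered in any historical pass.
    second = [oid for oid, t in rendered.items() if t not in historical_types]
    if second:
        return "second_virtual_objects", second
    # Third virtual objects: known types whose recognition type is wrong.
    third = [oid for oid, t in rendered.items() if recognized.get(oid) != t]
    return "third_virtual_objects", third
```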
It should be noted that one or more embodiments of this specification do not limit how the recognition algorithm is adapted to various physical scenes. Optionally, the recognition algorithm may be a general-purpose recognition algorithm applicable to various physical scenes, or it may be a dedicated recognition algorithm applicable only to the first physical scene, as determined by actual use requirements; the foregoing embodiments merely take the application of the recognition algorithm to the first physical scene as an example.
Based on the foregoing, the overall flow of the method for evaluating algorithm accuracy provided in one or more embodiments of the present specification is briefly described below with reference to the accompanying drawings.
Fig. 2 is a schematic diagram of the overall flow of the above evaluation method. Still taking an evaluation device executing the evaluation method as an example, as shown in fig. 2, the evaluation device may render a corresponding first virtual scene for a first physical scene in a real environment; for at least one physical object in the first physical scene, the evaluation device may construct at least one corresponding virtual object. Based on this, each virtual object is rendered into the first virtual scene, with at least one appearance attribute and at least one spatial attribute rendered for each virtual object in the first virtual scene. Further, a corresponding test animation is generated for the rendered first virtual scene, the test animation is input into the recognition algorithm, and the recognition result returned by the recognition algorithm is received; the recognition accuracy of the recognition algorithm can then be determined according to the recognition result, so as to generate a corresponding evaluation result. The recognition algorithm is intended for use in the first physical scene to recognize the at least one physical object in the first physical scene.
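The flow of fig. 2 can be condensed into a toy driver like the following; the stand-in test animation and the stub recognizer are placeholders for the rendering engine and the algorithm under test, not part of the embodiments:

```python
def run_evaluation(virtual_objects, recognize):
    """virtual_objects: {object_id: real_type} rendered into the first virtual scene.
    recognize: callable mapping a test animation to {object_id: recognition_type}."""
    # Stand-in for the rendered first virtual scene and its test animation.
    test_animation = {"scene": "first_virtual_scene", "objects": virtual_objects}
    recognition_result = recognize(test_animation)
    correct = sum(1 for oid, real in virtual_objects.items()
                  if recognition_result.get(oid) == real)
    errors = [oid for oid, real in virtual_objects.items()
              if recognition_result.get(oid) != real]
    return {"recognition_accuracy": correct / len(virtual_objects),
            "misrecognized": errors}

# A stub recognizer that echoes the real types, i.e. a perfect algorithm.
perfect = lambda animation: dict(animation["objects"])
```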
It should be noted that, the detailed execution process of each step in the flow shown in fig. 2 may be referred to the description of the corresponding portion in the above embodiment, and will not be repeated here.
In the method for evaluating algorithm accuracy provided in one or more embodiments of the present disclosure, a first virtual scene with the same environmental information is rendered for a first physical scene, at least one virtual object corresponding to the at least one physical object in the first physical scene is constructed, and the at least one virtual object is rendered into the first virtual scene with different appearance attributes and different spatial attributes. In this way, the motion of each physical object in the first physical scene can be simulated, so as to generate a corresponding test animation. Inputting the test animation into the recognition algorithm then simulates the process of the recognition algorithm recognizing each physical object in the first physical scene, so that the recognition accuracy of the recognition algorithm when used in the first physical scene can be evaluated from the recognition result it outputs. By this means, a large number of input objects can easily be provided to the recognition algorithm without resorting to real physical scenes and physical objects, allowing the recognition accuracy to be evaluated from the algorithm's output; this effectively reduces test cost and improves test efficiency.
It should be understood that the foregoing embodiments are merely examples and may be modified in actual implementation. Those skilled in the art will appreciate that modifications of the foregoing embodiments made without inventive effort fall within the protection scope of one or more embodiments of the present disclosure, and such modifications are not repeated herein.
Based on the same inventive concept, one or more embodiments of the present disclosure further provide an apparatus for evaluating algorithm accuracy. Since the principle by which this apparatus solves the problem is similar to that of the foregoing method for evaluating algorithm accuracy, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are omitted.
Referring to fig. 3, fig. 3 is a block diagram of an apparatus for evaluating accuracy of an algorithm according to one or more embodiments of the present disclosure. As shown in fig. 3, the evaluation device 300 for algorithm accuracy may include: an acquisition module 301, an input module 302, a receiving module 303 and a processing module 304. Wherein,
The obtaining module 301 is configured to obtain a test animation corresponding to the first virtual scene, where the test animation includes the first virtual scene and at least one virtual object therein; the input module 302 is configured to input a test animation into an identification algorithm to identify at least one virtual object; the receiving module 303 is configured to receive a recognition result output by the recognition algorithm for the test animation; the processing module 304 is configured to generate a corresponding evaluation result according to the identification result.
In a possible manner, when obtaining the test animation corresponding to the first virtual scene, the obtaining module 301 is configured to obtain the generated test animation from the processing module 304. The processing module 304 is further configured to: construct at least one corresponding virtual object according to the at least one physical object in the first physical scene; render a corresponding first virtual scene according to the first physical scene; render the at least one virtual object into the first virtual scene; and generate a corresponding test animation for the rendered first virtual scene.
In a possible manner, when rendering a first virtual object into the first virtual scene, the processing module 304 is configured to: render, in the first virtual scene, at least one appearance attribute and at least one spatial attribute for the first virtual object; where the first virtual object is any one of the at least one virtual object, and the at least one spatial attribute reflects the spatial position of the first virtual object in the first virtual scene.
In a possible manner, when rendering the at least one appearance attribute and the at least one spatial attribute for the first virtual object in the first virtual scene, the processing module 304 is configured to: determine a plurality of key positions corresponding to the first virtual object in the first virtual scene; and render, at each key position, at least one appearance attribute and at least one spatial attribute for the first virtual object.
In a possible manner, when generating a corresponding test animation for the rendered first virtual scene, the processing module 304 is configured to: generate, according to the plurality of key positions of each rendered virtual object, a test animation corresponding to each virtual object in the first virtual scene.
In a possible manner, when generating a corresponding evaluation result according to the recognition result, the processing module 304 is configured to: determine the recognition accuracy of the recognition algorithm according to the recognition result; and generate a corresponding evaluation result according to the recognition accuracy.
In a possible manner, when determining the recognition accuracy of the recognition algorithm according to the recognition result, the processing module 304 is configured to: determine, according to the recognition result, the recognition type corresponding to each of the at least one recognized virtual object; and determine the recognition accuracy of the recognition algorithm according to the real type and the recognition type respectively corresponding to the at least one virtual object.
In a possible manner, when generating a corresponding evaluation result according to the recognition accuracy, the processing module 304 is configured to: if the recognition algorithm is recognizing the first virtual scene for the first time, generate a corresponding evaluation result according to the recognition accuracy; and if the recognition algorithm is not recognizing the first virtual scene for the first time, generate a corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process.
In a possible manner, when generating a corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process, the processing module 304 is configured to: determine the error between the historical recognition accuracy of the recognition algorithm on the first virtual scene and the recognition accuracy of the present recognition process; if the error is greater than the preset difference value, determine whether the at least one virtual object includes a second virtual object of a new type; and if the at least one virtual object includes the second virtual object, generate an evaluation result on the recognition of the second virtual object by the recognition algorithm according to the error and the at least one appearance attribute and at least one spatial attribute of the second virtual object at each key position.
In a possible manner, the processing module 304 is further configured to: if the at least one virtual object does not include the second virtual object, determine, according to the recognition result, a third virtual object among the at least one virtual object that was recognized incorrectly; and generate an evaluation result on the recognition of the third virtual object by the recognition algorithm according to the error and the at least one appearance attribute and at least one spatial attribute of the third virtual object at each key position.
Referring to fig. 4, fig. 4 is a block diagram of an electronic device according to one or more embodiments of the present disclosure. As shown in fig. 4, the electronic device 400 may include a processor 401 and a memory 402, where the memory 402 may be coupled to the processor 401. Notably, fig. 4 is exemplary; other types of structures may also be used, in addition to or in place of those shown, to implement telecommunication or other functions.
In a possible implementation, the functionality of the evaluation means 300 of the algorithm accuracy may be integrated into the processor 401. Wherein the processor 401 may be configured to perform the following operations:
acquiring a test animation corresponding to the first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene;
inputting the test animation into an identification algorithm to identify at least one virtual object;
receiving a recognition result output by a recognition algorithm aiming at the test animation;
And generating a corresponding evaluation result according to the identification result.
In another possible implementation, the apparatus 300 for evaluating algorithm accuracy may be configured separately from the processor 401; for example, the apparatus 300 may be configured as a chip connected to the processor 401, with the evaluation of algorithm accuracy implemented under the control of the processor 401.
Furthermore, in some alternative implementations, the electronic device 400 may further include: a communication module, an input unit, an audio processor, a display, a power supply, and the like. It is noted that the electronic device 400 need not include all of the components shown in fig. 4; in addition, the electronic device 400 may further include components not shown in fig. 4, for which reference may be made to the prior art.
In some alternative implementations, the processor 401, sometimes referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, with the processor 401 receiving inputs and controlling the operation of the various components of the electronic device 400.
The memory 402 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or another suitable device. It may store the above information related to the apparatus 300 for evaluating algorithm accuracy, as well as the programs for executing the related information, and the processor 401 may execute the programs stored in the memory 402 to implement information storage, processing, and the like.
The input unit may provide input to the processor 401. The input unit is for example a key or a touch input device. The power source may be used to provide power to the electronic device 400. The display can be used for displaying display objects such as images and characters. The display may be, for example, but not limited to, an LCD display.
The memory 402 may be a solid-state memory, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a SIM card, or the like. It may also be a memory that holds information even when powered down, and that can be selectively erased and provided with further data, an example of which is sometimes referred to as an EPROM or the like. The memory 402 may also be some other type of device. The memory 402 may include a buffer memory (sometimes referred to as a buffer), and may include an application/function storage section for storing application programs and function programs, or the flow for the processor 401 to execute the operations of the electronic device 400.
The memory 402 may also include a data storage section for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. A driver storage section of the memory 402 may include various drivers of the computer device for communication functions and/or for performing other functions of the computer device (e.g., a messaging application, an address book application, etc.).
The communication module is a transmitter/receiver that transmits and receives signals via an antenna. The communication module (transmitter/receiver) is coupled to the processor 401 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules, such as a cellular network module, a Bluetooth module, and/or a wireless local area network module, may be provided in the same computer device. The communication module (transmitter/receiver) is also coupled to a speaker and a microphone via the audio processor to provide audio output via the speaker and receive audio input from the microphone, thereby implementing ordinary telecommunication functions. The audio processor may include any suitable buffers, decoders, amplifiers, and so forth. In addition, the audio processor is coupled to the processor 401, so that sound can be recorded locally through the microphone and locally stored sound can be played through the speaker.
One or more embodiments of the present specification further provide a computer-readable storage medium capable of implementing all steps in the method for evaluating algorithm accuracy in the above embodiments, the computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements all steps in the method for evaluating algorithm accuracy in the above embodiments, for example, the processor implements the following steps when executing the computer program:
acquiring a test animation corresponding to the first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene;
inputting the test animation into an identification algorithm to identify at least one virtual object;
receiving a recognition result output by a recognition algorithm aiming at the test animation;
And generating a corresponding evaluation result according to the identification result.
Although one or more embodiments of the present specification provide the method operation steps described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the unique order of execution. When an actual apparatus or client product executes, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or figures (for example, in a parallel-processor or multi-threaded processing environment).
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, apparatus (system) or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to one or more embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made from one to another, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for the relevant details, reference may be made to the description of the method embodiments.
In this document, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. The specific meaning of the above terms in one or more embodiments of the present specification may be understood by those of ordinary skill in the art according to the specific circumstances.
It should be noted that, without conflict, one or more embodiments and features of the embodiments may be combined with each other. The one or more embodiments of the present specification are not limited to any single aspect, nor to any single embodiment, nor to any combination and/or permutation of these aspects and/or embodiments. Moreover, each aspect and/or embodiment of one or more embodiments of the present description may be utilized alone or in combination with one or more other aspects and/or embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of one or more embodiments of the present disclosure. Although one or more embodiments of the present disclosure have been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of one or more embodiments of the present disclosure, and are intended to be covered by the claims and the scope of the present disclosure.
The foregoing description of one or more embodiments of the present specification has been presented in conjunction with alternative embodiments, but these embodiments are merely exemplary and serve only as illustrations. On this basis, various substitutions and improvements may be made to one or more embodiments of the present specification, and all of them fall within the protection scope of one or more embodiments of the present specification.

Claims (22)

1. A method for evaluating accuracy of an algorithm, comprising:
acquiring a test animation corresponding to a first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene;
inputting the test animation into a recognition algorithm to recognize the at least one virtual object;
receiving a recognition result output by the recognition algorithm for the test animation;
and generating a corresponding evaluation result according to the recognition result.
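The four steps of claim 1 can be sketched as a small driver; this is an illustrative reading, not the patented implementation, and the names `evaluate_algorithm` and `make_evaluation` (and the dict-based animation format) are hypothetical.

```python
def evaluate_algorithm(test_animation, recognition_algorithm):
    """Feed a pre-generated test animation to a recognition algorithm
    and turn its output into an evaluation result (claim 1 sketch)."""
    # The animation already embeds the first virtual scene and its
    # virtual objects, so the ground truth is known by construction.
    recognition_result = recognition_algorithm(test_animation)
    # Score the recognition result against the known virtual objects.
    return make_evaluation(test_animation["objects"], recognition_result)

def make_evaluation(true_objects, recognized_objects):
    """Turn a recognition result into a minimal evaluation record."""
    correct = sum(1 for t, r in zip(true_objects, recognized_objects) if t == r)
    return {"accuracy": correct / len(true_objects) if true_objects else 0.0}
```

Because the virtual objects are placed into the scene by the evaluator itself, no manual labeling step is needed: the ground truth travels with the test animation.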
2. The method of claim 1, wherein obtaining a test animation corresponding to the first virtual scene comprises:
constructing at least one corresponding virtual object according to at least one physical object in a first physical scene;
rendering a corresponding first virtual scene according to the first physical scene;
rendering the at least one virtual object into the first virtual scene;
and generating a corresponding test animation for the rendered first virtual scene.
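Claim 2's construction path (mirror physical objects as virtual objects, place them in a first virtual scene, then emit a test animation) might be modeled minimally as follows; the `VirtualObject`/`VirtualScene` types and the `build_test_animation` helper are assumptions for illustration, standing in for a real rendering pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    true_type: str                       # known by construction; later used as ground truth
    appearance: dict = field(default_factory=dict)
    position: tuple = (0.0, 0.0, 0.0)    # spatial attribute within the scene

@dataclass
class VirtualScene:
    objects: list

def build_test_animation(physical_objects, num_frames=3):
    """Mirror each physical object as a virtual object, place the objects
    in a first virtual scene, and emit one frame per render pass."""
    scene = VirtualScene([VirtualObject(o["name"], o["type"]) for o in physical_objects])
    # Each "frame" here is just the scene tagged with a frame index;
    # a real renderer would rasterize the scene instead.
    return [{"frame": i, "scene": scene} for i in range(num_frames)]
```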
3. The method of claim 2, wherein rendering a first virtual object into the first virtual scene comprises:
rendering, in the first virtual scene, at least one appearance attribute and at least one spatial attribute for the first virtual object; wherein the first virtual object is any one of the at least one virtual object, and the at least one spatial attribute is used to reflect the spatial position of the first virtual object in the first virtual scene.
4. A method according to claim 3, wherein rendering the at least one appearance attribute and the at least one spatial attribute for the first virtual object in the first virtual scene comprises:
determining a plurality of key positions corresponding to the first virtual object in the first virtual scene;
and rendering, at each key position, at least one appearance attribute and at least one spatial attribute for the first virtual object.
5. The method of claim 4, wherein generating the corresponding test animation for the rendered first virtual scene comprises:
generating, for each virtual object in the first virtual scene, a corresponding test animation according to the plurality of rendered key positions of that virtual object.
6. The method of claim 5, wherein generating a corresponding evaluation result from the recognition result comprises:
determining the recognition accuracy of the recognition algorithm according to the recognition result;
and generating a corresponding evaluation result according to the recognition accuracy.
7. The method of claim 6, wherein determining the recognition accuracy of the recognition algorithm based on the recognition result comprises:
determining, according to the recognition result, a recognition type corresponding to each of the at least one virtual object after recognition;
and determining the recognition accuracy of the recognition algorithm according to the real type and the recognition type respectively corresponding to the at least one virtual object.
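The accuracy measure of claims 6-7 (compare the real type assigned at rendering time with the recognized type) reduces to a per-object match rate; a minimal sketch, assuming types are plain strings and the two lists are aligned per object:

```python
def recognition_accuracy(true_types, recognized_types):
    """Fraction of virtual objects whose recognized type matches the
    real type assigned at rendering time (claims 6-7 sketch)."""
    if not true_types:
        return 0.0
    matches = sum(1 for real, rec in zip(true_types, recognized_types) if real == rec)
    return matches / len(true_types)
```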
8. The method of claim 7, wherein generating a corresponding evaluation result based on the recognition accuracy comprises:
if the recognition algorithm recognizes the first virtual scene for the first time, generating a corresponding evaluation result according to the recognition accuracy;
and if the recognition algorithm does not recognize the first virtual scene for the first time, generating a corresponding evaluation result according to a historical recognition accuracy of the recognition algorithm for the first virtual scene and the recognition accuracy of the present recognition process.
9. The method of claim 8, wherein generating the corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm for the first virtual scene and the recognition accuracy of the present recognition process comprises:
determining an error between the historical recognition accuracy of the recognition algorithm for the first virtual scene and the recognition accuracy of the present recognition process;
if the error is greater than a preset difference threshold, determining whether the at least one virtual object comprises a second virtual object of a new type;
and if the at least one virtual object comprises the second virtual object, generating an evaluation result of the recognition algorithm for recognizing the second virtual object according to the error and at least one appearance attribute and at least one spatial attribute of the second virtual object at each key position.
10. The method as recited in claim 9, further comprising:
if the at least one virtual object does not comprise the second virtual object, determining, according to the recognition result, a third virtual object of the at least one virtual object that was recognized incorrectly;
and generating an evaluation result of the recognition algorithm for recognizing the third virtual object according to the error and at least one appearance attribute and at least one spatial attribute of the third virtual object at each key position.
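Claims 8-10 describe a branch on the gap between historical and current accuracy; one plausible reading, with hypothetical names and an assumed `is_new_type` flag marking a new-type (second) virtual object:

```python
def evaluate_with_history(history_acc, current_acc, objects, recognition_result,
                          threshold=0.05):
    """Branching logic of claims 8-10 (sketch): compare this run's accuracy
    with the historical accuracy for the same scene; on a large drop,
    attribute it either to a new-type object or to a misrecognized
    known object."""
    error = abs(history_acc - current_acc)
    if error <= threshold:
        return {"status": "stable", "error": error}
    new_objs = [o for o in objects if o.get("is_new_type")]
    if new_objs:
        # Claim 9 path: evaluate recognition of the new-type (second) object.
        return {"status": "new_type_object", "error": error, "objects": new_objs}
    # Claim 10 path: find the known objects that were recognized incorrectly.
    wrong = [o for o, r in zip(objects, recognition_result) if o["type"] != r]
    return {"status": "misrecognized_object", "error": error, "objects": wrong}
```

Folding the per-object attributes (appearance and key-position data) into the returned record would yield the per-object evaluation result the claims describe.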
11. An apparatus for evaluating accuracy of an algorithm, comprising:
an acquisition module configured to acquire a test animation corresponding to a first virtual scene, wherein the test animation comprises the first virtual scene and at least one virtual object in the first virtual scene;
an input module configured to input the test animation into a recognition algorithm to recognize the at least one virtual object;
a receiving module configured to receive a recognition result output by the recognition algorithm for the test animation;
and a processing module configured to generate a corresponding evaluation result according to the recognition result.
12. The apparatus of claim 11, wherein the acquisition module is configured to acquire the test animation corresponding to the first virtual scene by:
acquiring the generated test animation from the processing module;
wherein the processing module is further configured to: construct at least one corresponding virtual object according to at least one physical object in a first physical scene;
render a corresponding first virtual scene according to the first physical scene;
render the at least one virtual object into the first virtual scene;
and generate a corresponding test animation for the rendered first virtual scene.
13. The apparatus of claim 12, wherein the processing module is configured to render a first virtual object into the first virtual scene by:
rendering, in the first virtual scene, at least one appearance attribute and at least one spatial attribute for the first virtual object; wherein the first virtual object is any one of the at least one virtual object, and the at least one spatial attribute is used to reflect the spatial position of the first virtual object in the first virtual scene.
14. The apparatus of claim 13, wherein the processing module is configured to render the at least one appearance attribute and the at least one spatial attribute for the first virtual object in the first virtual scene by:
determining a plurality of key positions corresponding to the first virtual object in the first virtual scene;
and rendering, at each key position, at least one appearance attribute and at least one spatial attribute for the first virtual object.
15. The apparatus of claim 14, wherein the processing module is configured to generate the corresponding test animation for the rendered first virtual scene by:
generating, for each virtual object in the first virtual scene, a corresponding test animation according to the plurality of rendered key positions of that virtual object.
16. The apparatus of claim 15, wherein the processing module is configured to generate the corresponding evaluation result according to the recognition result by:
determining the recognition accuracy of the recognition algorithm according to the recognition result;
and generating a corresponding evaluation result according to the recognition accuracy.
17. The apparatus of claim 16, wherein the processing module is configured to determine the recognition accuracy of the recognition algorithm according to the recognition result by:
determining, according to the recognition result, a recognition type corresponding to each of the at least one virtual object after recognition;
and determining the recognition accuracy of the recognition algorithm according to the real type and the recognition type respectively corresponding to the at least one virtual object.
18. The apparatus of claim 17, wherein the processing module is configured to generate the corresponding evaluation result according to the recognition accuracy by:
if the recognition algorithm recognizes the first virtual scene for the first time, generating a corresponding evaluation result according to the recognition accuracy;
and if the recognition algorithm does not recognize the first virtual scene for the first time, generating a corresponding evaluation result according to a historical recognition accuracy of the recognition algorithm for the first virtual scene and the recognition accuracy of the present recognition process.
19. The apparatus of claim 18, wherein the processing module is configured to generate the corresponding evaluation result according to the historical recognition accuracy of the recognition algorithm for the first virtual scene and the recognition accuracy of the present recognition process by:
determining an error between the historical recognition accuracy of the recognition algorithm for the first virtual scene and the recognition accuracy of the present recognition process;
if the error is greater than a preset difference threshold, determining whether the at least one virtual object comprises a second virtual object of a new type;
and if the at least one virtual object comprises the second virtual object, generating an evaluation result of the recognition algorithm for recognizing the second virtual object according to the error and at least one appearance attribute and at least one spatial attribute of the second virtual object at each key position.
20. The apparatus of claim 19, wherein the processing module is further configured to:
if the at least one virtual object does not comprise the second virtual object, determine, according to the recognition result, a third virtual object of the at least one virtual object that was recognized incorrectly;
and generate an evaluation result of the recognition algorithm for recognizing the third virtual object according to the error and at least one appearance attribute and at least one spatial attribute of the third virtual object at each key position.
21. An electronic device, the electronic device comprising:
a memory for storing a computer program product;
a processor for executing a computer program product stored in said memory, which, when executed, implements the method of any of the preceding claims 1-10.
22. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon computer program instructions which, when executed, implement the method of any of the preceding claims 1-10.
CN202410166814.6A 2024-02-05 2024-02-05 Evaluation method, device, equipment and storage medium for algorithm accuracy Pending CN117994612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410166814.6A CN117994612A (en) 2024-02-05 2024-02-05 Evaluation method, device, equipment and storage medium for algorithm accuracy


Publications (1)

Publication Number Publication Date
CN117994612A true CN117994612A (en) 2024-05-07

Family

ID=90889217




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination