CN115509933A - Scene expression method, model quality quantization method, computer device, and medium

Publication number: CN115509933A
Authority: CN (China)
Prior art keywords: scene, path, verification, topological, sub
Legal status: Pending
Application number: CN202211252727.XA
Other languages: Chinese (zh)
Inventors: 何科君, 郭璁, 乔晓飞
Current and original assignee: Shenzhen Pudu Technology Co Ltd

Classifications

    • G06F 11/3672 - Test management (G06F 11/36 Preventing errors by testing or debugging software; G06F 11/3668 Software testing)
    • G06Q 10/06393 - Score-carding, benchmarking or key performance indicator [KPI] analysis (G06Q 10/0639 Performance analysis of enterprise or organisation operations)


Abstract

The present application relates to a scene expression method, a model quality quantization method, a computer device, and a medium. The method comprises: obtaining a test scene set of a scene to be tested and a verification scene set of the verification scene corresponding to the scene to be tested, obtaining an intersection scene set between the test scene set and the verification scene set, and generating a scene expression degree according to the intersection scene set and the verification scene set. The method can calculate the scene expression degree of a scene to be tested relative to its verification scene even in complex and large-scale scenes. Based on the scene expression degree, the difficulty of calculating the verification index of the model to be quantized from large-scale verification data is avoided: the verification index can be calculated quickly from the scene expression degree and the test index of the model to be quantized, which improves the calculation efficiency of the verification index.

Description

Scene expression method, model quality quantization method, computer device, and medium
Technical Field
The present application relates to the field of computer technology, and in particular to a scene expression method, a model quality quantization method, a computer device, and a medium.
Background
In a model quality evaluation system, an index of the model to be quantized on a test set is generally obtained first, the corresponding index of the model on a verification set is obtained after testing is completed, and the quality of the model to be quantized is then determined comprehensively from the two indexes.
In the related art, the test index and the verification index of the model to be quantized are calculated separately from the test set and the verification set, and the quantization value of the quality of the model to be quantized is determined from them. For large-scale data, however, calculating the quantization value of the model quality in this way is complicated.
Disclosure of Invention
In view of the above, it is necessary to provide a scene expression method, a model quality quantization method, a computer device, and a medium capable of solving the above technical problem.
In a first aspect, the present application provides a scene expression method, including:
acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested, wherein the verification scene is a real scene of the scene to be tested;
acquiring an intersection scene set between the test scene set and the verification scene set;
generating a scene expression degree according to the intersection scene set and the verification scene set, wherein the scene expression degree is used for characterizing the degree to which the scene to be tested represents the verification scene.
In one embodiment, the scene to be tested is a road test scene and the verification scene is a road verification scene;
the acquiring of the test scene set of the scene to be tested and the verification scene set of the verification scene corresponding to the scene to be tested comprises the following steps:
obtaining the test scene set of the scene to be tested by executing a preset scene set acquisition step on the road test scene, and obtaining the verification scene set of the verification scene by executing the scene set acquisition step on the road verification scene.
In one embodiment, the scene set acquisition step includes:
acquiring a data set of the robot in a target scene; the data set comprises attribute information of all topological paths in the target scene; the target scene is a road test scene or a road verification scene;
and according to the attribute information of each topological path, dividing the corresponding topological path to obtain a scene set of the target scene.
In one embodiment, the dividing the corresponding topological paths according to the attribute information of each topological path to obtain a scene set of the target scene includes:
according to the attribute information of each topological path, performing road segment division processing on each topological path to obtain a sub-topological path corresponding to each topological path;
for any topological path, carrying out interval division processing on each sub-topological path in the topological path to obtain a scene subset corresponding to each sub-topological path in the topological path;
and determining a set formed by scene subsets of all topological paths in the target scene as a scene set of the target scene.
In one embodiment, the attribute information includes the crossing state between each topological path and the other topological paths; the performing road segment division processing on each topological path according to the attribute information of each topological path to obtain the sub-topological path corresponding to each topological path comprises the following steps:
determining the total number of path crossing nodes on each topological path according to the crossing state between each topological path and other topological paths;
and if the total number is greater than a preset threshold, performing road segment division processing on the corresponding topological paths according to the positions of the path crossing nodes in the topological paths to obtain the sub-topological paths corresponding to the topological paths.
In one embodiment, the attribute information further includes distances from each point on the center line of each sub-topology path to the left and right edges;
the method for carrying out interval division processing on each sub-topology path in the topology path to obtain a scene subset corresponding to each sub-topology path in the topology path comprises the following steps:
acquiring a left-right distance set of each sub-topology path through the distance from each point on the central line of each sub-topology path to the edges of the left side and the right side; the left and right distance sets comprise the distances from each point on the central line of each sub-topology path to the edges on the left and right sides;
and according to a preset interval, carrying out interval division processing on the left and right distance set of each sub-topology path to obtain a scene subset corresponding to each sub-topology path.
In one embodiment, the performing interval division processing on the left-right distance set of each sub-topology path according to a preset interval to obtain a scene subset corresponding to each sub-topology path includes:
according to a preset interval, performing interval division processing on the left distance set of each sub-topology path to obtain a left scene subset, and according to the preset interval, performing interval division processing on the right distance set of each sub-topology path to obtain a right scene subset;
and combining the left scene subset and the right scene subset in left-right correspondence to obtain the scene subsets corresponding to the sub-topology paths.
In a second aspect, the present application provides a method for quantifying model quality, the method comprising:
inputting a test set of a test scene into a model to be quantized to obtain a test index;
obtaining a scene expression degree between a test scene and a verification scene through the steps of the method in any embodiment of the first aspect; the verification scene is a real scene corresponding to the test scene;
generating a verification index of the model to be quantized according to the scene expression degree and the test index;
and determining the quantization value of the quality of the model to be quantized according to the verification index and the test index.
In a third aspect, the present application provides a scene expression apparatus, including:
the scene set acquisition module is used for acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested, wherein the verification scene is a real scene of the scene to be tested;
the intersection acquisition module is used for acquiring an intersection scene set between the test scene set and the verification scene set;
the scene expression degree generating module is used for generating a scene expression degree according to the intersection scene set and the verification scene set, wherein the scene expression degree is used for characterizing the degree to which the scene to be tested represents the verification scene.
In a fourth aspect, the present application provides a model quality quantification apparatus, comprising:
the test index acquisition module is used for inputting the test set of the test scene into the model to be quantized to obtain a test index;
a scene expression degree obtaining module, configured to obtain a scene expression degree between the test scene and the verification scene through the steps of the method in any embodiment of the first aspect; the verification scene is a real scene corresponding to the test scene;
the verification index acquisition module is used for generating a verification index of the model to be quantized according to the scene expression degree and the test index;
and the quantization value determining module is used for determining the quantization value of the quality of the model to be quantized according to the verification index and the test index.
In a fifth aspect, the present application provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method in any of the embodiments of the first aspect when executing the computer program.
In a sixth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method in any of the embodiments of the first aspect described above.
In a seventh aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method in any of the embodiments of the first aspect.
According to the scene expression method, the model quality quantization method, the computer device and the medium, the computer device can obtain a test scene set of a scene to be tested and a verification scene set of the verification scene corresponding to the scene to be tested, obtain an intersection scene set between the test scene set and the verification scene set, and generate a scene expression degree according to the intersection scene set and the verification scene set. The method can calculate the scene expression degree of the scene to be tested relative to the verification scene even in complex and large-scale scenes, so the difficulty of calculating the verification index of the model to be quantized from large-scale verification data is avoided: the verification index can be calculated quickly from the scene expression degree and the test index of the model to be quantized, which improves the calculation efficiency of the verification index. In addition, because the method is applicable to complex scenes and large-scale scenes, the range of applicable scenes of the scene expression method is increased and its universality is improved.
Drawings
FIG. 1 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 2 is a flow diagram illustrating a method for scene representation in one embodiment;
FIG. 3 is a diagram of elements in a verification scenario set in another embodiment;
FIG. 4 is a diagram illustrating elements in a test scenario set in another embodiment;
FIG. 5 is a flowchart illustrating a method for scene set acquisition according to one embodiment;
fig. 6 is a schematic flowchart of a method for dividing the corresponding topological paths to obtain a scene set of a target scene according to the attribute information of each topological path in another embodiment;
fig. 7 is a schematic flow chart of a method for performing segment division processing on each topological path in another embodiment;
FIG. 8 is a schematic diagram illustrating a topological path in a road test scenario or a road verification scenario according to another embodiment;
FIG. 9 is a flowchart illustrating a method for performing interval division processing on sub-topology paths in a topology path in another embodiment;
fig. 10 is a schematic flowchart of a specific method for performing interval division processing on the left-right distance sets of each sub-topology path in another embodiment;
FIG. 11 is a flow diagram that illustrates a method for quantifying model quality, according to one embodiment;
FIG. 12 is a block diagram showing the construction of a scene representation apparatus according to an embodiment;
fig. 13 is a block diagram showing the structure of a model quality quantification apparatus in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the present application and are not intended to limit it.
The scene expression method provided by the present application can be applied to the computer device shown in FIG. 1. Optionally, the computer device may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device, and may also be implemented by an independent server or by a server cluster formed by a plurality of servers; this embodiment does not limit the specific form of the computer device.
The computer device can obtain a test scene set of a scene to be tested and a verification scene set of the verification scene corresponding to the scene to be tested, and determine from these two sets the degree to which the scene to be tested represents the verification scene, that is, the scene expression degree. The verification index of the model to be quantized can then be calculated quickly from the scene expression degree and the test index of the model to be quantized, which reduces the complexity of calculating the quantization value of the quality of the model to be quantized. On this basis, the embodiments of the present application first describe in detail how the scene expression degree of the scene to be tested relative to the verification scene is determined.
FIG. 2 is a schematic flowchart of a scene expression method provided in an embodiment of the present application. The method is described by taking its application to the computer device in FIG. 1 as an example, and includes the following steps:
s100, a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested are obtained. Wherein, the verification scene is a real scene of the scene to be tested.
Specifically, the scene to be tested can be a scene actually set up by the user according to the verification scene. Optionally, the verification scenario may be a scenario within a preset spatial range. The preset space range can be an indoor specific space range and can also be an outdoor specific space range; the size of the specific space range can be determined according to actual requirements to be tested, and the embodiment of the application is not limited. Optionally, the verification scene may be a restaurant, a library, or the like, and the scene may include obstacles, pedestrians, robots, hardware facilities, or the like. Illustratively, the hardware facilities may be tables, chairs, cabinets and the like in a restaurant, and may also be bookshelves, book borrowing tables, desks, chairs and the like in a library, and of course, may also be infrastructures and the like which should be provided in other scenes.
The verification scene is a real scene in real life, into which dynamic objects or pedestrians may enter at any time. The scene to be tested, by contrast, is a static scene actually built by the user in advance according to the verification scene, and it does not change in real time with the dynamic objects or pedestrians appearing in the verification scene. Therefore, in practical applications, the test scene set of the scene to be tested is different from the verification scene set of the verification scene corresponding to the scene to be tested.
It should be noted that the computer device may obtain the test scene set of the scene to be tested in real time, and may also obtain the pre-stored test scene set of the scene to be tested from the local, cloud, memory, and other locations. In the embodiment of the application, in order to acquire real data in a verification scene and improve the accuracy of the determined quantization value of the quality of the model to be quantized, when the quantization value of the quality of the model to be quantized is calculated, the computer device may acquire a verification scene set of the verification scene corresponding to the scene to be tested in real time. Alternatively, the quantized value of the model quality to be quantized may represent the evaluation result of the model quality to be quantized.
S200, acquiring an intersection scene set between the test scene set and the verification scene set.
Specifically, the computer device may determine the same elements in the test scene set and the verification scene set according to the test scene set and the verification scene set, and combine the same elements to generate an intersection scene set between the test scene set and the verification scene set. Optionally, the total number of elements in the test scenario set and the total number of elements in the verification scenario set may be the same or different.
In the embodiment of the present application, the total number of elements in the test scene set and the total number of elements in the verification scene set are not limited, so the scene expression method is applicable to complex scenes within a limited spatial range as well as to complex or simple scenes within a large spatial range. Moreover, even when the total number of elements in the test scene set is the same as that in the verification scene set, the elements in the test scene set may still differ from the elements in the verification scene set.
Each element included in the verification scene set may be information such as the size, position and shape of an obstacle, a pedestrian, a robot or a hardware facility in the verification scene; correspondingly, each element included in the test scene set may be information such as the size, position and shape of an obstacle, a pedestrian, a robot or a hardware facility in the scene to be tested.
And S300, generating a scene expression degree according to the intersection scene set and the verification scene set. The scene expressive degree is used for representing the representing degree of the scene to be tested relative to the verification scene.
Specifically, the computer device may perform arithmetic operation on the elements in the intersection scene set and the elements in the verification scene set to obtain a representative degree of the scene to be tested with respect to the verification scene, i.e., a scene expression degree. Alternatively, the arithmetic operations may be addition operations, subtraction operations, multiplication operations, logarithm operations, exponential operations, and/or derivative operations, among others.
In the embodiment of the application, the computer device may determine the total number of elements in the intersection scene set and the total number of elements in the verification scene set, and then divide the total number of elements in the intersection scene set by the total number of elements in the verification scene set to obtain the scene expression degree of the to-be-tested scene relative to the verification scene.
For example, FIG. 3 is a schematic diagram of the elements in the verification scene set (circle, triangle, heart and parallelogram), and FIG. 4 is a schematic diagram of the elements in the test scene set of the scene to be tested (circle, triangle, heart and semicircle). The intersection scene set of the verification scene set and the test scene set then consists of a triangle, a circle and a heart, so the total number of elements in the intersection scene set is 3, while the total number of elements in the verification scene set is 7, namely three circles, two triangles, a parallelogram and a heart. The scene expression degree of the scene to be tested relative to the verification scene is therefore equal to 3/7.
A larger scene expression degree means that more elements in the test scene set are the same as the corresponding elements in the verification scene set; a smaller scene expression degree means that fewer elements are the same; a scene expression degree equal to 0 means that no element in the test scene set is the same as a corresponding element in the verification scene set.
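The following is a minimal Python sketch of steps S100 to S300, given for illustration only; the function name and the shape labels are assumptions, and the test scene set is taken to contain one element of each of its four types so that the intersection count matches the example above. It treats the scene sets as multisets of elements and computes the scene expression degree as the element count of the intersection divided by the element count of the verification scene set, reproducing the 3/7 result.

    from collections import Counter

    def scene_expression_degree(test_set, verification_set) -> float:
        """Element count of the intersection divided by the element count of the verification set."""
        test, verification = Counter(test_set), Counter(verification_set)
        intersection = test & verification  # multiset intersection (S200)
        return sum(intersection.values()) / sum(verification.values())

    # The example of FIG. 3 and FIG. 4: the verification scene set has 7 elements and the
    # intersection (one circle, one triangle, one heart) has 3, so the degree is 3/7.
    verification = ["circle"] * 3 + ["triangle"] * 2 + ["parallelogram", "heart"]
    test = ["circle", "triangle", "heart", "semicircle"]
    print(scene_expression_degree(test, verification))  # 0.428571... = 3/7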
The scene expression method in the embodiment of the present application can obtain a test scene set of a scene to be tested and a verification scene set of the verification scene corresponding to the scene to be tested, obtain an intersection scene set between the test scene set and the verification scene set, and generate a scene expression degree according to the intersection scene set and the verification scene set. The scene expression degree between the scene to be tested and the verification scene is obtained by a simple algorithmic calculation. The method is also applicable to calculating the scene expression degree in complex scenes and large-scale scenes, so the difficulty of calculating the verification index of the model to be quantized from large-scale verification data is avoided: the verification index can be calculated quickly from the scene expression degree and the test index of the model to be quantized, which improves the calculation efficiency of the verification index. In addition, because the method is applicable to complex scenes and large-scale scenes, the range of applicable scenes of the scene expression method is increased and its universality is improved.
The following embodiment of the present application will describe how to obtain a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested. In an embodiment, the scene to be tested is a road test scene, and the verification scene is a road verification scene; the step in S100 may include: the method comprises the steps of obtaining a test scene set of a scene to be tested by executing a preset scene set acquisition step on a road test scene, and obtaining a verification scene set of a verification scene by executing a scene set acquisition step on a road verification scene.
In the embodiment of the application, the verification scene is a road verification scene, and the scene to be tested is a road test scene, that is, the verification scene and the scene to be tested not only include obstacles, pedestrians, robots, hardware facilities and the like, but also include paths of passable areas of the pedestrians and the robots.
It should be noted that the computer device may obtain an image acquired by the image acquisition device for the road test scene, directly obtain a test scene set of the scene to be tested, and may also obtain an image acquired by the image acquisition device for the road test scene, and then perform preprocessing, analysis processing, data conversion processing, operation processing, and the like on the acquired image, so as to obtain a test scene set of the scene to be tested.
Meanwhile, the computer device can acquire the image acquired by the image acquisition device on the road verification scene to directly obtain the verification scene set of the verification scene, and can also acquire the image acquired by the image acquisition device on the road verification scene first, and then perform preprocessing, analysis processing, data conversion processing, operation processing and the like on the acquired image to obtain the verification scene set of the verification scene.
It is understood that the image capturing device may be a camera, a video camera, or the like capable of capturing images. The preprocessing can be data enhancement processing, abnormal data elimination processing, normalization processing and the like. Alternatively, the data conversion process may be a data shift process, a data rotation process, or the like. Alternatively, the above-described arithmetic processing may be a process of performing an arithmetic operation on data.
As shown in fig. 5, the scene set acquiring step may include the following steps:
and S110, acquiring a data set of the robot in the target scene. The data set comprises attribute information of all topological paths in a target scene; the target scene is a road test scene or a road verification scene.
In the embodiment of the present application, in order to quantify the quality of the model to be quantized through an intelligent scene, a robot is included in both the road test scene and the road verification scene. Optionally, the topological paths may be understood as the passable paths in the road test scene and the road verification scene other than the areas occupied by hardware facilities. Optionally, the attribute information of a topological path may be information such as the size, color and shape of the topological path.
It should be noted that the computer device may acquire images of the robot in the road test scene in real time through an image acquisition device, recognize the acquired images with an image recognition algorithm, and obtain the data set of the road test scene according to the recognition results; likewise, it may acquire images of the robot in the road verification scene in real time through the image acquisition device, recognize them with the image recognition algorithm, and obtain the data set of the road verification scene according to the recognition results.
Alternatively, the computer device may obtain pre-acquired images of the robot in the road test scene from a local location, the cloud, a memory or the like, recognize them with the image recognition algorithm, and obtain the data set of the road test scene according to the recognition results; in the same way, it may obtain pre-acquired images of the robot in the road verification scene, recognize them, and obtain the data set of the road verification scene according to the recognition results.
The purpose of the image identification is to detect attribute information of the topological path in the image, and therefore, the identification result may be the attribute information of the topological path. Alternatively, the image recognition algorithm may be a fast convolutional neural network algorithm, a target detection algorithm, a feature extraction algorithm, or the like.
And S120, dividing the corresponding topological paths according to the attribute information of each topological path to obtain a scene set of the target scene.
Specifically, a topological path is a straight path or a broken-line path; a broken-line path may be understood as a path formed by connecting two short straight paths, where one end point of one short straight path is connected with one end point of the other. Optionally, an end point of a path may be the start point or the end point of the path.
Optionally, for each topological path in the road test scenario and the road verification scenario, the computer device may divide the topological path to obtain a plurality of sub-topological paths in the corresponding topological path, determine attribute information of each sub-topological path in the topological path according to the attribute information of the topological path, perform arithmetic operation and/or data conversion on the attribute information of each sub-topological path in the topological path to obtain a first processing result, and determine the first processing result as a scenario set of a target scenario, that is, a test scenario set of a to-be-tested scenario and a verification scenario set of a verification scenario.
The scene expression method in the embodiment of the present application can obtain the test scene set of the scene to be tested by executing the preset scene set acquisition step on the road test scene, obtain the verification scene set of the verification scene by executing the scene set acquisition step on the road verification scene, and then quickly calculate the scene expression degree between the scene to be tested and the verification scene based on the test scene set and the verification scene set. The difficulty of calculating the verification index of the model to be quantized from large-scale verification data can therefore be avoided based on the scene expression degree, and the calculation of the verification index becomes simpler.
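As an illustration of the data produced by the scene set acquisition step, the sketch below defines a minimal Python data model for a topological path and its attribute information. The field names (crossing states, crossing node positions, center-line-to-edge distances) are assumptions chosen for readability; the application only requires that such attribute information be available for every topological path in the data set of the target scene.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class TopologicalPath:
        path_id: int
        # Crossing state with every other topological path (True = the two paths cross).
        crosses: Dict[int, bool] = field(default_factory=dict)
        # Positions of the path crossing nodes along this path (start and end points included).
        crossing_node_positions: List[float] = field(default_factory=list)
        # Distance from each sampled point on the center line to the left and right edges.
        left_distances: List[float] = field(default_factory=list)
        right_distances: List[float] = field(default_factory=list)

    # The data set of a target scene (road test scene or road verification scene) is then the
    # collection of its topological paths with this attribute information attached.
    DataSet = List[TopologicalPath]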
In order to reduce the complexity of calculating the scene expression between the to-be-tested scenes and the relative verification scenes, the topological paths can be divided. In an embodiment, as shown in fig. 6, the step of dividing, in the step S120, the corresponding topological paths according to the attribute information of each topological path to obtain a scene set of the target scene may include:
and S121, performing road section division processing on each topological path according to the attribute information of each topological path to obtain sub-topological paths corresponding to each topological path.
Specifically, for each topological path in the road test scene and the road verification scene, the computer device may perform the road segment division processing on the topological path according to the shape of the topological path, determine the path with the same shape in the topological path as the sub-topological path corresponding to the topological path, further perform the road segment division processing on the topological path according to the color of the topological path, and determine the path with the same color in the topological path as the sub-topological path corresponding to the topological path. In addition, the computer device may further perform road segment division processing on each topological path according to other information in the attribute information of the topological path to obtain sub-topological paths corresponding to each topological path, that is, sub-topological paths corresponding to each topological path in the road test scene and sub-topological paths corresponding to each topological path in the road verification scene.
And S122, aiming at any topological path, carrying out interval division processing on each sub-topological path in the topological path to obtain a scene subset corresponding to each sub-topological path in the topological path.
Specifically, for any one of the topology paths in the road test scenario and the road verification scenario, the computer device may perform interval division processing on each sub-topology path in the topology path based on the attribute information of the topology path to obtain a scene subset corresponding to each sub-topology path in the topology path, that is, the scene subset corresponding to each sub-topology path in the road test scenario and the scene subset corresponding to each sub-topology path in the road verification scenario.
And S123, determining a set formed by scene subsets of all topological paths in the target scene as a scene set of the target scene.
It should be noted that the computer device may combine scene subsets corresponding to all the topological paths in the road test scene to obtain a scene set of the road test scene. Meanwhile, the computer device can combine scene subsets corresponding to all the topological paths in the road verification scene to obtain a scene set of the road verification scene.
According to the scene expression method, the corresponding topological paths can be divided according to the attribute information of each topological path to obtain the scene set of the target scene. This reduces the dimensionality of the attribute information of the topological paths, and the scene expression degree is then calculated based on the lower-dimensional scene set of the target scene, which reduces the complexity of calculating the scene expression degree between the scene to be tested and the verification scene and improves the speed and efficiency of the calculation.
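The structure of steps S121 to S123 can be summarized with the following Python sketch. It is purely illustrative: the two helper callables stand for the road segment division (S121) and interval division (S122) described below, and their names are assumptions rather than terms from the application.

    from typing import Any, Callable, List, Set

    def build_scene_set(
        paths: List[Any],
        split_into_subpaths: Callable[[Any], List[Any]],   # S121: road segment division
        subpath_scene_subset: Callable[[Any], Set[Any]],   # S122: interval division
    ) -> Set[Any]:
        """S123: the scene set is the union of the scene subsets of all sub-topological paths."""
        scene_set: Set[Any] = set()
        for path in paths:
            for sub in split_into_subpaths(path):
                scene_set |= subpath_scene_subset(sub)
        return scene_set

    # Toy usage: paths are strings, "sub-paths" are their halves, scene subsets are character sets.
    print(build_scene_set(["abba", "cd"], lambda p: [p[:2], p[2:]], lambda s: set(s)))
    # {'a', 'b', 'c', 'd'} (the order of a Python set is unspecified)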
The following description will be given to how to perform the road segment division processing on each topological path according to the attribute information of each topological path to obtain the sub-topological paths corresponding to each topological path. In an embodiment, the attribute information includes a crossing state between topology paths; as shown in fig. 7, the step in S121 described above may be implemented by:
s1211, determining the total number of path crossing nodes on each topological path according to the crossing state between each topological path and other topological paths.
Specifically, for any topological path, path crossing nodes may or may not exist between the topological path and the other topological paths in the road test scene; therefore, the crossing state between each topological path and another topological path is either crossing or non-crossing.
It should be noted that there is at most one path crossing node between two topological paths; therefore, the total number of path crossing nodes on a topological path equals the number of other topological paths that cross it. Optionally, for each topological path in the road test scene and the road verification scene, the computer device may count, according to the crossing states between the topological path and the other topological paths, the number of other topological paths whose crossing state is crossing, and determine this count as the total number of path crossing nodes on the topological path, that is, the total number of path crossing nodes on the topological path in the road test scene or in the road verification scene.
For example, FIG. 8 is a schematic diagram of topological paths in a road test scene or a road verification scene. In the figure, the areas outside the solid-line region are non-passable areas, and the areas inside it are passable path areas. FIG. 8 shows the center line D (the line segment with an arrow) of the longest main topological path; the topological paths on both sides of the main topological path, shown as topological path 1 to topological path 9, all have path crossing nodes with the main topological path. Because the topological paths have width, the connection between the main topological path and any one of them has at least two path crossing nodes.
And S1212, if the total number is greater than the preset threshold, performing road segment division processing on the corresponding topological paths according to the positions of the path crossing nodes in the topological paths to obtain sub-topological paths corresponding to the topological paths.
It should be noted that a path crossing node may be located at an end point of the topological path, i.e., its start point or end point, or in the middle of the topological path. Optionally, for any topological path in the road test scene or the road verification scene, the computer device may determine whether the total number of path crossing nodes on the topological path is greater than the preset threshold; if so, it may perform road segment division processing on the topological path according to the path crossing nodes located at the end points of the topological path, or according to the path crossing nodes located in the middle of the topological path, to obtain the sub-topological paths corresponding to the topological path, that is, the sub-topological paths corresponding to the topological path in the road test scene and those in the road verification scene.
Of course, the computer device may also perform the road segment division processing on the corresponding topological path according to the path crossing node located in the middle of the topological path and the path crossing node located at the end point of the topological path in the topological path, so as to obtain the sub-topological path corresponding to the topological path.
In the embodiment of the present application, the computer device may perform, according to a path crossing node located in the middle of the topological path in the topological path, a road segment division process on the corresponding topological path to obtain a sub-topological path corresponding to the topological path.
Continuing with the example of fig. 8, there are two path crossing nodes between the topology path 1 and the main topology path, where one path crossing node is located at an end point of the main topology path, and at this time, the main topology path may be subjected to the segment division processing only according to the path crossing node in the middle of the main topology path.
Optionally, the positions of the path intersection nodes on the left and right sides of the topological path may or may not correspond to each other. During the road segment division processing, the positions of the path crossing nodes on the left and right sides of the topological path need to be combined, and the road segment division processing is executed to obtain the sub-topological paths corresponding to the topological path. For example, the path crossing node on the right side of the topology path 1 in fig. 8 corresponds to the path crossing node on the left side of the topology path 6, so that in the link division processing, the main topology path may be divided by using one of the two path crossing nodes as the division position.
In this embodiment of the present application, since the path crossing node on each topological path includes the start point and the end point of the topological path, the preset threshold may be 3, that is, the total number of the path crossing nodes on the topological path is greater than 3, and then the road segment division processing is performed on the topological path. Optionally, if it is determined that the total number of the path crossing nodes on the topological path is smaller than the preset threshold, the road segment division processing is not performed on the topological path under the condition.
Illustratively, if the total number of path crossing nodes on a topological path is equal to 2, the path crossing nodes on the topological path are only its start point and end point; in this case, road segment division processing is not required, and the topological path can be processed directly as a whole.
The scene expression method in the embodiment of the present application can determine the total number of path crossing nodes on each topological path according to the crossing state between each topological path and the other topological paths and, if the total number is greater than the preset threshold, perform road segment division processing on the corresponding topological path according to the positions of the path crossing nodes in the topological path to obtain the sub-topological paths corresponding to the topological path. The method divides only the topological paths that need road segment division and performs no unnecessary division on the others, which reduces the amount of computation in the scene expression method, saves computation time, and reduces the complexity of the method.
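As an illustration of S1211 and S1212, the sketch below assumes that each topological path is described by the ordered positions of its path crossing nodes along the path (start point and end point included). A path is divided at its interior crossing nodes only when the total number of crossing nodes exceeds the preset threshold of 3 mentioned above; the representation and the function name are assumptions made for illustration.

    from typing import List, Tuple

    def split_at_crossing_nodes(
        crossing_node_positions: List[float],  # includes the start point and the end point
        threshold: int = 3,                    # preset threshold from this embodiment
    ) -> List[Tuple[float, float]]:
        """Return the sub-topological paths as (start, end) position pairs."""
        nodes = sorted(crossing_node_positions)
        if len(nodes) <= threshold:
            # Only the start/end points (or too few crossings): keep the path as a whole.
            return [(nodes[0], nodes[-1])]
        # Otherwise split the path at every interior path crossing node.
        return [(nodes[i], nodes[i + 1]) for i in range(len(nodes) - 1)]

    # A path whose crossing nodes are its start (0), its end (10) and interior points 3 and 7
    # has 4 > 3 crossing nodes, so it is divided into three sub-topological paths.
    print(split_at_crossing_nodes([0.0, 3.0, 7.0, 10.0]))  # [(0.0, 3.0), (3.0, 7.0), (7.0, 10.0)]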
In one embodiment, the attribute information further includes distances from each point on a center line of each sub-topology path to the left and right edges; as shown in fig. 9, the step of performing interval division processing on each sub-topology path in the topology path in S122 to obtain the scene subset corresponding to each sub-topology path in the topology path may be implemented by the following steps:
and S1221, acquiring a left-right distance set of each sub-topology path according to the distance from each point on the central line of each sub-topology path to the left-right side edge. The left and right distance sets comprise distances from each point on the central line of each sub-topology path to the left and right edges.
Specifically, since the distances from each point on the center line of different sub-topology paths to the left and right edges may be different, the robot may make different motion decisions in different sub-topology paths even if the robot moves in the same direction.
Illustratively, if the robot moves from the sub-topology path 1 to the sub-topology path 2, and the sub-topology path 2 has a width greater than that of the sub-topology path 1, due to the limited field of view of the robot, there is a case where a pedestrian suddenly jumps out from the blind area of the sub-topology path 2, and at this time, the robot should decelerate and move along the middle of the sub-topology path during movement to reduce the collision problem between the robot and the pedestrian. If the widths of the sub-topology path 1 and the sub-topology path 2 are the same, the robot can move at a uniform speed along the middle of the sub-topology path during movement, so that the problem of collision between the robot and a pedestrian is reduced.
It should be noted that, for each topological path in the road test scene and the road verification scene, the computer device may split the distances from each point on the center line of every sub-topological path of the topological path to the left and right edges, to obtain the left distance set and the right distance set of each sub-topological path in the topological path.
Optionally, the distance from each point on the center line of the sub-topology path to the left edge and the distance from each point on the center line of the sub-topology path to the right edge may be equal or unequal. Optionally, the distance from each point on the center line of the sub-topology path to the edge may be the distance from each point on the center line of the sub-topology path to the edge of the sub-topology path, or the distance from each point on the center line of the sub-topology path to the end point of another topology path connected to the sub-topology path. For example, the distance between a point on the center point of one of the sub-topology paths on the main topology path and the end point of the topology path 1 connected to the main topology path is shown by a double-headed arrow in fig. 8.
The left distance set only comprises the distance from each point on the center line of each sub-topology path of the topology path to the left edge, and the right distance set only comprises the distance from each point on the center line of each sub-topology path of the topology path to the right edge. Optionally, the arrangement order of the distances from the points on the center line of each sub-topology path in the left distance set to the left edge may be any order, and the arrangement order of the distances from the points on the center line of each sub-topology path in the right distance set to the right edge may be any order.
And S1222, according to the preset interval, performing interval division processing on the left and right distance sets of each sub-topology path to obtain a scene subset corresponding to each sub-topology path.
Specifically, for each topological path in the road test scene and the road verification scene, the computer device may perform interval division processing on the left distance set of each sub-topological path in the topological path according to a preset interval to obtain a division result of the left distance set, perform data amplification processing on the right distance set based on this division result to obtain a data amplification result, and then perform analysis, arithmetic operation, data conversion and similar processing on the division result of the left distance set and the data amplification result corresponding to the right distance set to obtain the scene subset corresponding to each sub-topological path in the topological path.
Alternatively, the computer device may perform interval division processing on the right distance set of each sub-topological path in the topological path according to the preset interval to obtain a division result of the right distance set, perform data amplification processing on the left distance set based on this division result to obtain a data amplification result, and then perform analysis, arithmetic operation, data conversion and similar processing on the division result of the right distance set and the data amplification result corresponding to the left distance set to obtain the scene subset corresponding to each sub-topological path in the topological path.
It should be noted that the preset interval may be a constant that is greater than 0, smaller than the difference between the maximum distance and the minimum distance in the left distance set, and smaller than the difference between the maximum distance and the minimum distance in the right distance set.
The following describes, according to an embodiment of the present application, how to perform interval division processing on the left and right distance sets of each sub-topology path according to a preset interval to obtain the scene subset corresponding to each sub-topology path. In an embodiment, as shown in fig. 10, the step in S1222 may include:
S1222a, performing interval division processing on the left distance set of each sub-topology path according to a preset interval to obtain a left scene subset, and performing interval division processing on the right distance set of each sub-topology path according to the preset interval to obtain a right scene subset.
It can be understood that, for each topological path in the road test scene and the road verification scene, the computer device may perform interval division processing on the left distance set corresponding to each sub-topological path in the topological path according to the preset interval to obtain a left scene subset, and perform interval division processing on the right distance set corresponding to each sub-topological path in the topological path according to the preset interval to obtain a right scene subset.
For example, if the left distance set is {5, 20, 25, 30, 33, 40, 60, 72} and the preset interval is 20, performing interval division processing on the left distance set according to the preset interval of 20 yields the left scene subsets {a1, a2, a3, a4}: a1 takes the values in the interval (0, 20], namely a1 is {5, 20}; a2 takes the values in the interval (20, 40], namely a2 is {25, 30, 33, 40}; a3 takes the values in the interval (40, 60], namely a3 is {60}; and a4 takes the values in the interval (60, 80], namely a4 is {72}. Optionally, the lower bound of the smallest interval is smaller than the minimum distance in the left distance set, and the upper bound of the largest interval is larger than the maximum distance in the left distance set.
And S1222b, correspondingly combining the left scene subset and the right scene subset left and right to obtain the scene subsets corresponding to the sub-topology paths.
The computer device may perform left-right corresponding one-to-one combination on each interval in the left scene subset and the corresponding interval in the right scene subset according to any combination mode or a preset combination rule, so as to obtain a scene subset corresponding to each sub-topology path.
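The interval division of S1222a and the left-right combination of S1222b can be illustrated with the following sketch, which reproduces the worked example above (preset interval 20): distances are grouped into the half-open intervals (0, 20], (20, 40], (40, 60], (60, 80], and the left and right interval indices are then paired per center-line point. The pairing rule shown here is only one of the combination modes the embodiment allows, and the function names are assumptions.

    import math
    from collections import defaultdict
    from typing import Dict, List, Set, Tuple

    def bucket(distances: List[float], interval: float) -> Dict[int, List[float]]:
        """Group distances by interval index: index 1 covers (0, interval], index 2 covers (interval, 2*interval], and so on."""
        buckets: Dict[int, List[float]] = defaultdict(list)
        for d in distances:
            buckets[math.ceil(d / interval)].append(d)
        return dict(buckets)

    def scene_subset(left: List[float], right: List[float], interval: float = 20.0) -> Set[Tuple[int, int]]:
        """Pair the left and right interval indices of each center-line point (S1222b)."""
        return {(math.ceil(l / interval), math.ceil(r / interval)) for l, r in zip(left, right)}

    left_distance_set = [5, 20, 25, 30, 33, 40, 60, 72]
    print(bucket(left_distance_set, 20))
    # {1: [5, 20], 2: [25, 30, 33, 40], 3: [60], 4: [72]}  -> the subsets a1, a2, a3, a4 of the example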
According to the scene expression method, the left and right distance sets of each sub-topology path can be obtained through the distance from each point on the central line of each sub-topology path to the edges of the left and right sides, interval division processing is carried out on the left and right distance sets of each sub-topology path according to the preset interval, and the scene subsets corresponding to each sub-topology path are obtained, so that the reality and the accuracy of the obtained scene subsets can be improved, the scene set of a target scene can be determined through the scene subsets, and the reality and the accuracy of the obtained scene set are improved.
Fig. 11 is a schematic flowchart of a method for quantifying model quality according to an embodiment of the present application, which is illustrated by applying the method to the computer device in fig. 1, and includes the following steps:
s400, inputting the test set of the test scene into the model to be quantified to obtain a test index.
Specifically, the test scenario may be the road test scenario. Optionally, the computer device may obtain an image acquired by the image acquisition device on the test scene, and directly obtain the test set of the test scene, and may also obtain an image acquired by the image acquisition device on the test scene, and then perform preprocessing, analysis processing, data conversion processing, operation processing, and the like on the acquired image, so as to obtain the test set of the test scene.
Further, the computer device may input the test set of the test scenario into the model to be quantized, so as to obtain a test index of the model to be quantized. Optionally, the model to be quantized may be a neural network model or an algorithm model, and the embodiment of the present application is not limited thereto.
S500, acquiring a scene expression degree between a test scene and a verification scene by executing the steps of the scene expression method in any embodiment corresponding to the above-mentioned FIG. 2, FIGS. 5-7 and FIGS. 9-11. And the verification scene is a real scene corresponding to the test scene.
Specifically, the verification scenario may be the road verification scenario described above. Optionally, the specific implementation process of the step S500 is explained in detail in the embodiments corresponding to fig. 2, fig. 5 to 7, and fig. 9 to 11, and is not described again. The test scene and the verification scene can be complex scenes in a limited space range, and can also be complex scenes or simple scenes in a large space range.
S600, generating a verification index of the model to be quantized according to the scene expression degree and the test index.
It should be noted that the computer device may perform an arithmetic operation on the scene expression degree and the test index of the model to be quantized to obtain the verification index of the model to be quantized. In the embodiment of the present application, the arithmetic operation is a multiplication: the scene expression degree is multiplied by the test index of the model to be quantized to obtain the verification index of the model to be quantized.
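A minimal sketch of the multiplication described for step S600; the function and variable names, and the example numbers, are illustrative only.

def verification_index(scene_expression_degree: float, test_index: float) -> float:
    # Verification index as the product of the scene expression degree and the test index.
    return scene_expression_degree * test_index

print(verification_index(0.8, 0.95))  # 0.76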
S700, determining a quantization value of the quality of the model to be quantized according to the verification index and the test index.
In practical applications, the computer device may perform arithmetic operations, comparison operations, analysis operations, and the like on the verification index and the test index to determine a quantization value of the quality of the model to be quantized.
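Step S700 leaves the concrete operation open (arithmetic, comparison, or further analysis). The sketch below quantizes model quality as a simple weighted combination of the verification index and the test index; this particular formula is an assumption chosen only to make the flow concrete, not the embodiment's prescribed computation.

def quality_quantization(verification_index: float, test_index: float,
                         weight: float = 0.5) -> float:
    # One assumed arithmetic quantization: a weighted average of the two indices.
    return weight * verification_index + (1.0 - weight) * test_index

# Continuing the hypothetical numbers from the previous snippets.
print(quality_quantization(0.76, 0.95))  # 0.855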
The model quality quantization method provided by the embodiment of the application inputs a test set of a test scene into a model to be quantized to obtain a test index, obtains a scene expression degree between the test scene and a verification scene, generates a verification index of the model to be quantized according to the scene expression degree and the test index, and determines a quantization value of the quality of the model to be quantized according to the verification index and the test index. By calculating the scene expression degree between the test scene and the verification scene, the method can rapidly compute the verification index of the model to be quantized and then determine the quantization value of the model quality from the verification index and the test index, which speeds up model quality quantization, shortens the time it takes, and reduces its complexity. At the same time, the method is applicable to complex scenes and large-scale scenes, which broadens the scenes to which the model quality quantization method applies and improves its universality.
To facilitate understanding by those skilled in the art, the scene expression method provided by the present application is described below with a computer device as the execution subject. Specifically, the method includes:
(1) Obtaining a test scene set of the scene to be tested by executing a preset scene set acquisition step on the road test scene, and obtaining a verification scene set of the verification scene by executing the scene set acquisition step on the road verification scene.
The scene set acquisition step comprises:
(2) Acquiring a data set of the robot in a target scene; the data set comprises attribute information of all topological paths in the target scene; the target scene is a road test scene or a road verification scene.
(3) And determining the total number of path crossing nodes on each topological path according to the crossing state between each topological path and other topological paths in the attribute information.
(4) And if the total number is greater than the preset threshold value, segmenting the corresponding topological paths according to the positions of the path cross nodes in the topological paths to obtain sub-topological paths corresponding to the topological paths.
(5) Aiming at any topological path, acquiring the left-right distance set of each sub-topology path from the distances, recorded in the attribute information, from each point on the center line of each sub-topology path to the left and right edges; the left-right distance set comprises the distances from each point on the center line of each sub-topology path to the left and right edges.
(6) According to a preset interval, performing interval division processing on the left distance set of each sub-topology path to obtain a left scene subset, and performing interval division processing on the right distance set of each sub-topology path to obtain a right scene subset.
(7) Combining the left scene subsets with the corresponding right scene subsets to obtain the scene subsets corresponding to each sub-topology path.
(8) And determining a set formed by scene subsets of all topological paths in the target scene as a scene set of the target scene.
(9) And acquiring an intersection scene set between the test scene set and the verification scene set.
(10) Generating a scene expression degree according to the intersection scene set and the verification scene set; the scene expression degree is used for representing the degree to which the scene to be tested represents the verification scene.
The implementation processes of (1) to (10) above may specifically refer to the description of the foregoing embodiments; their implementation principles and technical effects are similar and are not described herein again. An illustrative sketch of the intersection and scene expression degree computation follows.
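To make steps (9) and (10) concrete, the sketch below represents each scene subset as a hashable tuple so that set intersection applies directly, and computes the scene expression degree as the ratio of the intersection size to the size of the verification scene set. Both the tuple encoding and the ratio formula are assumptions made for illustration; the embodiment does not fix a specific formula here.

def scene_expression_degree(test_scene_set: set, verification_scene_set: set) -> float:
    # Expression degree as |test ∩ verification| / |verification| (an assumed formula).
    if not verification_scene_set:
        return 0.0
    intersection = test_scene_set & verification_scene_set
    return len(intersection) / len(verification_scene_set)

# Hypothetical scene subsets, each identified by (sub-topology path id, left bin, right bin).
test_scene_set = {("p1", 1, 1), ("p1", 2, 2), ("p2", 1, 2)}
verification_scene_set = {("p1", 1, 1), ("p1", 2, 2), ("p2", 1, 2), ("p2", 3, 3)}
print(scene_expression_degree(test_scene_set, verification_scene_set))  # 0.75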
It should be understood that, although the steps in the flowcharts of Figs. 2, 5-7 and 9-11 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in Figs. 2, 5-7 and 9-11 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, there is provided a scene representation apparatus including: a scene set acquiring module 11, an intersection acquisition module 12 and a scene expression degree generating module 13, wherein:
a scene set acquiring module 11, configured to acquire a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested; the verification scene is a real scene of the scene to be tested;
the intersection acquisition module 12 is configured to acquire an intersection scene set between the test scene set and the verification scene set;
a scene expression degree generating module 13, configured to generate a scene expression degree according to the intersection scene set and the verification scene set; the scene expression degree is used for representing the degree to which the scene to be tested represents the verification scene.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
In one embodiment, the scene to be tested is a road test scene; the verification scene is a road verification scene; the scene set acquisition module 11 includes: a scene set acquisition unit, wherein:
a scene set acquisition unit, configured to obtain a test scene set of the scene to be tested by executing a preset scene set acquisition step on the road test scene, and to obtain a verification scene set of the verification scene by executing the scene set acquisition step on the road verification scene.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the scene set acquiring unit includes: a dataset acquisition subunit and a partition processing subunit, wherein:
the data set acquisition subunit is used for acquiring a data set of the robot in a target scene; the data set comprises attribute information of all topological paths in the target scene; the target scene is a road test scene or a road verification scene;
and the division processing subunit is used for carrying out division processing on the corresponding topological paths according to the attribute information of each topological path to obtain a scene set of the target scene.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the division processing subunit includes: a road section dividing subunit, an interval division subunit and a scene set determining subunit, wherein:
the road section dividing subunit is used for performing road section dividing processing on each topological path according to the attribute information of each topological path to obtain sub-topological paths corresponding to each topological path;
the interval division subunit is used for carrying out interval division processing on each sub-topology path in the topology paths aiming at any topology path to obtain a scene subset corresponding to each sub-topology path in the topology paths;
and the scene set determining subunit is used for determining a set formed by scene subsets of all the topological paths in the target scene as a scene set of the target scene.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the attribute information includes a crossing state between topology paths; the road section dividing subunit includes: a node number determining subunit and a path segmentation subunit, wherein:
the node number determining subunit is used for determining the total number of path crossing nodes on each topological path according to the crossing state between each topological path and other topological paths;
and the path segmenting subunit is used for segmenting the corresponding topological paths according to the positions of the path cross nodes in each topological path when the total number is greater than a preset threshold value, so as to obtain the sub-topological paths corresponding to each topological path.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the attribute information further includes the distances from each point on the center line of each sub-topology path to the left and right edges; the interval division subunit includes: a distance set acquisition subunit and a scene subset determination subunit, wherein:
the distance set acquisition subunit is used for acquiring a left and right distance set of each sub-topology path according to the distance from each point on the central line of each sub-topology path to the edges of the left and right sides; the left and right distance set comprises the distance from each point on the central line of each sub-topology path to the left and right edges;
and the scene subset determining subunit is used for carrying out interval division processing on the left and right spacing set of each sub-topology path according to a preset interval to obtain a scene subset corresponding to each sub-topology path.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the scene subset determination subunit is specifically configured to perform interval division processing on the left distance set of each sub-topology path according to a preset interval to obtain a left scene subset, perform interval division processing on the right distance set of each sub-topology path according to the preset interval to obtain a right scene subset, and combine the left scene subsets with the corresponding right scene subsets to obtain the scene subset corresponding to each sub-topology path.
The scene representation apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
In one embodiment, as shown in fig. 13, there is provided a model quality quantifying apparatus including: a test index obtaining module 21, a scene expression degree obtaining module 22, a verification index obtaining module 23, and a quantization value determining module 24, wherein:
the test index acquisition module 21 is configured to input a test set of a test scenario into a model to be quantized to obtain a test index;
a scene expression degree obtaining module 22, configured to obtain a scene expression degree between the test scene and the verification scene through the steps of the scene expression method in any embodiment corresponding to the foregoing fig. 2, fig. 5 to 7, and fig. 9 to 11; the verification scene is a real scene corresponding to the test scene;
the verification index acquisition module 23 is configured to generate a verification index of the model to be quantized according to the scene expression level and the test index;
and the quantized value determining module 24 is configured to determine a quantized value of the quality of the model to be quantized according to the verification index and the test index.
The model quality quantization apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
For specific limitations of the scene representation apparatus and the model quality quantification apparatus, reference may be made to the above limitations of the scene representation method and the model quality quantification method, which are not described herein again. The respective modules in the scene representation apparatus and the model quality quantification apparatus described above may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 1. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing a data set of the target scene. The network interface of the computer device is used for communicating with an external endpoint through a network connection. The computer program is executed by a processor to implement a scene representation method and a model quality quantification method.
It will be appreciated by those skilled in the art that the structure shown in Fig. 1 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested; the verification scene is a real scene of the scene to be tested;
acquiring an intersection scene set between the test scene set and the verification scene set;
generating a scene expression degree according to the intersection scene set and the verification scene set; the scene expression degree is used for representing the degree to which the scene to be tested represents the verification scene.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
inputting a test set of a test scene into a model to be quantized to obtain a test index;
obtaining a scene expression degree between a test scene and a verification scene through the steps of the scene expression method in any embodiment corresponding to the above fig. 2, fig. 5 to 7 and fig. 9 to 11; the verification scene is a real scene corresponding to the test scene;
generating a verification index of the model to be quantized according to the scene expression degree and the test index;
and determining the quantization value of the quality of the model to be quantized according to the verification index and the test index.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of:
acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested; the verification scene is a real scene of the scene to be tested;
acquiring an intersection scene set between the test scene set and the verification scene set;
generating a scene expression degree according to the intersection scene set and the verification scene set; the scene expression degree is used for representing the degree to which the scene to be tested represents the verification scene.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of:
inputting a test set of a test scene into a model to be quantized to obtain a test index;
obtaining a scene expression degree between a test scene and a verification scene through the steps of the scene expression method in any embodiment corresponding to fig. 2, fig. 5-7 and fig. 9-11; the verification scene is a real scene corresponding to the test scene;
generating a verification index of the model to be quantized according to the scene expression degree and the test index;
and determining the quantization value of the quality of the model to be quantized according to the verification index and the test index.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested; the verification scene is a real scene of the scene to be tested;
acquiring an intersection scene set between the test scene set and the verification scene set;
generating a scene expression degree according to the intersection scene set and the verification scene set; the scene expression degree is used for representing the degree to which the scene to be tested represents the verification scene.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
inputting a test set of a test scene into a model to be quantized to obtain a test index;
obtaining a scene expression degree between a test scene and a verification scene through the steps of the scene expression method in any embodiment corresponding to fig. 2, fig. 5-7 and fig. 9-11; the verification scene is a real scene corresponding to the test scene;
generating a verification index of the model to be quantized according to the scene expression degree and the test index;
and determining the quantization value of the quality of the model to be quantized according to the verification index and the test index.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for scene representation, the method comprising:
acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested; the verification scene is a real scene of the scene to be tested;
acquiring an intersection scene set between the test scene set and the verification scene set;
generating a scene expression degree according to the intersection scene set and the verification scene set; the scene expression degree is used for representing the degree to which the scene to be tested represents the verification scene.
2. The method of claim 1, wherein the scene to be tested is a road test scene; the verification scene is a road verification scene;
the acquiring a test scene set of a scene to be tested and a verification scene set of a verification scene corresponding to the scene to be tested includes:
and executing a preset scene set acquisition step on the road test scene to obtain a test scene set of the scene to be tested, and executing the scene set acquisition step on the road verification scene to obtain a verification scene set of the verification scene.
3. The method of claim 2, wherein the scene set acquisition step comprises:
acquiring a data set of the robot in a target scene; the data set comprises attribute information of all topological paths in the target scene; the target scene is the road test scene or the road verification scene;
and according to the attribute information of each topological path, dividing the corresponding topological path to obtain a scene set of the target scene.
4. The method according to claim 3, wherein the dividing, according to the attribute information of each topological path, the corresponding topological path to obtain a scene set of the target scene includes:
according to the attribute information of each topological path, performing road section division processing on each topological path to obtain sub-topological paths corresponding to each topological path;
for any topological path, carrying out interval division processing on each sub-topological path in the topological path to obtain a scene subset corresponding to each sub-topological path in the topological path;
and determining a set formed by scene subsets of all topological paths in the target scene as a scene set of the target scene.
5. The method of claim 4, wherein the attribute information includes a crossing status between topological paths; the step of performing road segment division processing on each topological path according to the attribute information of each topological path to obtain a sub-topological path corresponding to each topological path includes:
determining the total number of path crossing nodes on each topological path according to the crossing state between each topological path and other topological paths;
and if the total number is greater than a preset threshold value, segmenting the corresponding topological paths according to the positions of the path cross nodes in the topological paths to obtain sub-topological paths corresponding to the topological paths.
6. The method according to claim 4 or 5, wherein the attribute information further comprises distances from points on a center line of each sub-topology path to left and right edges;
the performing interval division processing on each sub-topology path in the topology path to obtain a scene subset corresponding to each sub-topology path in the topology path includes:
acquiring a left-right distance set of each sub-topology path according to the distance from each point on the central line of each sub-topology path to the left-right side edges; the left and right distance set comprises the distance from each point on the central line of each sub-topology path to the edges of the left and right sides;
and according to a preset interval, carrying out interval division processing on the left and right distance set of each sub-topology path to obtain a scene subset corresponding to each sub-topology path.
7. The method according to claim 6, wherein the performing interval division processing on the left and right distance sets of each sub-topology path according to a preset interval to obtain a scene subset corresponding to each sub-topology path comprises:
according to the preset interval, performing interval division processing on the left distance set of each sub-topology path to obtain a left scene subset, and according to the preset interval, performing interval division processing on the right distance set of each sub-topology path to obtain a right scene subset;
and combining the left scene subsets with the corresponding right scene subsets to obtain the scene subset corresponding to each sub-topology path.
8. A method for quantifying quality of a model, the method comprising:
inputting a test set of a test scene into a model to be quantized to obtain a test index;
acquiring a scene expression degree between the test scene and the verification scene through the steps of the scene representation method according to any one of claims 1 to 7; the verification scene is a real scene corresponding to the test scene;
generating a verification index of the model to be quantized according to the scene expression degree and the test index;
and determining the quantization value of the quality of the model to be quantized according to the verification index and the test index.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any of claims 1-8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202211252727.XA 2022-10-13 2022-10-13 Scene expression method, model quality quantization method, computer device, and medium Pending CN115509933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211252727.XA CN115509933A (en) 2022-10-13 2022-10-13 Scene expression method, model quality quantization method, computer device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211252727.XA CN115509933A (en) 2022-10-13 2022-10-13 Scene expression method, model quality quantization method, computer device, and medium

Publications (1)

Publication Number Publication Date
CN115509933A true CN115509933A (en) 2022-12-23

Family

ID=84509416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211252727.XA Pending CN115509933A (en) 2022-10-13 2022-10-13 Scene expression method, model quality quantization method, computer device, and medium

Country Status (1)

Country Link
CN (1) CN115509933A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination