CN116625409B - Dynamic positioning performance evaluation method, device and system


Info

Publication number: CN116625409B (grant); earlier publication: CN116625409A
Application number: CN202310863675.8A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 陈震, 李桓
Applicant and assignee: Xiangke Intelligent Technology Beijing Co ltd
Legal status: Active (application granted)
Prior art keywords: positioning, value, error, camera, dynamic

Classifications

    • G01C 25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01M 11/00 Testing of optical apparatus; testing structures by optical methods not otherwise provided for
    • G06N 3/04 Neural networks; architecture, e.g. interconnection topology
    • G06N 3/084 Learning methods; backpropagation, e.g. using gradient descent
    • G06T 7/70 Image analysis; determining position or orientation of objects or cameras
    • G06T 2207/10016 Image acquisition modality; video; image sequence


Abstract

The present application provides a dynamic positioning performance evaluation method, device, and system. The method includes the following steps: setting a driving mechanism and a vision module, and obtaining position information data of the vision module corresponding to a first timestamp; setting a calibration module, and acquiring images of the calibration module under preset conditions through the vision module to obtain positioning information data and a positioning timestamp corresponding to each image; comparing the first timestamp with the positioning timestamp using a dynamic displacement compensation algorithm to calculate a plurality of positioning errors; analyzing the weights of preset parameters using a dynamic positioning performance evaluation algorithm to obtain corresponding evaluation values, wherein the dynamic positioning performance evaluation algorithm is built on an expert system and a gray clustering evaluation model; and designing and training a BP neural network model, and inputting the evaluation values into the trained BP neural network model to output a dynamic positioning performance evaluation scheme.

Description

Dynamic positioning performance evaluation method, device and system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a dynamic positioning performance evaluation method, device, and system.
Background
With the continuous development of artificial intelligence, visual positioning technology is increasingly widely applied. It captures a target image (such as a tag image) within the camera's field of view and converts the image into target data for transmission by an image processing system. Visual positioning must be effective, which places high demands on positioning speed and accuracy.
Compared with a static scene, in a dynamic positioning scene the relative motion between the camera and the tag greatly increases the difficulty of calibrating tag positioning results and evaluating performance, and makes it harder to select suitable camera and tag specifications. Therefore, how to find an optimal dynamic positioning performance evaluation scheme in a dynamic scene that meets the requirements of visual positioning has become a problem to be solved.
It should be appreciated that the description in this background section is only for aiding in the understanding of the disclosed aspects of the application and is not necessarily prior art prior to the filing date of this application.
Disclosure of Invention
In one aspect, the present application provides a dynamic positioning performance evaluation method, including: setting a driving mechanism and a vision module, wherein the driving mechanism is configured to control the movement of the vision module, so that position information data of the vision module corresponding to a first timestamp is calculated from feedback information of the driving mechanism; setting a calibration module, and acquiring images of the calibration module under preset conditions through the vision module to obtain positioning information data and a positioning timestamp corresponding to each image, wherein the vision module and the calibration module move relative to each other; comparing the first timestamp with the positioning timestamp using a dynamic displacement compensation algorithm to calculate a plurality of positioning errors; analyzing the weights of preset parameters using a dynamic positioning performance evaluation algorithm to obtain corresponding evaluation values, wherein the preset parameters include the plurality of positioning errors, and the dynamic positioning performance evaluation algorithm is built on an expert system and a gray clustering evaluation model; and designing and training a BP neural network model, and inputting the evaluation values into the trained BP neural network model to output a dynamic positioning performance evaluation scheme.
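Purely as an illustration of how these five steps chain together, the following Python sketch shows a hypothetical orchestration; none of the function names or signatures appear in the application itself, and each callable stands in for one of the modules described below.

```python
from typing import Callable, Dict, List, Sequence

def evaluate_dynamic_positioning(
    drive_log: Sequence[tuple],      # (t1, P1, v, a) samples from the drive mechanism
    tag_fixes: Sequence[tuple],      # (t2, positioning data) from the vision module
    compensate: Callable[..., Dict[str, List[float]]],  # dynamic displacement compensation
    gray_evaluate: Callable[[Dict], Dict[str, float]],  # expert system + gray clustering
    bp_predict: Callable[[Dict[str, float]], str],      # trained BP neural network model
) -> str:
    """Hypothetical pipeline: S1300 errors -> S1400 evaluation values -> S1500 scheme."""
    errors = compensate(drive_log, tag_fixes)        # step S1300
    evaluations = gray_evaluate(errors)              # step S1400
    return bp_predict(evaluations)                   # step S1500: evaluation scheme
```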
In one embodiment, the vision module comprises a camera; the driving mechanism comprises a motor and a driving belt, the motor drives the driving belt to move, and the camera is positioned on the driving belt and moves together with the driving belt; and the calibration module comprises a label, and the positioning error comprises a label positioning error, wherein the label and the camera move relatively in the process of the joint movement of the camera and the driving belt.
In one embodiment, the step of setting the preset condition includes: providing a plurality of said labels having different sizes; and configuring a shooting specification of the camera to different resolutions and/or different frame rates to acquire images of the tag of different sizes under the shooting specification.
In one embodiment, the plurality of positioning errors includes: a first positioning error in a first direction, a second positioning error in a second direction, and a third positioning error in a third direction, wherein the first direction, the second direction, and the third direction are perpendicular to one another; and the step of obtaining the first positioning error includes: obtaining the movement speed, the movement acceleration, and the first timestamp of the camera at a first position from the feedback information of the driving mechanism, and calculating a first position parameter of the camera; inputting the first position parameter, the first timestamp, the movement speed, and the movement acceleration into the dynamic displacement compensation algorithm; inputting the positioning timestamp, wherein the first position is the position whose time deviation from the positioning timestamp of the tag is smallest; and calculating the difference between the positioning timestamp and the first timestamp, and calculating a second position parameter of the camera corresponding to the positioning timestamp, wherein the first positioning error is obtained based on the difference between the second position parameter and the positioning information data corresponding to the positioning timestamp.
In one embodiment, the step of obtaining the second positioning error and the third positioning error comprises: measuring to obtain a first positioning measurement value and a second positioning measurement value of the camera at the first position; obtaining a first positioning calculation value and a second positioning calculation value based on a self-recognition visual marker positioning algorithm; and comparing the first positioning calculation value with the first positioning measurement value to obtain the second positioning error, and comparing the second positioning calculation value with the second positioning measurement value to obtain the third positioning error, wherein the first positioning measurement value is a measurement value along the second direction, the second positioning measurement value is a measurement value along the third direction, the first positioning calculation value is a calculation value along the second direction, and the second positioning calculation value is a calculation value along the third direction.
In one embodiment, the preset parameters further include a positioning calculation speed average, and the step of analyzing the weights of the preset parameters to obtain the corresponding evaluation values further includes: analyzing the weight of the positioning calculation speed average using the dynamic positioning performance evaluation algorithm to obtain a corresponding evaluation value of the positioning calculation speed average; and analyzing the weights of the positioning errors using the dynamic positioning performance evaluation algorithm to obtain a corresponding evaluation value for each positioning error.
In one embodiment, the first positioning error includes a first error mean, a first error standard deviation, a first maximum error value, a first minimum error value, a first error absolute value maximum, a first error absolute value minimum; the second positioning error comprises a second error mean value, a second error standard deviation, a second maximum error value, a second minimum error value, a second error absolute value maximum value and a second error absolute value minimum value; and the third positioning error comprises a third error mean value, a third error standard deviation, a third maximum error value, a third minimum error value, a third error absolute value maximum value and a third error absolute value minimum value.
In one embodiment, the dynamic positioning performance evaluation algorithm based on expert system and gray cluster evaluation model establishment includes: dividing the influence degree of the performance index of the preset parameter on the dynamic positioning performance evaluation scheme into a plurality of grades, and setting a corresponding score for each grade; determining a sample matrix of performance index scores of the experts in the expert system on the preset parameters; determining gray class and constructing a probability function, and calculating a gray evaluation matrix; and outputting a gray cluster analysis result.
In one embodiment, the BP neural network model includes an input layer, a hidden layer, and an output layer, and the steps of designing and training the BP neural network model include: setting an initial weight parameter and a critical value of the BP neural network model, wherein the initial weight parameter and the critical value are random values; inputting the preset parameters and the corresponding evaluation values thereof to calculate to obtain an intermediate output value; the intermediate output value is processed through an excitation function to obtain an output value of the hidden layer; the output value of the hidden layer is input to the output layer after being calculated and processed, and the calculated evaluation value is output after being processed by the excitation function so as to complete forward propagation; adopting a gradient descent method to adjust weights and thresholds of the input layer, the hidden layer and the output layer so as to finish back propagation; and reciprocally performing the forward propagation and the backward propagation until the error is less than a preset desired value.
In one embodiment, the camera captures label images for more than one round under the preset conditions, wherein one round is the camera starting from an initial position, reciprocating once along the conveyor-belt direction, and returning to the initial position.
Another aspect of the present application provides a dynamic positioning performance evaluation apparatus, including: a calibration module; the visual module is used for acquiring the image of the calibration module under the preset condition to obtain positioning information data corresponding to the image and the positioning time stamp, wherein the visual module and the calibration module move relatively; the driving mechanism is configured to control the movement of the vision module so as to calculate position information data of the vision module corresponding to the first time stamp through feedback information of the driving mechanism; the first algorithm module is used for comparing the first timestamp with the positioning timestamp based on a dynamic displacement compensation algorithm so as to calculate and obtain various positioning errors; the second algorithm module is used for analyzing the weight of preset parameters based on a dynamic positioning performance evaluation algorithm to obtain corresponding evaluation values, wherein the preset parameters comprise a plurality of positioning errors, and the dynamic positioning performance evaluation algorithm is established based on an expert system and a gray clustering evaluation model; and the trained BP neural network model is used for processing the evaluation value and outputting a dynamic positioning performance evaluation scheme.
In one embodiment, the vision module comprises a camera; the driving mechanism comprises a motor and a driving belt, the motor drives the driving belt to move, and the camera is positioned on the driving belt and moves together with the driving belt; and the calibration module comprises a label, and the positioning error comprises a label positioning error, wherein the label and the camera move relatively in the process of the joint movement of the camera and the driving belt.
In one embodiment, the preset condition includes: the number of the labels is a plurality of, and the labels are respectively of different sizes; and the shooting specification of the camera is configured to be different in resolution and/or different in frame rate, wherein the camera acquires the image of the tag under the shooting specification.
In one embodiment, the preset parameters further include a location calculation speed average.
In still another aspect, the present application provides a dynamic positioning performance evaluation system, including: the dynamic positioning performance evaluation device according to any one of the above; and a mechanical arm configured to complete a preset action, wherein the preset action is completed under the dynamic positioning performance evaluation scheme output by the dynamic positioning performance evaluation device.
The dynamic performance evaluation method provided by the application can have at least one of the following beneficial effects:
according to the method in some embodiments of the application, the position of the vision module at a given moment during its movement can be accurately measured;
according to the method in some embodiments of the application, the position information data of the vision module and the positioning information data of the calibration module are accurately synchronized in time, reducing measurement deviation caused by temporal misalignment between the two;
according to the method in some embodiments of the application, automatic calibration of the positioning of the calibration module under preset conditions is realized by a dynamic displacement compensation algorithm, reducing the workload of manual calibration; and
according to the method in some embodiments of the application, the optimal specification scheme for the vision module and the calibration module in a specified dynamic scene can be rapidly output through the dynamic positioning performance evaluation algorithm, greatly reducing the workload of manual data analysis and scheme selection.
Drawings
Other features, objects and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 is a flowchart of a dynamic positioning performance evaluation method according to an exemplary embodiment of the present application;
FIG. 2 is a functional schematic of the modules in the dynamic positioning performance evaluation method according to an exemplary embodiment of the present application;
fig. 3 is a schematic structural view of a dynamic positioning performance evaluation apparatus according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of tag positioning according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a BP neural network model according to an exemplary embodiment of the application; and
Fig. 6 is a dynamic performance evaluation flowchart of a dynamic positioning performance evaluation apparatus according to an exemplary embodiment of the present application.
Detailed Description
For a better understanding of the application, various aspects of the application will be described in more detail with reference to the accompanying drawings. It should be understood that the detailed description is merely illustrative of exemplary embodiments of the application and is not intended to limit the scope of the application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that in this specification, the expressions "first", "second", "third", etc. are used only to distinguish one feature from another feature, and do not denote any limitation of the features, particularly do not denote any order of precedence. Thus, a first direction discussed in this disclosure may also be referred to as a second direction and vice versa without departing from the teachings of the present disclosure.
In the description, references to "one embodiment," "an example embodiment," "some embodiments," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In the drawings, the thickness, size, and shape of the components have been slightly adjusted for convenience of description. The figures are merely examples and are not drawn to scale. For example, the dimensions of the labels drawn in the figures are not the scales used in actual production. As used herein, "about," "approximately," and similar terms are used as terms of approximation, not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by one of ordinary skill in the art.
It should be understood that expressions such as "comprising," "including," "having," and/or "containing" are open-ended rather than closed-ended, and indicate the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. Furthermore, when a statement such as "at least one of the following" appears after a list of features, it modifies the entire list of features rather than just the individual elements in the list. Furthermore, when describing embodiments of the application, use of "may" means "one or more embodiments of the application." Also, the term "exemplary" is intended to refer to an example or illustration.
It will also be understood that terms such as "on", "over", and "above" should be interpreted in the broadest sense, such that "on" means not only "directly on" something but also includes being "on" something with an intermediate feature or layer therebetween, while "over" or "above" means not only being "over" or "above" something but can also include being "over" or "above" it with no intermediate feature or layer therebetween (i.e., directly on something).
Unless otherwise defined, all terms (including engineering and technical terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present application pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. In addition, unless explicitly defined or contradicted by context, the particular steps included in the methods described herein need not be limited to the order described, but may be performed in any order or in parallel.
The features, principles, and other aspects of the present application are described in detail below.
The inventors of the present application have found that, compared with a static scene, the dynamic positioning of tags in a dynamic scene, where the camera and tag move relative to each other, faces a number of problems. For example:
first, accurately measuring the position of a camera or tag while it is moving is difficult to do manually;
second, time synchronization between the camera's positioning of the tag and the camera's own position is difficult to ensure, and the camera position and the positioning data easily become misaligned in time, which increases the positioning error;
third, for a specified dynamic scene and a specified camera movement speed, the system conditions may include different tag sizes and camera specifications shooting at different resolutions and/or frame rates, which makes the measurement workload for tag positioning errors large;
fourth, in a dynamic scene, the tag positioning error may fluctuate in different forms due to systematic errors, random errors, and the like; on the other hand, because the various positioning performance indices (including positioning calculation speed, positioning error mean, positioning error fluctuation, and the like) constrain and influence one another, different scenes place different emphases on the various positioning performance indices.
The application provides a dynamic positioning performance evaluation method, which can find out an optimal dynamic positioning performance evaluation scheme under a dynamic scene to meet the requirement of visual positioning, thereby at least partially improving or solving the problems.
Fig. 1 is a flowchart of a dynamic positioning performance evaluation method 1000 according to an embodiment of the present application. As shown in fig. 1, the present application provides a dynamic positioning performance evaluation method 1000, which includes:
step S1100, setting a driving mechanism and a vision module, wherein the driving mechanism is configured to control the movement of the vision module so as to obtain position information data of the vision module corresponding to the first timestamp through feedback information calculation of the driving mechanism;
step S1200, setting a calibration module, and acquiring images of the calibration module under preset conditions through the vision module to obtain positioning information data and a positioning timestamp corresponding to the images, wherein the vision module and the calibration module move relative to each other;
step S1300, comparing the first time stamp with the positioning time stamp by adopting a dynamic displacement compensation algorithm to calculate and obtain various positioning errors;
step S1400, analyzing weights of preset parameters to obtain corresponding evaluation values by adopting a dynamic positioning performance evaluation algorithm, wherein the preset parameters comprise various positioning errors, and the dynamic positioning performance evaluation algorithm is built based on an expert system and a gray clustering evaluation model; and
Step S1500, designing and training a BP neural network model, and inputting the evaluation value into the trained BP neural network model to output a dynamic positioning performance evaluation scheme.
It should be understood that the steps shown in method 1000 are not exclusive and that other steps may be performed before, after, or between any of the steps shown. Further, some of the illustrated steps may be performed concurrently or may be performed in a different order than that shown in fig. 1. The above steps S1100 to S1500 are further described below in conjunction with fig. 1 to 5.
Step S1100, setting a driving mechanism and a vision module, wherein the driving mechanism is configured to control the movement of the vision module, and position information data of the vision module corresponding to the first timestamp is calculated from the feedback information of the driving mechanism.
Fig. 2 is a functional schematic diagram of each module in the dynamic positioning performance evaluation method according to an exemplary embodiment of the present application. Referring to fig. 2 and 3, in step S1100, a driving mechanism 110 and a vision module 120 may be provided, and the driving mechanism 110 is configured to control the movement of the vision module 120. In some embodiments, an upper computer 140 may further be provided; the driving mechanism 110 may feed back its own working information to the upper computer 140, which processes the information to obtain the position information data P1 of the vision module 120 corresponding to the first timestamp t1. This facilitates accurate measurement of the position information of the vision module 120 in a dynamic scene.
The first timestamp t1 is the unique datum corresponding to the first time, generated by the upper computer 140 based on the working information fed back by the driving mechanism 110.
Fig. 3 is a schematic structural view of a dynamic positioning performance evaluation apparatus according to an exemplary embodiment of the present application. As shown in fig. 3, the driving mechanism 110 may include a motor 111 and a driving belt 112, and the vision module 120 may include a camera 121. The motor 111 is used for driving the motion of the driving belt 112, and the camera 121 is disposed on the driving belt 112. Illustratively, the camera 121 is fixed to the conveyor belt and remains co-moving with the conveyor belt 112 as it moves.
In some embodiments, the motor 111 starts to operate after receiving the start command, and drives the belt 112 to drive in the y direction, for example, while the belt 112 drives the camera 121 to move together. The motor 111 may feed back its own operation information to the host computer 140, and calculate and obtain the position information data P1 of the camera 121 corresponding to the first timestamp t1 via the host computer 140. Illustratively, the motor 111 comprises a stepper motor and the conveyor 112 comprises a belt.
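As a concrete sketch of how the upper computer 140 could stamp the drive-mechanism feedback on arrival, the following assumes a simple polling loop; the read_feedback interface, sample count, and polling period are assumptions rather than details given in the application.

```python
import time
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class DriveSample:
    t1: float   # first timestamp (s), generated by the upper computer on arrival
    p1: float   # camera position P1 along the belt direction (m), from motor feedback
    v: float    # movement speed (m/s)
    a: float    # movement acceleration (m/s^2)

def log_drive_feedback(read_feedback: Callable[[], Tuple[float, float, float]],
                       n_samples: int = 1000,
                       period_s: float = 0.01) -> List[DriveSample]:
    """Poll the motor's feedback and stamp each sample with the host clock."""
    log = []
    for _ in range(n_samples):
        p1, v, a = read_feedback()          # e.g. read from the stepper-motor driver
        log.append(DriveSample(time.time(), p1, v, a))
        time.sleep(period_s)
    return log
```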
Step S1200, setting a calibration module, and acquiring images of the calibration module under preset conditions through the vision module to obtain positioning information data and a positioning timestamp corresponding to the images, wherein the vision module and the calibration module move relative to each other.
With continued reference to fig. 2 and 3, a calibration module 130 may be provided, and an image of the calibration module 130 may be acquired by the vision module 120; the vision module 120 may transmit the acquired image information to the upper computer 140 to obtain the positioning information data P2 corresponding to the positioning timestamp t2. The positioning timestamp t2 is the unique datum corresponding to the second time, generated by the upper computer 140 based on the working information of the vision module 120. Illustratively, the calibration module 130 includes a label 131, which may be fixed to a substrate 160. The vision module 120 includes a camera 121, and the tag 131 and the camera 121 move relative to each other while the camera 121 moves together with the conveyor belt 112.
In a dynamic scenario, when the camera 121 needs to perform image acquisition at a first time, setting of its own acquisition mode (for example, acquiring images at a fixed frame rate) may cause the time that the camera 121 actually records the images to be delayed or advanced to a second time, thereby causing misalignment of the position and positioning data of the camera 121 in the time direction, and increasing the positioning error.
In some embodiments, preset conditions may be set and images of the tag 131 are acquired using the camera 121 under the preset conditions. The step of setting the preset condition may include: first, the number of the tags 131 is set to be plural, and the sizes of different tags 131 may be set to be different; second, the photographing specifications of the camera 121 are configured to different resolutions and/or different frame rates to capture images of the different-sized tags 131 under the different photographing specifications.
The camera 121 starts from an initial position, moves to a designated position along the y direction, then moves in the reverse y direction and returns to the initial position; one such out-and-back reciprocation of the camera 121 is recorded as one round. For example, the camera 121 may capture images of the tag 131 corresponding to a plurality of timestamps during one round, and the positioning timestamp t2 among them corresponds to a first image of the captured tag images. The upper computer 140 processes the first image and obtains the corresponding positioning information data P2, which corresponds to the positioning timestamp t2.
Illustratively, the tag 131 includes a first tag 1311, a second tag 1312, and a third tag 1313 arranged in sequence along the y-direction. The resolution specification of the camera 121 at the time of shooting includes at least one of 1920×1080, 1280×720, and 640×480, and the frame rate specification includes at least one of 15fps, 30fps, and 60 fps. The camera 121 may take at least one of the first, second, and third tags 1311, 1312, and 1313 for multiple rounds to acquire multiple sets of data.
In some rounds, the shooting specification of the camera 121 is set, for example, to a resolution of 1920×1080 and a frame rate of 30 fps, and the camera 121 captures images of the first tag 1311 under this specification; the first tag 1311 is, for example, a square with a side length of, for example, 0.05 m. In other rounds, the shooting specification is set, for example, to a resolution of 640×480 and a frame rate of 60 fps, and the camera 121 captures images of the second tag 1312, which is, for example, a square with a side length of, for example, 0.04 m. In still other rounds, the shooting specification is set, for example, to a resolution of 1280×720 and a frame rate of 60 fps, and the camera 121 captures images of the first tag 1311, which is, for example, a square with a side length of, for example, 0.05 m.
It should be noted that the description of the resolution, the frame rate, and the sizes of the first tag 1311 and the second tag 1312 in the shooting specification of the camera 121 in the context of the present application is merely illustrative, and not a limitation of the combination of the two, and those skilled in the art may select other suitable specifications for free combination according to different application conditions and different schemes.
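The combinations of shooting specification and tag size can be enumerated mechanically. A short sketch follows, assuming the example resolutions, frame rates, and tag sizes of this embodiment; any other combination is equally possible.

```python
from itertools import product

resolutions = [(1920, 1080), (1280, 720), (640, 480)]  # example camera resolutions
frame_rates = [15, 30, 60]                             # example frame rates (fps)
tag_sides_m = [0.05, 0.04]                             # example square tag side lengths

# One or more capture rounds are run per combination of shooting specification
# and tag size; each round is one out-and-back pass of the camera.
test_conditions = [
    {"resolution": r, "fps": f, "tag_side_m": s}
    for r, f, s in product(resolutions, frame_rates, tag_sides_m)
]
print(len(test_conditions))   # 3 resolutions x 3 frame rates x 2 tags = 18
```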
Step S1300, comparing the first timestamp with the positioning timestamp using a dynamic displacement compensation algorithm to calculate a plurality of positioning errors.
With continued reference to fig. 2, the upper computer 140 may include a first algorithm module 141, where the first algorithm module 141 is established based on a dynamic displacement compensation algorithm, and the first algorithm module 141 compares the first timestamp t1 with the positioning timestamp t2 and outputs a plurality of positioning errors.
In some embodiments, the plurality of positioning errors includes a first positioning error in a first direction, a second positioning error in a second direction, and a third positioning error in a third direction, the first direction, the second direction, and the third direction being perpendicular to each other. Illustratively, the first direction is parallel to the x-direction, the second direction is parallel to the y-direction, and the third direction is parallel to the z-direction.
Fig. 4 is a schematic diagram of tag positioning according to an exemplary embodiment of the present application. As shown in fig. 4, in the working state, the driving mechanism 110 feeds back its own working information to the upper computer 140; after processing by the upper computer 140, the movement speed v, the movement acceleration a, and the first timestamp t1 of the camera 121 at the first position A are obtained, along with the first position parameter P1 of the camera 121. The first position A is at a distance y1 from the tag 131 in the y direction and a distance z1 in the z direction. The first position parameter P1, the first timestamp t1, the movement speed v, and the movement acceleration a are input to the first algorithm module 141 and processed by the dynamic displacement compensation algorithm. In addition, the positioning timestamp t2 is also input to the first algorithm module 141.
In some embodiments, the dynamic displacement compensation algorithm includes searching for the camera-position timestamp closest to the positioning timestamp t2; this timestamp is the first timestamp t1. In other words, the first position A is the position whose time deviation from the positioning timestamp t2 of the tag 131 is smallest. The difference between the positioning timestamp t2 and the first timestamp t1 is Δt, which satisfies Δt = t2 − t1. From the movement speed v and acceleration a corresponding to the first position A and the difference Δt, the second position parameter P2 of the camera 121 corresponding to the positioning timestamp t2 can be calculated, which satisfies P2 = P1 + v·Δt + (1/2)·a·Δt². The first positioning error (including the positioning error in the x direction) is calculated based on the difference between the second position parameter P2 and the positioning information data of the first image.
In some embodiments, the step of obtaining the second positioning error (including the positioning error in the y direction) and the third positioning error (including the positioning error in the z direction) includes: measuring a first positioning measurement value and a second positioning measurement value of the camera 121 at the first position A; obtaining a first positioning calculation value and a second positioning calculation value based on a self-recognition visual marker positioning algorithm; and comparing the first positioning calculation value with the first positioning measurement value to obtain the second positioning error, and comparing the second positioning calculation value with the second positioning measurement value to obtain the third positioning error. Illustratively, the first positioning measurement value is a measurement along the second direction (y direction), the second positioning measurement value is a measurement along the third direction (z direction), the first positioning calculation value is a calculation along the second direction, and the second positioning calculation value is a calculation along the third direction.
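A minimal sketch of the dynamic displacement compensation and the three error computations described above, assuming a time-sorted drive log of (t1, P1, v, a) samples; all names are illustrative rather than taken from the application.

```python
from bisect import bisect_left
from typing import List, Tuple

def compensate_position(drive_log: List[Tuple[float, float, float, float]],
                        t2: float) -> float:
    """Extrapolate the camera position from the drive sample whose timestamp t1
    deviates least from the positioning timestamp t2 (the 'first position A')."""
    times = [s[0] for s in drive_log]
    i = bisect_left(times, t2)
    if i == len(times) or (i > 0 and t2 - times[i - 1] <= times[i] - t2):
        i -= 1                              # the earlier sample is the nearest
    t1, p1, v, a = drive_log[i]
    dt = t2 - t1                            # Δt = t2 − t1
    return p1 + v * dt + 0.5 * a * dt * dt  # P2 = P1 + v·Δt + ½·a·Δt²

def first_error(drive_log, t2: float, fix_from_image: float) -> float:
    """First positioning error: compensated position P2 vs. image-based fix."""
    return compensate_position(drive_log, t2) - fix_from_image

def second_third_errors(calc_y: float, meas_y: float,
                        calc_z: float, meas_z: float) -> Tuple[float, float]:
    """Second/third errors: marker-algorithm calculation vs. measurement."""
    return calc_y - meas_y, calc_z - meas_z
```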
The present application can acquire images of tags 131 of different sizes at different resolutions and frame rates over multiple rounds, and calculate the various positioning errors of the tags 131 using the dynamic displacement compensation algorithm. Parameters such as the different shooting specifications of the camera 121 and the different sizes of the tags 131 can be combined and processed automatically by the first algorithm module 141, reducing the otherwise large measurement workload for tag positioning errors and improving measurement efficiency.
Step S1400, analyzing the weights of the preset parameters using a dynamic positioning performance evaluation algorithm to obtain corresponding evaluation values, wherein the preset parameters include a plurality of positioning errors, and the dynamic positioning performance evaluation algorithm is built based on an expert system and a gray clustering evaluation model.
With continued reference to fig. 2, the upper computer 140 may further include a second algorithm module 142, which is built based on the dynamic performance evaluation algorithm; the second algorithm module 142 processes and analyzes the weights of the preset parameters. Illustratively, the preset parameters include a plurality of positioning errors, for example a first positioning error, a second positioning error, and a third positioning error. Further, the first positioning error includes a first error mean, a first error standard deviation, a first maximum error value, a first minimum error value, a first error absolute value maximum, and a first error absolute value minimum; the second positioning error includes a second error mean, a second error standard deviation, a second maximum error value, a second minimum error value, a second error absolute value maximum, and a second error absolute value minimum; and the third positioning error includes a third error mean, a third error standard deviation, a third maximum error value, a third minimum error value, a third error absolute value maximum, and a third error absolute value minimum. The weights of the various positioning errors can be analyzed using the dynamic positioning performance evaluation algorithm to obtain a corresponding evaluation value for each positioning error.
It can be understood that the first error mean is the mean tag positioning error in the x direction, the second error mean the mean in the y direction, and the third error mean the mean in the z direction;
the first error standard deviation is the standard deviation of the tag positioning error in the x direction, the second that in the y direction, and the third that in the z direction;
the first maximum error value is the maximum tag positioning error in the x direction, the second that in the y direction, and the third that in the z direction;
the first minimum error value is the minimum tag positioning error in the x direction, the second that in the y direction, and the third that in the z direction;
the first error absolute value maximum is the maximum absolute tag positioning error in the x direction, the second that in the y direction, and the third that in the z direction; and
the first error absolute value minimum is the minimum absolute tag positioning error in the x direction, the second that in the y direction, and the third that in the z direction.
In some embodiments, the preset parameters further include a positioning calculation speed average of the tag 131. Analyzing the weights of the preset parameters then further includes analyzing the weight of the positioning calculation speed average using the dynamic positioning performance evaluation algorithm to obtain its corresponding evaluation value.
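As an illustration of how the 19 preset parameters (the positioning calculation speed average plus six statistics for each of the three axes) could be assembled from raw error samples, a NumPy sketch follows; the function and key names are assumptions, not terms from the application.

```python
import numpy as np

def error_statistics(errors) -> dict:
    """The six statistics listed above: mean, standard deviation, maximum,
    minimum, maximum absolute value, and minimum absolute value."""
    e = np.asarray(errors, dtype=float)
    return {"mean": e.mean(), "std": e.std(),
            "max": e.max(), "min": e.min(),
            "abs_max": np.abs(e).max(), "abs_min": np.abs(e).min()}

def preset_parameters(x_err, y_err, z_err, calc_times) -> dict:
    """1 positioning-calculation speed average + 6 statistics x 3 axes = 19."""
    params = {"calc_speed_mean": float(np.mean(calc_times))}
    for axis, err in (("x", x_err), ("y", y_err), ("z", z_err)):
        for name, value in error_statistics(err).items():
            params[f"{axis}_{name}"] = float(value)
    return params  # len(params) == 19
```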
The dynamic positioning performance evaluation algorithm is built based on an expert system and a gray clustering evaluation model. The degree to which the performance index of each preset parameter influences the dynamic positioning performance evaluation scheme can be divided into several grades, with a corresponding score set for each grade; a sample matrix of the experts' scores for the performance indices of the preset parameters and the gray classes are determined, the probability functions are constructed, the gray evaluation matrix is calculated, and the gray cluster analysis result is finally output. The calculation flow takes as input the preset parameters together with the camera resolution, frame rate, and tag size data. The weights of the positioning performance indices can thus be analyzed through the expert system and the gray clustering evaluation model to obtain the analysis result.
Illustratively, the influence degree of each performance index R_ij is classified into 5 grades: very low, low, medium, high, and very high. Values in [0, 1] are used to score the corresponding grades and represent the magnitude of the influence. Table 1 shows the scores corresponding to the influence grades (an influence degree falling between two grades is expressed as a value between the two scores; values below 0.1 or above 0.9 are possible, with a minimum of 0 and a maximum of 1).

| Grade | Very high | High | Medium | Low | Very low |
| Score | 0.9       | 0.7  | 0.5    | 0.3 | 0.1      |

Table 1
The performance indices R_ij (i = 1, 2, ..., n; j = 1, 2, ..., q) can be evaluated by w experts, where j indexes the factors included in the i-th performance index. The evaluation observation of the m-th expert (m = 1, 2, ..., w) on performance index R_ij is d_ijm, and the sample matrix D formed by the w experts is:

        ⎡ d_111  d_112  ...  d_11w ⎤
    D = ⎢ d_121  d_122  ...  d_12w ⎥
        ⎢  ...    ...   ...   ...  ⎥
        ⎣ d_nq1  d_nq2  ...  d_nqw ⎦

Because expert judgment involves subjective randomness, the bases of the pairwise comparison judgment matrices may be inconsistent or missing, and it is difficult for an expert to determine the indirect influence relations between the performance indices R_ij. The present application therefore uses the set of relations among the performance indices R_ij together with the gray cluster analysis model to correct the indirect relations between factors that are difficult for the expert to determine, reducing to some extent the error caused by subjective expert judgment.
In addition, gray cluster analysis places low requirements on samples: no particular distribution of the sample data is required, and small samples of a few to a few dozen observations can reflect the actual situation well, with relatively uncomplicated calculation. A gray weighted clustering analysis method can therefore be adopted: first determine the gray classes and construct the probability (whitenization) functions, calculate the gray evaluation coefficients, and determine the gray evaluation matrix; then calculate the weight corresponding to each performance index R_ij as the fixed-weight coefficient of each gray class in the dynamic positioning process. Through the composite operation of the evaluation matrix and the fixed-weight coefficients, the subjective linguistic evaluations of the performance indices R_ij are converted into objective numbers, so that risks that cannot be quantified subjectively are quantified through fuzzy language into comparable objective values, realizing quantitative evaluation of dynamic positioning.
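A minimal sketch of this gray fixed-weight clustering step follows. The application does not spell out the whitenization (probability) functions, so simple triangular functions centered on the grade scores of Table 1 are assumed here; all names are illustrative.

```python
import numpy as np

GRADE_SCORES = [0.1, 0.3, 0.5, 0.7, 0.9]   # very low ... very high, per Table 1

def whitenization(d: float, center: float, half_width: float = 0.2) -> float:
    """Assumed triangular whitenization function for one gray class."""
    return max(0.0, 1.0 - abs(d - center) / half_width)

def gray_evaluation(sample_matrix) -> tuple:
    """sample_matrix D: shape (n_indices, w_experts), entries d_ijm in [0, 1].
    Returns the gray evaluation matrix B and the normalized index weights."""
    D = np.asarray(sample_matrix, dtype=float)
    # gray evaluation coefficients: membership of each index in each gray class,
    # summed over the w experts
    B = np.array([[sum(whitenization(d, c) for d in row) for c in GRADE_SCORES]
                  for row in D])
    B = B / B.sum(axis=1, keepdims=True)   # gray evaluation matrix (row-normalized)
    scores = B @ np.array(GRADE_SCORES)    # composite fixed-weight score per index
    weights = scores / scores.sum()        # normalized secondary-index weights
    return B, weights
```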
Table 2 shows the scoring results obtained from the expert system for the influence of the tag positioning performance parameters.

| Performance parameter                                 | Expert 1 |
| Positioning calculation speed average                 | 0.8      |
| Mean value of y-direction error                       | 0.3      |
| Mean value of x-direction error                       | 0.7      |
| Mean value of z-direction error                       | 0.5      |
| Standard deviation of y-direction error               | 0.4      |
| Standard deviation of x-direction error               | 0.6      |
| Standard deviation of z-direction error               | 0.3      |
| Maximum error value in y direction                    | 0.1      |
| Maximum error value in x direction                    | 0.4      |
| Maximum error value in z direction                    | 0.2      |
| Minimum error value in y direction                    | 0.1      |
| Minimum error value in x direction                    | 0.4      |
| Minimum error value in z direction                    | 0.2      |
| Maximum value of absolute value of y-direction error  | 0.1      |
| Maximum value of absolute value of x-direction error  | 0.3      |
| Maximum value of absolute value of z-direction error  | 0.2      |
| Minimum value of absolute value of y-direction error  | 0.1      |
| Minimum value of absolute value of x-direction error  | 0.3      |
| Minimum value of absolute value of z-direction error  | 0.2      |

Table 2
It should be noted that the number of experts w in the expert system may be plural; Expert 1 in Table 2 is only an illustrative example, and w may take other values in an embodiment, for example w = 5, w = 10, or w = 20, which the present application does not limit.
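Continuing the gray clustering sketch given under step S1400, a hypothetical sample matrix D for w = 3 experts and the first four performance parameters of Table 2 (scores invented purely for illustration) would be processed as follows:

```python
import numpy as np

# Rows: performance parameters; columns: experts (hypothetical scores)
D = np.array([
    [0.8, 0.9, 0.7],   # positioning calculation speed average
    [0.3, 0.4, 0.3],   # mean value of y-direction error
    [0.7, 0.6, 0.7],   # mean value of x-direction error
    [0.5, 0.5, 0.4],   # mean value of z-direction error
])
B, weights = gray_evaluation(D)   # gray_evaluation as sketched after step S1400
print(weights)                    # normalized influence weights for these indices
```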
Table 3 shows the analysis results of the gray cluster analysis model based on the expert system score results of w=10.
Table 3
As shown in Table 3, the positioning calculation speed average plays an important role in dynamic positioning, since it strongly affects the timeliness of motion planning based on tag positioning. It should be noted that the secondary-index influence weights are normalized weights.
Step S1500, designing and training a BP neural network model, and inputting the evaluation values into the trained BP neural network model to output the dynamic positioning performance evaluation scheme.
Fig. 5 is a schematic diagram of a BP neural network model according to an exemplary embodiment of the present application. As shown in fig. 5, a BP neural network model 150 may be designed, which includes an input layer 151, a hidden layer 152, and an output layer 153. The BP neural network model 150 is designed to improve the objectivity in the comprehensive evaluation of error performance.
In some embodiments, the input layer 151 of the BP neural network model 150 is designed to include 19 neurons (I1 to I19) according to the positioning scene and the error model, and the input parameters thereof are, for example, performance parameters (including a positioning calculation speed mean value, a positioning error mean value in three directions of y, x and z, a positioning error standard deviation in three directions of y, x and z, a positioning maximum error value in three directions of y, x and z, a positioning minimum error value in three directions of y, x and z, a positioning error absolute value maximum value in three directions of y, x and z, and a positioning error absolute value minimum value in three directions of y, x and z). The number of layers of the hidden layer 152 is 1, which is set to include 12 neurons (H1 to H12), and the output layer 153 includes 1 neuron for outputting the final evaluation value.
For example, the tag locating performance result samples and their evaluation quantized values may be input into the BP neural network model 150 for training, as follows:
setting an initial weight omega and a critical value theta of the BP neural network. The initial weight omega and the critical value theta are set to be smaller values, and a group of random numbers between-0.5 and 0.5 can be generated through a random generator program and used as the initial weight omega and the critical value theta;
Inputting preset parameters and calculating corresponding evaluation values to obtain an intermediate output value O 1j Which satisfies O 1j =ΣW ij *y i -q j . Wherein y is i Is the ith, W, of the 19 performance parameters ij Q is the weight between the ith neuron of the input layer 151 and the jth neuron of the hidden layer 152 j Is a bias term. Illustratively, a random generator program may be used to generate a set of random numbers between-0.001 and 0.001 as the bias term q j
The hidden layer 152 is processed by the excitation function sigmod as the output value H of the hidden layer 152 j . Illustratively, the excitation function sigmod is f (y) =1/(1+e) -y ),H j Satisfy H j =f(O 1j )=1/(1+e -O1j )。
The output value H of the hidden layer 152 j The weighted sum is input to the output layer 153 again, and the actual output value O is obtained after processing by the excitation function sigmod k To complete forward propagation, the actual output value O k Namely, calculate the evaluation value;
The weights and thresholds of the input layer 151, hidden layer 152, output layer 153 are adjusted using a gradient descent method to accomplish back propagation. Illustratively, the counter-propagating value of the error uses the output value O k And the actual evaluation value T of the sample k Error sum of squares E p Which satisfies 2E p =Σ(T k -O k ) 2 . And if the error value is larger than the preset expected value, performing a back propagation process of the error. The weight and threshold of each layer are adjusted by gradient descent method. Forward propagation and the reverse propagation are reciprocally performed until the error is less than a preset desired value, for example, 0.001, to complete training of the BP neural network model 150.
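The training procedure above can be condensed into a short NumPy sketch of the 19-12-1 network with sigmoid activations and plain batch gradient descent; the learning rate, data shapes, and stopping constant are assumptions for illustration, not values specified by the application.

```python
import numpy as np

rng = np.random.default_rng(0)
# 19-12-1 network: weights in [-0.5, 0.5], small random bias terms
W1 = rng.uniform(-0.5, 0.5, (19, 12)); q1 = rng.uniform(-0.001, 0.001, 12)
W2 = rng.uniform(-0.5, 0.5, (12, 1));  q2 = rng.uniform(-0.001, 0.001, 1)

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def train(X, T, lr=0.1, target=1e-3, max_epochs=100_000):
    """X: (n, 19) performance parameters; T: (n, 1) expert evaluation values."""
    global W1, q1, W2, q2
    for _ in range(max_epochs):
        H = sigmoid(X @ W1 - q1)            # hidden output H_j = f(Σ W_ij·y_i − q_j)
        Ok = sigmoid(H @ W2 - q2)           # actual output O_k (forward propagation)
        Ep = 0.5 * np.sum((T - Ok) ** 2)    # E_p = ½·Σ(T_k − O_k)²
        if Ep < target:                     # stop once below the expected value
            break
        dOk = (Ok - T) * Ok * (1 - Ok)      # back propagation through the output
        dH = (dOk @ W2.T) * H * (1 - H)     # ... and through the hidden layer
        W2 -= lr * (H.T @ dOk); q2 += lr * dOk.sum(axis=0)
        W1 -= lr * (X.T @ dH);  q1 += lr * dH.sum(axis=0)
    return Ep
```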
By processing the input values and evaluation output values of the tag positioning performance evaluation samples through the trained BP neural network model 150, an optimal dynamic performance evaluation scheme can be obtained. This addresses the difficulty that, because the various positioning performance indices (including positioning calculation speed, positioning error mean, positioning error fluctuation, and the like) constrain and influence one another in different scenes, the shooting specification of the camera 121, the size of the tag 131, and the like are hard to select.
Still another aspect of the present application provides a dynamic positioning performance evaluation apparatus 100. With continued reference to fig. 2 and 3, the dynamic positioning performance evaluation apparatus 100 includes a driving mechanism 110, a vision module 120, a calibration module 130, a first algorithm module 141, a second algorithm module 142, and a trained BP neural network model 150. Illustratively, the driving mechanism 110 may control the movement of the vision module 120 and may feed back its own working information, from which the position information data P1 of the vision module 120 corresponding to the first timestamp t1 is calculated. The vision module 120 is configured to acquire images of the calibration module 130 under preset conditions to obtain the positioning information data P2 corresponding to the positioning timestamp t2. The calibration module 130 may be fixed, so that the vision module 120 moves relative to the calibration module 130 when driven by the driving mechanism 110.
In some embodiments, the first algorithm module 141 is built based on a dynamic displacement compensation algorithm, which can calculate a plurality of positioning errors from a comparison of the first timestamp t1 with the positioning timestamp t2.
In some embodiments, the second algorithm module 142 analyzes the weights of the preset parameters based on a dynamic positioning performance evaluation algorithm to obtain corresponding evaluation values. Illustratively, the preset parameters may include the various alignment errors described above, and may also include the position calculation speed average of the calibration module 130. It should be noted that the dynamic positioning performance evaluation algorithm is built based on an expert system and a gray clustering evaluation model, the expert system can be utilized to score the weight of each performance index, the gray clustering evaluation model is adopted to analyze based on the scoring condition, the multi-level index weight is obtained, and the weight evaluation values of various performance indexes of preset parameters can be effectively obtained in different scenes.
In some embodiments, the trained BP neural network model 150 may be used to process the weight evaluation values of the various performance indexes obtained from the second algorithm module 142 and ultimately output a dynamic positioning performance evaluation scheme, which is the optimal selection scheme.
In some embodiments, the vision module 120 includes a camera 121, the drive mechanism 110 includes a motor 111 and a belt 112, the motor 111 drives the belt 112 in motion, the camera 121 is positioned on the belt 112 and moves with the belt 112, the calibration module 130 includes a tag 131, and the positioning error includes a tag positioning error. It will be appreciated that the camera 121 moves relative to the tag 131 during its movement together with the belt 112.
In some embodiments, the preset conditions include that the number of tags 131 is plural, each tag having a different size, and that the shooting specification of the camera 121, under which the camera 121 captures the image of the tag 131, is configured to different resolutions and/or different frame rates. Illustratively, the resolution of the camera 121 at the time of shooting includes at least one of 1920×1080, 1280×720, and 640×480, and the frame rate includes at least one of 15 fps, 30 fps, and 60 fps. During operation of the dynamic positioning performance evaluation apparatus 100, the camera 121 may capture at least one of the first tag 1311, the second tag 1312, and the third tag 1313 over a plurality of passes, where one reciprocation of the camera 121 from the initial position back to the initial position is denoted as one pass.
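For instance, the preset conditions could be enumerated as a test matrix along the following lines; the tag edge lengths and the number of passes are assumptions, while the resolutions and frame rates come from the specification above:

```python
from itertools import product

# Illustrative test matrix; tag sizes in mm are assumptions, since the
# text does not give physical dimensions for tags 1311/1312/1313.
resolutions = [(1920, 1080), (1280, 720), (640, 480)]
frame_rates = [15, 30, 60]  # fps
tag_sizes_mm = {"tag_1311": 100, "tag_1312": 50, "tag_1313": 25}

test_plan = [
    {"resolution": res, "fps": fps, "tag": tag, "size_mm": size, "passes": 3}
    for (res, fps, (tag, size)) in product(resolutions, frame_rates,
                                           tag_sizes_mm.items())
]
print(len(test_plan))  # 3 x 3 x 3 = 27 calibration runs, 3 passes each
```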
In some embodiments, the dynamic positioning performance evaluation apparatus 100 further includes a host computer 140, and the host computer 140 includes a first algorithm module 141, a second algorithm module 142, and a third algorithm module 143. The third algorithm module 143 is built based on the BP neural network model and can be used for positioning performance evaluation in dynamic scenarios.
Fig. 6 is a dynamic performance evaluation flowchart of a dynamic positioning performance evaluation apparatus according to an exemplary embodiment of the present application. The dynamic performance evaluation flow of the dynamic positioning performance evaluation apparatus 100 is described below with reference to fig. 3 and 6.
The dynamic positioning performance evaluation apparatus 100 may include a control layer 210, an algorithm layer 220, and an analysis layer 230. For example, calibration control software in a calibration device (not shown) may be started to establish connections between the calibration control software and the motor 111 and the camera 121. The calibration control software reads the configuration file, obtains the calibration configuration data, and issues a movement instruction to the motor 111. The motor 111 drives the camera 121 to move along the y direction through the belt 112, and the calibration control software acquires, in real time, data such as the position, movement speed, and movement acceleration of the camera 121 returned by the motor 111.
Illustratively, the calibration control software controls the camera 121 to capture images at regular intervals and processes each target in the images in real time, for example calculating the positioning information data of the tag 131 and counting the time taken by the positioning calculation. The position information data of the camera 121 corresponding to the currently processed image frame can be obtained through the dynamic displacement compensation algorithm; the theoretical and actually measured distances between the camera 121 and the tag 131 along the y, x, and z directions are then calculated from the positioning information data of the tag 131 and the corresponding position information data of the camera 121, and the positioning error of the positioning information data of the tag 131 relative to the measured values is counted.
Illustratively, the positioning errors may include a first positioning error (positioning error in the y-direction), a second positioning error (positioning error in the x-direction), and a third positioning error (positioning error in the z-direction). Further, the first positioning error comprises a first error mean value, a first error standard deviation, a first maximum error value, a first minimum error value, a first error absolute value maximum value and a first error absolute value minimum value; the second positioning error comprises a second error mean value, a second error standard deviation, a second maximum error value, a second minimum error value, a second error absolute value maximum value and a second error absolute value minimum value; and the third positioning error comprises a third error mean value, a third error standard deviation, a third maximum error value, a third minimum error value, a third error absolute value maximum value and a third error absolute value minimum value.
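A small helper along the following lines could compute the six statistics listed above for each axis; the function name, dictionary keys, and units are illustrative assumptions:

```python
import numpy as np

def error_statistics(errors: np.ndarray, prefix: str) -> dict:
    """Compute the six statistics listed above for one error series
    (per-frame positioning errors along a single axis, in mm)."""
    a = np.abs(errors)
    return {
        f"{prefix}_error_mean": errors.mean(),
        f"{prefix}_error_std": errors.std(ddof=1),
        f"{prefix}_max_error": errors.max(),
        f"{prefix}_min_error": errors.min(),
        f"{prefix}_abs_error_max": a.max(),
        f"{prefix}_abs_error_min": a.min(),
    }

# e.g. y-direction errors collected per image frame during one pass
errors_y = np.array([0.12, -0.05, 0.30, -0.22, 0.08])  # made-up values
first_stats = error_statistics(errors_y, "first")
```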
When the camera 121 reaches the specified position, the calibration control software controls the motor 111 to stop moving and, at the same time, controls the camera 121 to stop capturing images. The calibration control software may then control the camera 121 to move back to the initial position. If, during this return movement, the calibration device determines from the information returned by the motor 111 that the camera 121 has not returned to the initial position, the calibration control software sends a control instruction so that the motor 111 drives the camera 121 back to the initial position, completing one round of the calibration operation. It should be noted that, according to the calibration configuration data, the calibration control software may control the motor 111 to drive the camera 121 through one or more rounds of the calibration operation for each shooting specification (including resolution and frame rate) of the camera 121.
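Sketched against a hypothetical motor/camera driver API (none of these method names come from the patent), one calibration round might run as follows:

```python
def run_calibration_round(motor, camera, target_pos_mm, home_pos_mm, tol_mm=0.5):
    """One calibration round as described above; `motor` and `camera` are
    hypothetical driver objects, not an API defined by the patent."""
    camera.start_capture()                   # timed image capture begins
    motor.move_to(target_pos_mm)             # drive the camera along the belt
    motor.wait_until_reached(target_pos_mm)  # camera reaches specified position
    motor.stop()                             # stop moving
    camera.stop_capture()                    # stop shooting images
    motor.move_to(home_pos_mm)               # head back toward the initial position
    motor.wait_until_stopped()
    if abs(motor.position() - home_pos_mm) > tol_mm:
        motor.move_to(home_pos_mm)           # reissue the control instruction
        motor.wait_until_stopped()
```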
For example, after calibration is completed, the calibration control software may invoke the dynamic positioning performance evaluation algorithm to evaluate the positioning calculation speed and positioning error data of the camera 121 under different resolutions, frame rates, different sizes of the tag 131, and the like, and output an optimal scheme meeting the performance requirements.
Still another aspect of the present application provides a dynamic positioning performance evaluation system (not shown). The dynamic positioning performance evaluation system may include the dynamic positioning performance evaluation apparatus 100 and a mechanical arm (not shown) that completes a predetermined action in response to an input action instruction. With continued reference to figs. 2 and 3, an optimal combination of the resolution and frame rate of the camera 121 and the size of the tag 131 may be selected according to the dynamic positioning performance evaluation scheme output by the dynamic positioning performance evaluation apparatus 100, and the mechanical arm completes its predetermined action under the conditions of this optimal combination scheme.
The above description is only illustrative of the embodiments of the application and of the technical principles applied. It will be appreciated by those skilled in the art that the scope of the application is not limited to the specific combinations of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the technical concept, for example solutions in which the above features are replaced with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (14)

1. A dynamic positioning performance evaluation method, characterized in that the method comprises:
setting a driving mechanism and a visual module, wherein the driving mechanism is configured to control the movement of the visual module, so that position information data of the visual module corresponding to a first time stamp is obtained through calculation from feedback information of the driving mechanism;
setting a calibration module, and acquiring an image of the calibration module under a preset condition through the vision module to obtain positioning information data corresponding to the image and a positioning time stamp, wherein the vision module and the calibration module move relatively;
comparing the first time stamp with the positioning time stamp by adopting a dynamic displacement compensation algorithm to calculate and obtain various positioning errors;
analyzing the weight of preset parameters to obtain corresponding evaluation values by adopting a dynamic positioning performance evaluation algorithm, wherein the preset parameters comprise a plurality of positioning errors, the dynamic positioning performance evaluation algorithm is established based on an expert system and a gray clustering evaluation model,
wherein the establishing comprises: dividing the influence degree of the performance indexes of the preset parameters on the dynamic positioning performance evaluation scheme into a plurality of grades, setting a corresponding score for each grade, determining a sample matrix of the performance index scores of the preset parameters from the experts in the expert system, determining gray classes and constructing probability functions, and calculating a gray evaluation matrix and outputting gray cluster analysis results; and
designing and training a BP neural network model, and inputting the evaluation value into the trained BP neural network model to output a dynamic positioning performance evaluation scheme.
2. The method of claim 1, wherein the vision module comprises a camera;
the driving mechanism comprises a motor and a driving belt, the motor drives the driving belt to move, and the camera is positioned on the driving belt and moves together with the driving belt; and
the calibration module comprises a label, and the positioning error comprises a label positioning error, wherein the label and the camera move relatively in the process of the joint movement of the camera and the driving belt.
3. The method of claim 2, wherein the step of setting the preset condition comprises:
providing a plurality of said labels having different sizes; and
configuring the shooting specifications of the camera to different resolutions and/or different frame rates to acquire images of the labels of different sizes under the shooting specifications.
4. The method of claim 2, wherein the plurality of positioning errors comprises:
a first positioning error in a first direction, a second positioning error in a second direction, and a third positioning error in a third direction, wherein the first direction, the second direction, and the third direction are perpendicular to each other; and
the step of obtaining the first positioning error comprises:
obtaining the movement speed, the movement acceleration and the first timestamp of the camera at a first position through the feedback information of the driving mechanism, and calculating to obtain a first position parameter of the camera;
inputting the first position parameter, the first timestamp, the movement speed and the movement acceleration;
inputting the positioning time stamp, wherein the first position is the position with the minimum time deviation from the label corresponding to the positioning time stamp; and
calculating the difference between the positioning time stamp and the first time stamp, and calculating therefrom the second position parameter of the camera corresponding to the positioning time stamp,
wherein the first positioning error is obtained based on the difference between the second position parameter and the positioning information data corresponding to the positioning time stamp.
5. The method of claim 4, wherein the step of obtaining the second positioning error and the third positioning error comprises:
measuring to obtain a first positioning measurement value and a second positioning measurement value of the camera at the first position;
obtaining a first positioning calculation value and a second positioning calculation value based on a self-recognition visual marker positioning algorithm; and
comparing the first positioning calculation value with the first positioning measurement value to obtain the second positioning error, and comparing the second positioning calculation value with the second positioning measurement value to obtain the third positioning error,
the first positioning measurement value is a measurement value along the second direction, the second positioning measurement value is a measurement value along the third direction, the first positioning calculation value is a calculation value along the second direction, and the second positioning calculation value is a calculation value along the third direction.
6. The method of claim 2, wherein the preset parameters further comprise a positioning calculation speed average value, and the step of analyzing the weights of the preset parameters to obtain corresponding evaluation values comprises:
analyzing the weight of the positioning calculation speed average value by adopting the dynamic positioning performance evaluation algorithm so as to obtain a corresponding evaluation value of the positioning calculation speed average value; and
analyzing the weights of the plurality of positioning errors by adopting the dynamic positioning performance evaluation algorithm so as to obtain a corresponding evaluation value for each positioning error.
7. The method of claim 4, wherein the first positioning error comprises a first error mean, a first error standard deviation, a first maximum error value, a first minimum error value, a first error absolute value maximum, a first error absolute value minimum;
The second positioning error comprises a second error mean value, a second error standard deviation, a second maximum error value, a second minimum error value, a second error absolute value maximum value and a second error absolute value minimum value; and
the third positioning error comprises a third error mean value, a third error standard deviation, a third maximum error value, a third minimum error value, a third error absolute value maximum value and a third error absolute value minimum value.
8. The method of claim 1 or 6, wherein the BP neural network model comprises an input layer, a hidden layer, and an output layer, and wherein the step of designing and training the BP neural network model comprises:
setting initial weight parameters and thresholds of the BP neural network model, wherein the initial weight parameters and thresholds are random values;
inputting the preset parameters and the corresponding evaluation values thereof to calculate to obtain an intermediate output value;
the intermediate output value is processed through an excitation function to obtain an output value of the hidden layer;
the output value of the hidden layer is input to the output layer after being calculated and processed, and the calculated evaluation value is output after being processed by the excitation function so as to complete forward propagation;
Adopting a gradient descent method to adjust weights and thresholds of the input layer, the hidden layer and the output layer so as to finish back propagation; and
and reciprocally performing the forward propagation and the backward propagation until the error is less than a preset expected value.
9. The method of claim 3, wherein the camera captures images of the label over more than one round under the preset condition,
wherein one back-and-forth movement of the camera along the direction of the driving belt from the initial position back to the initial position is denoted as one round.
10. A dynamic positioning performance evaluation apparatus, characterized in that the apparatus comprises:
a calibration module;
the visual module is used for acquiring the image of the calibration module under the preset condition to obtain positioning information data corresponding to the image and the positioning time stamp, wherein the visual module and the calibration module move relatively;
the driving mechanism is configured to control the movement of the vision module so as to calculate position information data of the vision module corresponding to the first time stamp through feedback information of the driving mechanism;
the first algorithm module is used for comparing the first timestamp with the positioning timestamp based on a dynamic displacement compensation algorithm so as to calculate and obtain various positioning errors;
a second algorithm module for analyzing the weights of preset parameters based on a dynamic positioning performance evaluation algorithm to obtain corresponding evaluation values, wherein the preset parameters comprise a plurality of positioning errors, and the dynamic positioning performance evaluation algorithm is established based on an expert system and a gray clustering evaluation model,
wherein the establishing comprises: dividing the influence degree of the performance indexes of the preset parameters on the dynamic positioning performance evaluation scheme into a plurality of grades, setting a corresponding score for each grade, determining a sample matrix of the performance index scores of the preset parameters from the experts in the expert system, determining gray classes and constructing probability functions, and calculating a gray evaluation matrix and outputting gray cluster analysis results; and
the trained BP neural network model, used for processing the evaluation value and outputting a dynamic positioning performance evaluation scheme.
11. The apparatus of claim 10, wherein the vision module comprises a camera;
the driving mechanism comprises a motor and a driving belt, the motor drives the driving belt to move, and the camera is positioned on the driving belt and moves together with the driving belt; and
the calibration module comprises a label, and the positioning error comprises a label positioning error, wherein the label and the camera move relatively in the process of the joint movement of the camera and the driving belt.
12. The apparatus of claim 11, wherein the preset condition comprises:
the number of the labels is plural, and the labels have different sizes; and
the shooting specifications of the camera are configured to different resolutions and/or different frame rates, wherein the camera acquires images of the labels under the shooting specifications.
13. The apparatus of claim 10, wherein the preset parameters further comprise a positioning calculation speed average value.
14. A dynamic positioning performance evaluation system, the system comprising:
the dynamic positioning performance evaluation apparatus according to any one of claims 10 to 13; and
a mechanical arm for completing a preset action, wherein the preset action is completed under the conditions of the dynamic positioning performance evaluation scheme output by the dynamic positioning performance evaluation apparatus.
CN202310863675.8A 2023-07-14 2023-07-14 Dynamic positioning performance evaluation method, device and system Active CN116625409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310863675.8A CN116625409B (en) 2023-07-14 2023-07-14 Dynamic positioning performance evaluation method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310863675.8A CN116625409B (en) 2023-07-14 2023-07-14 Dynamic positioning performance evaluation method, device and system

Publications (2)

Publication Number Publication Date
CN116625409A CN116625409A (en) 2023-08-22
CN116625409B true CN116625409B (en) 2023-10-20

Family

ID=87592409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310863675.8A Active CN116625409B (en) 2023-07-14 2023-07-14 Dynamic positioning performance evaluation method, device and system

Country Status (1)

Country Link
CN (1) CN116625409B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105091911A (en) * 2015-09-07 2015-11-25 中国人民解放军信息工程大学 Detection system and method for dynamic positioning precision of POS (point of sale) system
JP2017020922A (en) * 2015-07-13 2017-01-26 株式会社日立システムズ Work action support navigation system and method, and computer program for work action support navigation, memory medium storing program for work action support navigation, self-propelled robot loaded with work action support navigation system, and intelligent helmet used with work action support navigation system
CN106960456A (en) * 2017-03-28 2017-07-18 长沙全度影像科技有限公司 A kind of method that fisheye camera calibration algorithm is evaluated
CN110136125A (en) * 2019-05-17 2019-08-16 北京深醒科技有限公司 One kind replicating mobile counterfeiting detection method based on the matched image of level characteristics point
CN113091768A (en) * 2021-03-12 2021-07-09 南京理工大学 MIMU integral dynamic intelligent calibration compensation method
CN114777771A (en) * 2022-04-13 2022-07-22 西安电子科技大学 Outdoor unmanned vehicle combined navigation positioning method
CN115272923A (en) * 2022-07-22 2022-11-01 华中科技大学同济医学院附属协和医院 Intelligent identification method and system based on big data platform
CN116311594A (en) * 2023-05-11 2023-06-23 中国人民解放军海军工程大学 Ship subsystem state analysis method, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7383371B2 (en) * 2018-02-28 2023-11-20 キヤノン株式会社 Image processing device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on dynamic positioning accuracy testing of the BeiDou satellite navigation system; Weng Wubin; Computer Products and Circulation, No. 08; full text *
Satellite clock bias prediction based on the MEA-BP neural network; Lyu Dong et al.; Acta Geodaetica et Cartographica Sinica, No. 08; full text *

Also Published As

Publication number Publication date
CN116625409A (en) 2023-08-22

Similar Documents

Publication Publication Date Title
US10867167B2 (en) Collaborative deep network model method for pedestrian detection
US11557029B2 (en) Method for detecting and recognizing surface defects of automated fiber placement composite based on image converted from point cloud
CN106226050B (en) A kind of TFDS fault picture automatic identifying method based on convolutional neural networks
CN112836640B (en) Single-camera multi-target pedestrian tracking method
CN110070074A (en) A method of building pedestrian detection model
CN113129341A (en) Landing tracking control method and system based on light-weight twin network and unmanned aerial vehicle
CN108748149B (en) Non-calibration mechanical arm grabbing method based on deep learning in complex environment
CN103324937A (en) Method and device for labeling targets
CN108090896B (en) Wood board flatness detection and machine learning method and device and electronic equipment
CN114281093B (en) Defect detection system and method based on unmanned aerial vehicle power inspection
CN109389156B (en) Training method and device of image positioning model and image positioning method
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN111582395A (en) Product quality classification system based on convolutional neural network
CN116625409B (en) Dynamic positioning performance evaluation method, device and system
CN108280516B (en) Optimization method for mutual-pulsation intelligent evolution among multiple groups of convolutional neural networks
CN107644203A (en) A kind of feature point detecting method of form adaptive classification
CN116958713B (en) Quick recognition and statistics method and system for surface fastener of aviation part
CN113888494B (en) Artificial intelligence interface pin quality detection method of automobile domain controller
CN116027743A (en) Intelligent monitoring method, device and system for production line
CN114119479A (en) Industrial production line quality monitoring method based on image recognition
CN114818945A (en) Small sample image classification method and device integrating category adaptive metric learning
CN109598207B (en) Fast human eye tracking method based on convolutional neural network
CN112200856A (en) Visual ranging method based on event camera
CN113324995B (en) Intelligent detection management system for quality supervision, acceptance inspection and acceptance of constructional engineering
CN117456612B (en) Cloud computing-based body posture automatic assessment method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 301, 3rd Floor, Commercial and Garage, No. 17 Zhichun Road, Haidian District, Beijing, 100080

Patentee after: Xiangke Intelligent Technology (Beijing) Co.,Ltd.

Country or region after: China

Address before: A09, 9th Floor, Jinqiu International Building, No. 6 Zhichun Road, Haidian District, Beijing, 100080

Patentee before: Xiangke Intelligent Technology (Beijing) Co.,Ltd.

Country or region before: China