CN110225335B - Camera stability evaluation method and device - Google Patents

Camera stability evaluation method and device

Info

Publication number
CN110225335B
CN110225335B (application CN201910535068.2A)
Authority
CN
China
Prior art keywords
variance
camera
edge
value
stability
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910535068.2A
Other languages
Chinese (zh)
Other versions
CN110225335A (en
Inventor
陈亮 (Chen Liang)
牛海军 (Niu Haijun)
王潇 (Wang Xiao)
王丽娟 (Wang Lijuan)
Current Assignee
China University of Petroleum Beijing
Original Assignee
China University of Petroleum Beijing
Priority date
Filing date
Publication date
Application filed by China University of Petroleum Beijing filed Critical China University of Petroleum Beijing
Priority to CN201910535068.2A priority Critical patent/CN110225335B/en
Publication of CN110225335A publication Critical patent/CN110225335A/en
Application granted granted Critical
Publication of CN110225335B publication Critical patent/CN110225335B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence


Abstract

The invention provides a camera stability evaluation method and device, wherein the method comprises the following steps: acquiring continuous multi-frame edge contour images, each of which includes an edge contour of a target object shot by a camera; determining the variance of the vertical coordinates of the same edge point across the multi-frame edge contour images; and evaluating the stability of the camera according to the variance. The invention can improve the accuracy of camera stability evaluation.

Description

Camera stability evaluation method and device
Technical Field
The invention relates to the technical field of high-precision measurement, in particular to a camera stability evaluation method and device.
Background
In production and scientific research in industrial sectors such as aerospace and machinery, machine vision methods are increasingly used for high-precision measurement tasks, which generally place high demands on camera imaging stability and on the stability of measurement results. Conventional camera sensors fall into two classes: CCD (Charge-Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor). A CMOS image sensor has a high integration level; its elements and circuits are packed very close together, so mutual interference is serious and imaging noise is high. A CCD image sensor has high sensitivity and low noise, but is expensive. Before a specific measurement task is started, a camera model must be selected; choosing a camera with good imaging quality, stability, reliability and excellent performance greatly eases the subsequent measurement work. Selecting a camera model typically requires evaluating the stability of cameras of that model.
At present, one existing method evaluates camera stability from changes in the pixel values of a captured image: a pixel is selected in two consecutively captured frames, giving two pixel values, and the camera is judged stable if the difference between them is below a threshold. The method is simple, but the stability of a camera cannot be measured accurately from only two adjacent frames. Another conventional method relies on state changes of feature points in the captured images: N frames are captured continuously, each frame is divided into several regions, description vectors of the image are built from the corner points detected in each region, the number of regions in which the presence state of corner points changes across the N frames is counted from the description vectors, and whether the N frames (and hence the camera) are stable is decided from that count. This method depends heavily on the accuracy of corner detection, and existing corner detection algorithms are not very robust, so false detections and missed detections are common and strongly affect the final stability evaluation result. How to measure camera stability accurately and effectively is therefore an urgent problem in the field of high-precision measurement.
Disclosure of Invention
The invention provides a camera stability evaluation method and device, which can improve the accuracy of camera stability evaluation.
In a first aspect, an embodiment of the present invention provides a camera stability evaluation method, where the method includes:
acquiring continuous multi-frame edge contour images, where the multi-frame edge contour images each comprise an edge contour of a target object shot by a camera; determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour images; and evaluating the stability of the camera according to the variance.
In a second aspect, an embodiment of the present invention further provides a camera stability evaluation apparatus, where the apparatus includes: the acquisition module is used for acquiring continuous multi-frame edge contour images; the multi-frame edge contour images all comprise edge contours of target objects shot by a camera; the calculation module is used for determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour image; and the evaluation module is used for evaluating the stability of the camera according to the variance.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements the camera stability evaluation method when executing the computer program.
In a fourth aspect, the present invention also provides a computer-readable medium having a non-volatile program code executable by a processor, where the program code causes the processor to execute the above camera stability evaluation method.
The embodiment of the invention has the following beneficial effects. The method first acquires continuous multi-frame edge contour images, which increases the number of images used to evaluate camera stability and improves the accuracy of the evaluation; it then determines the variance of the vertical coordinate of the same edge point across the frames, where the variance describes the fluctuation of the vertical coordinate and thus reflects the stability of the camera; finally, it generates the stability evaluation result of the camera from the variance. The accuracy of camera stability evaluation can therefore be improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a camera stability assessment method according to an embodiment of the present invention;
fig. 2 is a grayscale image of a shot target and an edge contour image of the shot target extracted after edge detection is performed by using a Canny operator in the camera stability evaluation method according to the embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a logical arrangement of target regions in a camera stability evaluation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a selected ROI in a camera stability assessment method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an ROI that should not be selected in the camera stability assessment method according to the embodiment of the present invention;
FIG. 6 is a broken-line diagram illustrating the evaluation result of the camera with better stability according to the embodiment of the present invention;
FIG. 7 is a broken-line diagram illustrating the evaluation result of the camera with poor stability according to the embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a camera stability assessment apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another camera stability evaluation apparatus according to an embodiment of the present invention;
fig. 10 is a block diagram illustrating a structure of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a camera stability evaluation method and device, which can improve the accuracy and reliability of camera stability evaluation, can be used in a high-precision industrial measurement task, and provide technical support for selecting a camera with good imaging effect, stability, reliability and excellent performance.
To facilitate understanding of the present embodiment, a detailed description is first given of a camera stability evaluation method disclosed in the present embodiment.
An embodiment of the present invention provides a camera stability evaluation method, which is shown in a flowchart of a camera stability evaluation method shown in fig. 1, and includes the following steps:
step S102, acquiring continuous multi-frame edge contour images; the edge contour images of the multiple frames each comprise an edge contour of a target object shot by a camera.
The edge contour image may be obtained by processing a picture including the target object taken by the camera, and the edge contour image includes an edge contour of the target object taken by the camera. The target object refers to a photographic object selected for evaluating camera stability, and may be an elliptical object, a triangular object, a pentagonal object, or another shape.
It should be noted that the total number of the multi-frame edge contour images needs to be much larger than 1, and each edge contour image includes an edge contour of the target object, so as to improve the accuracy and reliability of the camera stability evaluation.
And step S104, determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour image.
The edge points are a set of coordinate points selected on the edge contour in the edge contour image. A group of edge points can be determined for each frame of the edge contour image; the abscissas of corresponding edge points in different frames are the same, while the ordinates may differ, so the variance of all ordinates corresponding to each abscissa is calculated. Edge points having the same abscissa are treated as the same edge point.
Step S106, evaluating the stability of the camera according to the variance.
The variance describes the fluctuation of the images captured by the camera and thus reflects the stability of the camera, so stability can be evaluated by judging the size of the variance. The embodiment of the invention uses the statistical meaning of the variance, which explains the problem intuitively and effectively and provides a more accurate evaluation result. The variance can be further analyzed and calculated, and the calculated result compared with a preset threshold, or otherwise analyzed, to obtain the stability evaluation result of the camera.
According to the camera stability evaluation method provided by the embodiment of the invention, firstly, the number of images for evaluating the stability of the camera is increased by acquiring continuous multi-frame edge profile images, and the accuracy of evaluating the stability of the camera is improved; secondly, determining the variance of the vertical coordinate of the same edge point in each frame of edge contour image, wherein the variance can describe the fluctuation of the vertical coordinate, then reflecting the stability of the camera, and finally generating the stability evaluation result of the camera according to the variance. Therefore, the accuracy of camera stability evaluation can be improved.
In order to determine edge points in an edge contour image, determine a variance according to obtained edge point coordinates, and provide a basis for evaluating camera stability, in an embodiment of the present invention, determining a variance of vertical coordinates of the same edge point in multiple frame edge contour images specifically includes the following steps:
(1) respectively selecting areas with the same pixel coordinates in the multi-frame edge profile image to obtain a plurality of target areas; the plurality of target regions each include a partial edge contour of the target object.
Preset parameters can be set to select regions with the same pixel coordinates in the edge contour images. The preset parameters include the upper-left corner pixel position, the horizontal pixel range and the vertical pixel range of the region to be selected; an ROI (Region of Interest) can be delimited in each edge contour image according to these parameters, and the delimited ROI is used as a target region. Because the same preset parameters are used for every frame and the multi-frame edge contour images have the same size, target regions with the same size and the same pixel coordinates can be selected from each frame.
It should be noted that the number of target regions is the same as the number of frames of the edge contour image, and the target regions correspond to the edge contour images one to one. Referring to fig. 4, which shows a schematic diagram of a selected ROI region, (a) in fig. 4 is a schematic diagram of a ROI region that can be selected when the target object has an elliptical shape, (b) in fig. 4 is a schematic diagram of a ROI region that can be selected when the target object has a triangular shape, and (c) in fig. 4 is a schematic diagram of a ROI region that can be selected when the target object has a pentagonal shape.
In addition, the preset parameters may be set manually: after obtaining the continuous multi-frame edge contour images, the position and size of the ROI are determined manually from the edge contour images according to a filtering rule, fixing the upper-left pixel position, the horizontal pixel range and the vertical pixel range of the region. For example, the filtering rule may require that the ROI contain no mutually interlaced or complicated contours, and that regions with contours too close together, as in the schematic of an ROI that should not be selected in fig. 5, be avoided. To ensure that the target region includes a partial edge contour of the target object, the selected ROI must be checked to include such a partial contour when the preset parameters are determined. The preset parameters may also be obtained by other methods, and the embodiment of the present invention is not particularly limited.
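The ROI-cropping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and parameter names (`select_target_regions`, `top_left`, `height`, `width`) are assumptions chosen to mirror the preset parameters described in the text.

```python
import numpy as np

def select_target_regions(edge_images, top_left, height, width):
    # Crop the same pixel region (the ROI) out of every frame, so that
    # all target regions share identical size and pixel coordinates.
    r, c = top_left
    return [img[r:r + height, c:c + width] for img in edge_images]

# Five dummy 100x100 edge contour images; the same ROI is cut from each.
frames = [np.zeros((100, 100), dtype=np.uint8) for _ in range(5)]
rois = select_target_regions(frames, top_left=(10, 20), height=30, width=40)
```

Because one set of preset parameters is applied to every frame, the number of target regions equals the number of frames and each region has the same shape.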
(2) Respectively generating a group of horizontal coordinates in each target area, and generating a corresponding group of vertical coordinates according to the horizontal coordinates to determine a group of edge points in each target area; wherein the abscissa of the corresponding edge point in different groups is the same.
A target area may be selected at random, and a set of abscissas containing many abscissa values may be randomly generated over the part of the target area where the edge contour lies; for example, the set may contain more than 100 values. Because the target area includes a partial edge contour of the target object, a set of ordinates on that contour corresponding to the set of abscissas can be obtained, and the points formed by each abscissa and its ordinate are taken as edge points. Since a set of abscissas contains many values, a group of edge points contains many edge points, their number equal to the number of abscissa values.
Denote the given set of abscissas by x_1, x_2, …, x_M, where M is the number of abscissa values. Referring to the target-region logical arrangement diagram shown in fig. 3, from these abscissas and the edge contour in the target region a corresponding set of ordinates is obtained, denoted y_i^(1) (i = 1, 2, …, M), where the superscript indicates that these are the ordinates in the first target region. Applying the same set of abscissas to the remaining target regions yields the corresponding ordinates there; in this way the ordinates in every target region are obtained and recorded as y_i^(j), where j = 1, 2, …, N and N is the number of target regions. Once the abscissas and ordinates of a target region are known, its group of edge points is obtained and recorded as (x_i, y_i^(j)), where i = 1, 2, …, M.
It should be noted that the abscissas in different target regions are the same, while the ordinates may differ because the edge contour may differ between regions. In the end a group of edge points is determined for each target region, each group containing multiple points; across the multiple target regions the groups share the same abscissas but may have different ordinates. It should further be noted that, alternatively, a set of ordinates may be given and a corresponding set of abscissas generated from them to determine the edge points in each target region; whether abscissas or ordinates are given is chosen according to actual requirements, and the embodiment of the present invention is not limited in this respect.
It should be noted that a set of abscissa may be randomly generated so as to adapt the embodiment of the present invention to target objects with more shapes, and a corresponding manner of generating the abscissa may be set for the shape of the target object, which is not specifically limited in the embodiment of the present invention.
It should be further noted that one target area may be randomly selected, or all the target areas may be sorted according to a certain logical order, for example, the target areas may be sorted according to a time order, see the target area logical arrangement diagram shown in fig. 3, and a first target area after logical sorting is selected, so as to randomly generate a set of abscissa for the area where the edge contour in the selected target area is located. How to select a target area to generate a set of abscissas can be selected according to actual requirements, and the embodiment of the invention is not particularly limited. In addition, a set of abscissa coordinates may be generated simultaneously in each of all the target regions according to the same standard.
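Reading off the ordinate for each shared abscissa can be sketched as below. This is an illustrative simplification, not the patent's code: it assumes the ROI was chosen (per the filtering rule above) so that each sampled column crosses the contour exactly once, and the name `sample_ordinates` is invented.

```python
import numpy as np

def sample_ordinates(region, xs):
    # For each shared abscissa x, return the ordinate (row index) of the
    # edge pixel found in column x of the binary edge region. Assumes a
    # single contour crossing per column.
    ys = []
    for x in xs:
        rows = np.flatnonzero(region[:, x])
        ys.append(int(rows[0]))
    return ys

# Synthetic target region: a horizontal edge contour at row 7.
region = np.zeros((20, 20), dtype=np.uint8)
region[7, :] = 1
xs = [2, 5, 11]            # one shared set of abscissas for all regions
ys = sample_ordinates(region, xs)
```

Running the same `xs` against every target region produces the N groups of ordinates y_i^(j) used in the variance step.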
(3) And calculating the variance of the ordinate of the edge point with the same abscissa.
The edge points comprise information of an abscissa and an ordinate, and because the abscissas in different target areas are the same, statistical analysis can be performed on the information of the ordinate in different target areas to obtain the variance. The variance may be used to describe the volatility of the ordinate, by which the stability of the camera may be further reflected.
Considering the purpose of reducing the complexity of calculation and improving the evaluation efficiency of the embodiment of the invention, the step of acquiring continuous multi-frame edge contour images comprises the following steps:
(1) acquiring continuous multiframe initial images shot by a camera.
In the embodiment of the invention, a camera fixed on a workbench continuously shoots, at fixed intervals, multiple frames containing a target object that is also fixed on the workbench; these frames are used as the initial images. For example, the camera may capture one frame every T seconds (T ≥ 0.2 s) for N frames (N an integer greater than 0 and much greater than 1), thereby obtaining the initial images. The camera may be an area-array grayscale CCD (or CMOS) camera.
It should be noted that, to keep the gray level of the collected images stable, the lens, the area-array grayscale CCD (or CMOS) camera, the parallel light source and the shooting target should be placed in a darkroom to avoid the influence of ambient light. A regulated power supply provides stable voltage and current to the parallel light source so that its output stays constant, and likewise to the area-array grayscale CCD (or CMOS) camera so that its power supply stays stable.
In addition, to avoid the influence of relative motion between the camera and the photographed target object on the evaluation result, both should be fixed on the workbench, which prevents mechanical shake from affecting the result to the greatest extent. During shooting, the power of the parallel light source is adjusted so that the maximum gray value of the image lies between 175 and 185 and the minimum between 0 and 10. It is assumed that there is no relative motion between the camera and the photographed target object.
(2) And respectively carrying out edge detection on the continuous initial images according to an edge detection algorithm to obtain continuous multi-frame edge profile images.
The edge detection algorithm may be the Canny edge detection algorithm. When the target object is elliptical, the grayscale image of one initial frame is shown in (a) of fig. 2 and the edge contour image obtained after edge detection in (b) of fig. 2. When the target object is triangular or pentagonal, the grayscale and edge contour images of one initial frame are shown in (c) through (f) of fig. 2 respectively, and are not described again here.
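As a self-contained sketch of this step, the snippet below uses a plain gradient-magnitude threshold as a stand-in for the Canny operator (in practice one would call an actual Canny implementation such as OpenCV's); the threshold value and image geometry are assumptions for illustration only.

```python
import numpy as np

def simple_edges(gray, threshold=50.0):
    # Gradient-magnitude edge map: a pure-NumPy stand-in for the Canny
    # operator named in the text. gray is a 2-D uint8 image; the result
    # is a binary edge mask.
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0   # central differences
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8)

# A bright elliptical disc on a dark background, gray levels roughly
# matching the 175-185 max / 0-10 min range described in the text.
yy, xx = np.mgrid[0:64, 0:64]
disc = ((xx - 32) / 20.0) ** 2 + ((yy - 32) / 12.0) ** 2 <= 1.0
gray = np.where(disc, 180, 5).astype(np.uint8)
edges = simple_edges(gray)
```

The resulting binary mask plays the role of one frame's edge contour image in the steps that follow.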
Considering that the variance can be used to describe the fluctuation situation of the camera, in order to further describe the stability of the camera in the direction of the vertical axis, the step of calculating the variance of the vertical coordinate of the edge point with the same horizontal coordinate comprises the following steps: calculating the average value of the vertical coordinates of each group of edge points corresponding to the horizontal coordinates; and calculating the variance corresponding to the abscissa according to the average value.
For example, if there are N frames of edge contour images, N target regions are obtained, and a set of abscissas is generated in each target region, giving N groups of M edge points each. The abscissas of corresponding edge points in different groups are the same, i.e. the M edge points in the N target regions share the abscissas x_1, x_2, …, x_M, while the corresponding ordinates may differ. For each of x_1, x_2, …, x_M, the average of its N corresponding ordinates across the N groups of edge points is computed; from the resulting M averages, the M variances corresponding to the abscissas are calculated, finally yielding {(x_1, S_1), (x_2, S_2), …, (x_M, S_M)}.
To capture the fluctuation of the images more rigorously and thereby reflect the stability of the camera, the step of calculating the average of the ordinates of each group of edge points corresponding to an abscissa specifically comprises the following:
calculating the average of the ordinates of the edge points having the same abscissa in the respective target regions according to the following formula:

ȳ_i = (1/N) · Σ_{j=1}^{N} y_i^(j)

where ȳ_i denotes the average of the ordinates of edge point i (the points sharing abscissa x_i) over the target regions, i = 1, 2, …, M, and M is the number of edge points in the group corresponding to one target region; y_i^(j) is the ordinate of edge point i in target region j, where j = 1, 2, …, N and N is the number of target regions.
For example, for the set of abscissas x_1, x_2, …, x_M on the horizontal axis X, the abscissa x_1 corresponds to the first edge point, and its N edge points in the N target regions have ordinates y_1^(1), y_1^(2), …, y_1^(N) on the vertical axis Y. Applying the formula above to these N ordinates gives their average ȳ_1; similarly ȳ_2, …, ȳ_M can be calculated. Thus the averages ȳ_1, ȳ_2, …, ȳ_M of the ordinates of the N groups of M edge points sharing the abscissas x_1, x_2, …, x_M are obtained.
After obtaining the average value of the ordinate, the step of calculating the variance corresponding to the abscissa according to the average value specifically includes the following steps:
calculating the variance of each ordinate of the edge points having the same abscissa in each target region according to the following formula;
Figure BDA0002100950740000089
wherein S isiRepresenting the variance of the respective ordinates of the edge points i,
Figure BDA00021009507400000810
an average value of respective ordinates of the edge points i having the same abscissa in the respective target regions is represented.
On obtaining the abscissa x1,x2,…,xMAverage value of respective vertical coordinates of N groups of M edge points
Figure BDA00021009507400000811
Thereafter, the abscissa x is calculated1Corresponding variance S1
Figure BDA00021009507400000812
And is noted as (x)1,S1). Similarly, calculate S2~SMFinally, { (x) can be obtained1,S1),(x2,S2)…(xM,SM)}。
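The mean and variance formulas above can be computed in one pass over an N-by-M array of ordinates. The snippet is a minimal sketch; the synthetic jittered data stands in for real measurements, and the variable names are illustrative.

```python
import numpy as np

# Ordinates y_i^(j): N groups (one per target region) of M edge points
# sharing the abscissas x_1..x_M. Small random jitter simulates camera
# fluctuation around a true contour at row 50.
N, M = 8, 5
rng = np.random.default_rng(0)
xs = np.arange(M)
ys = 50.0 + 0.1 * rng.standard_normal((N, M))   # shape (N, M)

y_bar = ys.mean(axis=0)                  # averages ȳ_i, one per abscissa
S = ((ys - y_bar) ** 2).mean(axis=0)     # S_i = (1/N) Σ_j (y_i^(j) - ȳ_i)²
pairs = list(zip(xs, S))                 # {(x_1,S_1), …, (x_M,S_M)}
```

Note that dividing by N (not N − 1) matches the population-variance form of the formula above, which is NumPy's default.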
Considering the variance as a parameter for measuring the stability of the camera, in order to evaluate the stability of the camera from multiple angles and obtain a more intuitive evaluation result of the stability of the camera, the step of generating the evaluation result of the stability of the camera according to the variance and a preset threshold comprises the following steps:
determining a maximum variance value, a minimum variance value and a variance average value according to the variance; and generating a stability evaluation result of the camera according to the maximum variance value, the minimum variance value, the variance average value and a preset threshold value.
Determining the maximum variance, the minimum variance and the mean variance describes the fluctuation of the camera and so yields a more accurate evaluation; these three values are analyzed and processed and, combined with a preset threshold, finally give a more intuitive camera stability evaluation result. The variance measures the fluctuation of a data sample, so the computed variances of the Y-axis coordinate values reflect the fluctuation of the images captured by the camera and hence the stability of the camera.
The step of determining the maximum variance value, the minimum variance value and the variance average value according to the variance comprises determining them by the following formulas:

S_max = Max{S_1, S_2, …, S_M}, S_min = Min{S_1, S_2, …, S_M},

S̄ = (1/M) Σ_{i=1}^{M} S_i

wherein S_max denotes the maximum variance value, S_min denotes the minimum variance value, S̄ denotes the variance average value, and M denotes the number of edge points in each target region.
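The three statistics can be computed directly from the variance list; a minimal sketch (names are illustrative, not from the patent):

```python
def variance_statistics(variances):
    """Compute S_max, S_min and the variance average S-bar from [S_1..S_M]."""
    s_max = max(variances)
    s_min = min(variances)
    s_mean = sum(variances) / len(variances)
    return s_max, s_min, s_mean

# Example with three per-abscissa variances.
s_max, s_min, s_mean = variance_statistics([0.5, 2.0, 1.1])
```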
Generating the stability evaluation result of the camera according to the variance and the preset threshold includes taking as the evaluation parameter the absolute value of the difference between the maximum variance value and the variance average value, or the absolute value of the difference between the minimum variance value and the variance average value:

|S_max − S̄| or |S_min − S̄|

The evaluation parameter may also be a parameter in another form obtained from the maximum variance value, the minimum variance value or the variance average value, and may be adjusted according to actual requirements.
Judging whether the evaluation parameter is larger than a preset threshold value, and if so, determining that the camera stability is unqualified; and if not, determining that the camera stability is qualified.
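The pass/fail judgment can be sketched as follows. This sketch uses the |S_max − S̄| variant of the evaluation parameter (the |S_min − S̄| variant from the text works the same way); the threshold value is a caller-supplied assumption:

```python
def evaluate_stability(variances, threshold):
    """Return True when camera stability is qualified, i.e. the
    evaluation parameter |S_max - S_mean| does not exceed the threshold."""
    s_mean = sum(variances) / len(variances)
    parameter = abs(max(variances) - s_mean)   # evaluation parameter
    return parameter <= threshold

# |2.0 - 1.2| = 0.8 <= 1.0, so this camera would be judged qualified.
qualified = evaluate_stability([0.5, 2.0, 1.1], threshold=1.0)
```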
The evaluation parameter describes the fluctuation of the vertical coordinate. If it is larger than the preset threshold, the camera fluctuates strongly and has a stability problem that would significantly affect measurement precision and measurement stability in a high-precision measurement task, so a camera with better performance should be selected. Otherwise, the camera's fluctuation is small, its stability is high, its influence on high-precision measurement results is small, and a camera of this model can be selected.
In addition, it should be noted that after the data {(x_1, S_1), (x_2, S_2), …, (x_M, S_M)} and the variance average value S̄ are obtained, a line-drawing tool can be used to plot a broken-line graph, from which any stability problem of the camera can be observed visually. FIG. 6 shows the broken-line graph of the evaluation result for a camera with better stability, and FIG. 7 for a camera with poorer stability; the vertical axis is the variance value and the horizontal axis is the abscissa of the edge point. The broken line in each graph is obtained from the data {(x_1, S_1), (x_2, S_2), …, (x_M, S_M)}, and the horizontal line is the variance line obtained from the variance average value S̄. Comparing FIG. 6 with FIG. 7, the variance values in FIG. 6 are significantly smaller than those in the second line graph, and the mean S̄ of FIG. 6 is smaller than the mean of the second graph.
If the difference between the evaluation parameter and the preset threshold is too large and many variance values S_i in the broken-line graph lie above the variance line S̄, the imaging stability of this camera model is poor and may strongly affect the measurement precision and measurement stability of a high-precision measurement task; a camera with better performance should be further selected. If the difference between the evaluation parameter and the preset threshold is small and few variance values S_i lie above the variance line, the imaging stability of the camera is good and its influence on high-precision measurement results is small; this camera model is recommended. If the difference between the evaluation parameter and the preset threshold is large but few variance values S_i lie above the variance line — most points sit close to the line while a few isolated points jump strongly — the cause may be electromagnetic interference, temperature, the light source, or similar factors. In that case, repeatedly acquire images of the target object, select a new region of interest (ROI), and repeat the measurement, observing whether the phenomenon of a large evaluation-parameter difference with few points above the variance line persists. If it does not, the imaging stability of the camera is not in question and this model can be selected; otherwise, a camera with better performance should be further selected.
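The visual reading of the broken-line graph — how many variance values sit above the mean-variance line — can be approximated numerically. This is my own numeric proxy for the check described in the text, not a procedure the patent specifies:

```python
def points_above_mean(variances):
    """Count how many variances S_i lie above the variance line S-bar,
    approximating the visual line-graph check described in the text."""
    s_mean = sum(variances) / len(variances)
    return sum(1 for s in variances if s > s_mean)

# One isolated jump above an otherwise flat variance curve.
count = points_above_mean([1.0, 1.0, 1.0, 5.0])
```

A large count suggests broadly poor stability, while a small count combined with a large evaluation parameter points to isolated disturbances (e.g. interference) rather than the camera itself.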
In order to further ensure the accuracy of the stability evaluation result, the method may further comprise the following steps: generating a plurality of stability evaluation results of the camera according to the target objects with various shapes; determining the stability of the camera according to the plurality of stability evaluation results.
To avoid the shape of the photographed target object influencing the final evaluation result, target objects of various shapes can be photographed and a stability evaluation result generated separately for each shape. The condition for the camera's final stability can then be set as: the stability evaluation results generated for every shape of target object are all qualified.
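The "all shapes must pass" condition reduces to a conjunction over the per-shape results; a minimal sketch (the dict structure and shape names are illustrative assumptions):

```python
def overall_stability(results_by_shape):
    """The camera passes only if every per-shape evaluation passed.
    results_by_shape: dict mapping shape name -> bool (True = qualified)."""
    return all(results_by_shape.values())

ok = overall_stability({"circle": True, "square": True})
bad = overall_stability({"circle": True, "square": False})
```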
The camera stability evaluation method, device and electronic apparatus provided by the invention perform edge detection on N continuously acquired frames to extract the edge contour map of the photographed target, and calculate the variance of the Y-axis coordinate values of the same edge point across the N frames. Since variance measures the fluctuation of a data sample, the calculated variance of the Y-axis coordinate values reflects the fluctuation of the images acquired by the camera and, further, the stability of the camera. The embodiments of the invention do not depend on special hardware devices and have a wide application range. Compared with evaluation methods based on pixel-value changes at acquired image pixels, the statistical meaning of the variance explains the problem intuitively and effectively and provides a more accurate evaluation result. Compared with methods based on the transformation of corner-point distributions, the embodiments avoid the false and missed detections of corner detection and improve the accuracy of camera stability evaluation. The invention can be used in high-precision industrial measurement tasks and provides technical support for selecting a camera with good imaging quality, stability, reliability and excellent performance.
An embodiment of the present invention further provides a camera stability evaluation device, referring to a schematic structural diagram of the camera stability evaluation device shown in fig. 8, the device includes:
an obtaining module 81, configured to obtain continuous multi-frame edge contour images; the multi-frame edge contour images all comprise edge contours of target objects shot by a camera; the calculation module 82 is used for determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour image; an evaluation module 83 for evaluating the stability of the camera based on the variance.
A computing module specifically configured to: respectively selecting areas with the same pixel coordinates in the multi-frame edge profile image to obtain a plurality of target areas; a plurality of target regions each including a partial edge contour of a target object; respectively generating a group of horizontal coordinates in each target area, and generating a corresponding group of vertical coordinates according to the horizontal coordinates to determine a group of edge points in each target area; wherein, the abscissa of the corresponding edge point in different groups is the same; and calculating the variance of the ordinate of the edge point with the same abscissa.
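Extracting one edge point per abscissa from a target region (ROI) of the edge contour image can be sketched as below. The "first edge pixel in each column" convention is one simple choice of mine; the patent does not fix how the ordinate is generated from the abscissa:

```python
def edge_points_in_roi(edge_map, x_coords):
    """edge_map: 2D list of 0/1 for one target region (1 = edge pixel).
    For each abscissa x, take the ordinate of the first edge pixel found
    in that column as the edge point. Returns a list of (x, y) pairs."""
    points = []
    for x in x_coords:
        for y, row in enumerate(edge_map):
            if row[x] == 1:
                points.append((x, y))
                break  # one edge point per abscissa
    return points

# A tiny 3x3 region whose edge contour steps down from row 1 to row 2.
roi = [[0, 0, 0],
       [1, 0, 0],
       [0, 1, 1]]
pts = edge_points_in_roi(roi, [0, 1, 2])
```

Running this per frame with the same pixel coordinates yields the N groups of M edge points whose ordinates are then compared across frames.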
Calculating the average value of the vertical coordinates of each group of edge points corresponding to the horizontal coordinates; and calculating the variance corresponding to the abscissa according to the average value.
Calculating the average value of the ordinates of each group of edge points corresponding to an abscissa according to the following formula:

ȳ_i = (1/N) Σ_{j=1}^{N} y_i^(j)

wherein ȳ_i represents the average value of the respective ordinates of the edge points i having the same abscissa in the respective target regions, i = 1, 2, …, M, M is the number of edge points included in a group of edge points corresponding to one target region, y_i^(j) represents the ordinate of the edge point i having the same abscissa in target region j, j = 1, 2, …, N, and N is the number of target regions. The variance is calculated according to the following formula:

S_i = (1/N) Σ_{j=1}^{N} (y_i^(j) − ȳ_i)²

wherein S_i represents the variance of the respective ordinates of edge point i, and ȳ_i represents the average value of the respective ordinates of the edge points i having the same abscissa in the respective target regions.
An acquisition module specifically configured to: acquiring continuous multi-frame initial images shot by a camera; and respectively carrying out edge detection on the continuous initial images to obtain continuous multi-frame edge profile images.
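In practice the per-frame edge detection would typically use an established operator such as Canny (e.g. OpenCV's cv2.Canny). As a dependency-free stand-in, a minimal gradient-threshold sketch (the threshold value and function names are illustrative assumptions):

```python
def simple_edge_map(gray, threshold=50):
    """Very small stand-in for a real edge detector: mark a pixel as an
    edge when the horizontal or vertical intensity difference to its
    right/lower neighbour exceeds a threshold. gray: 2D list of intensities."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])  # horizontal difference
            gy = abs(gray[y + 1][x] - gray[y][x])  # vertical difference
            if max(gx, gy) > threshold:
                edges[y][x] = 1
    return edges

# A frame with a sharp vertical intensity step produces a vertical edge.
frame = [[0, 0, 255],
         [0, 0, 255],
         [0, 0, 255]]
em = simple_edge_map(frame)
```

Applying the detector to each of the N consecutive frames yields the continuous multi-frame edge contour images used by the computing module.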
An evaluation module specifically configured to: determining a maximum variance value, a minimum variance value and a variance average value according to the variance; and generating a stability evaluation result of the camera according to the maximum variance value, the minimum variance value and the variance average value.
Determining the maximum variance value, the minimum variance value and the variance average value according to the variance by the following formulas:

S_max = Max{S_1, S_2, …, S_M}, S_min = Min{S_1, S_2, …, S_M},

S̄ = (1/M) Σ_{i=1}^{M} S_i

wherein S_max denotes the maximum variance value, S_min denotes the minimum variance value, S̄ denotes the variance average value, and M denotes the number of edge points in each target region. The absolute value of the difference between the maximum variance value and the variance average value, or the absolute value of the difference between the minimum variance value and the variance average value, is taken as the evaluation parameter; whether the evaluation parameter is larger than a preset threshold is then judged, and if so, the camera stability is determined to be unqualified; if not, the camera stability is determined to be qualified.
Referring to another schematic structural diagram of the camera stability evaluation apparatus shown in fig. 9, the apparatus may further include a lifting module 84, which is configured to: generating a plurality of stability evaluation results of the camera according to the target objects with various shapes; determining the stability of the camera according to the plurality of stability evaluation results.
The camera stability evaluation device provided by the embodiment of the invention has the same implementation principle and technical effect as the camera stability evaluation method embodiment, and for brief description, reference may be made to the corresponding content in the method embodiment for which no mention is made in the device embodiment.
An embodiment of the present invention further provides an electronic device, referring to the schematic block diagram of the structure of the electronic device shown in fig. 10, where the electronic device includes a memory 91 and a processor 92, the memory stores a computer program that can be executed on the processor, and the processor executes the computer program to implement any of the steps of the above methods.
The electronic device provided by the embodiment of the invention has the same technical characteristics as the camera stability evaluation method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
It is clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the electronic device described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Embodiments of the present invention also provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform any of the steps of the above-described method.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A camera stability evaluation method is characterized by comprising the following steps:
acquiring continuous multi-frame edge contour images; the multi-frame edge contour images all comprise edge contours of target objects shot by a camera;
determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour image;
evaluating the stability of the camera according to the size of the variance;
determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour image, including:
respectively selecting areas with the same pixel coordinates in the multi-frame edge contour image to obtain a plurality of target areas, each of the plurality of target areas including a partial edge contour of the target object;
respectively generating a group of horizontal coordinates in each target area, and generating a corresponding group of vertical coordinates according to the horizontal coordinates to determine a group of edge points in each target area; wherein, the abscissa of the corresponding edge point in different groups is the same;
and calculating the variance of the ordinate of the edge point with the same abscissa.
2. The method of claim 1, wherein the step of obtaining the plurality of consecutive frames of edge contour images comprises:
acquiring continuous multi-frame initial images shot by a camera;
and respectively carrying out edge detection on the continuous initial images to obtain continuous multi-frame edge profile images.
3. The method of claim 1, wherein calculating a variance of the ordinate of the edge points having the same abscissa comprises:
calculating the average value of the vertical coordinates of each group of edge points corresponding to the horizontal coordinates;
and calculating the variance corresponding to the abscissa according to the average value.
4. The method of claim 1, comprising:
calculating the average value of the ordinates of each group of edge points corresponding to an abscissa according to the following formula:

ȳ_i = (1/N) Σ_{j=1}^{N} y_i^(j)

wherein ȳ_i represents the average value of the respective ordinates of the edge points i having the same abscissa in each of the target regions, i = 1, 2, …, M, M is the number of edge points included in a group of edge points corresponding to one target region, y_i^(j) represents the ordinate of the edge point i having the same abscissa in target region j, j = 1, 2, …, N, and N indicates the number of target regions;

the variance is calculated according to the following formula:

S_i = (1/N) Σ_{j=1}^{N} (y_i^(j) − ȳ_i)²

wherein S_i represents the variance of the respective ordinates of edge point i, and ȳ_i represents the average value of the respective ordinates of the edge points i having the same abscissa in each of the target regions.
5. The method of claim 1, wherein evaluating camera stability from the variance comprises:
determining a maximum variance value, a minimum variance value and a variance average value according to the variance;
and generating a stability evaluation result of the camera according to the maximum variance value, the minimum variance value and the variance average value.
6. The method of claim 5, comprising:
determining the maximum variance value, the minimum variance value and the variance average value according to the variance by the following formulas:

S_max = Max{S_1, S_2, …, S_M}, S_min = Min{S_1, S_2, …, S_M},

S̄ = (1/M) Σ_{i=1}^{M} S_i

wherein S_max denotes the maximum variance value, S_min denotes the minimum variance value, S̄ denotes the variance average value, and M denotes the number of edge points in each target region;
generating a stability evaluation result of the camera according to the maximum variance value, the minimum variance value and the variance average value, wherein the stability evaluation result comprises the following steps:
taking the absolute value of the difference between the maximum variance value and the mean variance value or the absolute value of the difference between the minimum variance value and the mean variance value as an evaluation parameter;
judging whether the evaluation parameter is larger than a preset threshold value,
if yes, determining that the camera stability is unqualified;
and if not, determining that the camera stability is qualified.
7. A camera stability evaluation device, comprising:
the acquisition module is used for acquiring continuous multi-frame edge contour images; the multi-frame edge contour images all comprise edge contours of target objects shot by a camera;
the calculation module is used for determining the variance of the vertical coordinates of the same edge point in the multi-frame edge contour image;
the evaluation module is used for evaluating the stability of the camera according to the variance;
the calculation module is specifically configured to:
respectively selecting areas with the same pixel coordinates in the multi-frame edge contour image to obtain a plurality of target areas, each of the plurality of target areas including a partial edge contour of the target object;
respectively generating a group of horizontal coordinates in each target area, and generating a corresponding group of vertical coordinates according to the horizontal coordinates to determine a group of edge points in each target area; wherein, the abscissa of the corresponding edge point in different groups is the same;
and calculating the variance of the ordinate of the edge point with the same abscissa.
8. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method of any of claims 1 to 6 when executing the computer program.
9. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1-6.
CN201910535068.2A 2019-06-20 2019-06-20 Camera stability evaluation method and device Expired - Fee Related CN110225335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910535068.2A CN110225335B (en) 2019-06-20 2019-06-20 Camera stability evaluation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910535068.2A CN110225335B (en) 2019-06-20 2019-06-20 Camera stability evaluation method and device

Publications (2)

Publication Number Publication Date
CN110225335A CN110225335A (en) 2019-09-10
CN110225335B true CN110225335B (en) 2021-01-12

Family

ID=67814193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910535068.2A Expired - Fee Related CN110225335B (en) 2019-06-20 2019-06-20 Camera stability evaluation method and device

Country Status (1)

Country Link
CN (1) CN110225335B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110955243B (en) * 2019-11-28 2023-10-20 新石器慧通(北京)科技有限公司 Travel control method, apparatus, device, readable storage medium, and mobile apparatus
CN111246100B (en) * 2020-01-20 2022-03-18 Oppo广东移动通信有限公司 Anti-shake parameter calibration method and device and electronic equipment
CN113365047B (en) * 2021-08-10 2021-11-02 苏州维嘉科技股份有限公司 Method and device for detecting repeated target-grabbing precision of camera and camera system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101739693A (en) * 2009-12-11 2010-06-16 中兴通讯股份有限公司 Motion image display method and device
CN101895783A (en) * 2009-05-18 2010-11-24 华晶科技股份有限公司 Detection device for stability of digital video camera and digital video camera
CN105139404A (en) * 2015-08-31 2015-12-09 广州市幸福网络技术有限公司 Identification camera capable of detecting photographing quality and photographing quality detecting method

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN103049911B (en) * 2012-12-20 2015-07-29 成都理想境界科技有限公司 Contour detecting stability judging method and image search method
US9784576B2 (en) * 2015-12-28 2017-10-10 Automotive Research & Test Center Calibration method for merging object coordinates and calibration board device using the same
CN105933698A (en) * 2016-04-14 2016-09-07 吴本刚 Intelligent satellite digital TV program play quality detection system
CN105844651A (en) * 2016-04-14 2016-08-10 吴本刚 Image analyzing apparatus
US11025887B2 (en) * 2017-02-27 2021-06-01 Sony Corporation Field calibration of stereo cameras with a projector

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101895783A (en) * 2009-05-18 2010-11-24 华晶科技股份有限公司 Detection device for stability of digital video camera and digital video camera
CN101739693A (en) * 2009-12-11 2010-06-16 中兴通讯股份有限公司 Motion image display method and device
CN105139404A (en) * 2015-08-31 2015-12-09 广州市幸福网络技术有限公司 Identification camera capable of detecting photographing quality and photographing quality detecting method

Also Published As

Publication number Publication date
CN110225335A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
US9621793B2 (en) Information processing apparatus, method therefor, and measurement apparatus
CN110225335B (en) Camera stability evaluation method and device
JP2021184307A (en) System and method for detecting lines with vision system
US10311591B2 (en) Displacement detecting apparatus and displacement detecting method
JP6632288B2 (en) Information processing apparatus, information processing method, and program
KR102073468B1 (en) System and method for scoring color candidate poses against a color image in a vision system
US10181202B2 (en) Control apparatus, robot, and control method
CN112033965A (en) 3D arc surface defect detection method based on differential image analysis
JP2013542528A (en) Night scene image blur detection system
WO2009085173A1 (en) System and method for performing multi-image training for pattern recognition and registration
JP2020087312A (en) Behavior recognition device, behavior recognition method, and program
CN113781393B (en) Screen defect detection method, device, equipment and storage medium
JP7127046B2 (en) System and method for 3D profile determination using model-based peak selection
US9478032B2 (en) Image monitoring apparatus for estimating size of singleton, and method therefor
JP2015148895A (en) object number distribution estimation method
US10728476B2 (en) Image processing device, image processing method, and image processing program for determining a defective pixel
WO2013088199A1 (en) System and method for estimating target size
CN112734858A (en) Binocular calibration precision online detection method and device
JP6818263B2 (en) Fracture surface analysis device and fracture surface analysis method
CN115184362A (en) Rapid defect detection method based on structured light projection
Ali et al. Vision based measurement system for gear profile
CN116051390B (en) Motion blur degree detection method and device
JP2019192048A (en) Imaging apparatus
JP7405362B2 (en) Concrete structure diagnosis system, concrete structure diagnosis method and program
RU2351091C2 (en) Method of automatic detection and correction of radial distortion on digital images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210112

Termination date: 20210620