CN116708756A - Sensor accuracy detection method, detection device, electronic device, and storage medium


Info

Publication number
CN116708756A
Authority
CN
China
Prior art keywords
depth
interval
pixel
sensor
image
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202310730272.6A
Other languages
Chinese (zh)
Inventor
肖建强 (Xiao Jianqiang)
程伟 (Cheng Wei)
陈刚 (Chen Gang)
Current Assignee (the listed assignees may be inaccurate)
Hitachi Elevator China Co Ltd
Hitachi Building Technology Guangzhou Co Ltd
Original Assignee
Hitachi Elevator China Co Ltd
Hitachi Building Technology Guangzhou Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hitachi Elevator China Co Ltd, Hitachi Building Technology Guangzhou Co Ltd filed Critical Hitachi Elevator China Co Ltd
Priority to CN202310730272.6A
Publication of CN116708756A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection


Abstract

The application relates to a method, a device, an electronic device, and a storage medium for detecting the precision of a sensor. The method comprises the following steps: acquiring depth images of a subject captured by a sensor at a plurality of shooting distances; determining depth interval information for the pixel points in each depth image based on that image's reference pixel depth and depth error threshold, where the depth interval information comprises a plurality of depth statistical intervals and their interval weights; determining the pixel precision of each depth image based on the pixel point number distribution of each depth statistical interval and the corresponding interval weight, where the pixel precision characterizes the degree to which the pixel depths of the pixel points in the depth image deviate from the reference pixel depth; and fusing the pixel precision of the depth images to obtain a precision detection result of the sensor when photographing the subject. By adopting this method, both the accuracy and the efficiency of sensor precision detection can be improved.

Description

Sensor accuracy detection method, detection device, electronic device, and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to a method for detecting the accuracy of a sensor, an apparatus for detecting the accuracy of a sensor, an electronic device, and a computer-readable storage medium.
Background
An image sensor is an image-capturing device that converts light into electrical signals using a photosensitive semiconductor material that reacts to light. With the development of the automotive, medical, computer, and communication industries, there is an increasing demand for high-precision image sensors in fields such as smartphones, digital cameras, game consoles, the Internet of Things, robots, security cameras, and miniature medical cameras.
Before an image sensor is put to use, its shooting precision needs to be determined first, i.e., the difference between the pixel depths of images captured by the sensor and the expected depths, so that subsequent applications can take the sensor's shooting precision into account. In current precision detection of image sensors, shooting precision is generally detected with a detection tool (such as a calibration workpiece). However, because such tools are relatively complex to use and their detection accuracy gradually degrades with repeated use, the accuracy of image sensor detection is low, which affects the sensor's subsequent functional applications.
Disclosure of Invention
The present disclosure provides a precision detection method of a sensor, a precision detection apparatus of a sensor, an electronic device, a computer-readable storage medium, and a computer program product to at least solve the problem of low accuracy of precision detection of an image sensor in the related art. The technical scheme of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method for detecting accuracy of a sensor, including:
acquiring depth images of a shot object shot by a sensor at a plurality of shooting distances;
determining depth interval information for each pixel point in the depth image based on the respective reference pixel depth and depth error threshold value of each depth image; the depth interval information comprises a plurality of depth statistical intervals and interval weights of the depth statistical intervals, wherein the depth statistical intervals are depth intervals to which actual pixel depths of the pixel points belong;
determining the pixel precision of each depth image based on the pixel point quantity distribution of each depth statistical interval and the interval weight corresponding to the depth statistical interval; the pixel precision degree represents the degree of deviation of the pixel depth of each pixel point in the depth image from the reference pixel depth;
and fusing the pixel precision of each depth image to obtain a precision detection result of the sensor when photographing the subject.
In an exemplary embodiment, the fusing the pixel precision of each depth image to obtain a precision detection result of the sensor when shooting the object includes:
acquiring a distance weight preset for each shooting distance;
and carrying out weighted summation processing on the pixel precision of the depth image shot by the sensor at each shooting distance and the corresponding distance weight to obtain a precision detection result of the sensor.
In an exemplary embodiment, the pixel number distribution is characterized by a ratio of the number of pixels included in the depth statistics interval to the number of all pixels included in the depth image;
the determining the pixel precision of each depth image based on the pixel point number distribution of each depth statistical interval and the interval weight corresponding to the depth statistical interval comprises the following steps:
and carrying out weighted summation processing on the interval weight of each depth statistical interval and the corresponding pixel quantity proportion aiming at each depth image, and determining the pixel precision of the depth image.
In an exemplary embodiment, the depth images photographed by the sensor for the subject at each of the photographing distances are at least two consecutively photographed depth images;
before said determining the pixel precision of each of said depth images, further comprising:
determining, for each depth image, a proportion of the number of pixels included in each depth statistical interval to the number of all pixels included in the corresponding depth image;
and determining an average value between the proportions corresponding to the depth statistical intervals in at least two continuously shot depth images shot at each shooting distance, and representing the pixel point number distribution of the depth statistical intervals based on the average value.
In an exemplary embodiment, the determining depth interval information for each pixel point in the depth image based on the respective reference pixel depth and the depth error threshold of each depth image includes:
determining an initial depth interval for each of the depth images based on the reference pixel depth and the depth error threshold;
equally dividing the initial depth interval into a preset number of first class subintervals; and
determining a depth range greater than an upper limit of the initial depth interval as a second class subinterval; and
determining a depth range smaller than a lower limit of the initial depth interval as a third type subinterval;
and taking the first class subinterval, the second class subinterval and the third class subinterval as depth statistical intervals.
In an exemplary embodiment, the determining depth interval information for each pixel point in the depth image based on the respective reference pixel depth and the depth error threshold of each depth image includes:
dividing the normal distribution interval of the preset normal distribution function equally based on each depth statistical interval to obtain a plurality of subclass distribution intervals respectively corresponding to each depth statistical interval;
for each of the sub-class distribution intervals, an interval weight of the depth statistical interval corresponding to the sub-class distribution interval is determined based on the definite integral of the normal distribution function over the sub-class distribution interval.
In an exemplary embodiment, the determining the interval weight of the depth statistical interval corresponding to the sub-class distribution interval based on the definite integral of the normal distribution function over the sub-class distribution interval includes the following two cases:
multiplying the definite integral over the sub-class distribution interval corresponding to the first class subinterval by a first preset value to obtain a first arithmetic value, and taking the first arithmetic value as the interval weight of the first class subinterval;
multiplying the definite integrals over the sub-class distribution intervals corresponding to the second class subinterval and the third class subinterval respectively by a second preset value to obtain second arithmetic values, and taking the second arithmetic values as the interval weights of the second class subinterval and the third class subinterval respectively;
wherein the first preset value is a positive number and the second preset value is a negative number.
According to a second aspect of the embodiments of the present disclosure, there is provided an accuracy detecting device of a sensor, including:
an image acquisition unit configured to perform acquisition of depth images of a subject photographed by a sensor at a plurality of photographing distances;
a depth interval unit configured to perform determination of depth interval information for each pixel point in the depth image based on a respective reference pixel depth and a depth error threshold value for each of the depth images; the depth interval information comprises a plurality of depth statistical intervals and interval weights of the depth statistical intervals, wherein the depth statistical intervals are depth intervals to which actual pixel depths of the pixel points belong;
A pixel precision unit configured to perform determination of pixel precision of each of the depth images based on a pixel point number distribution of each of the depth statistical intervals and an interval weight corresponding to the depth statistical interval; the pixel precision degree represents the degree of deviation of the pixel depth of each pixel point in the depth image from the reference pixel depth;
and a precision detection unit configured to perform fusion of pixel precision of each of the depth images, and obtain a precision detection result of the sensor when photographing the subject.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the accuracy detection method of the sensor as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, in which a computer program is included, which, when executed by a processor of an electronic device, enables the electronic device to perform a method of accuracy detection of a sensor as described in any one of the above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising program instructions therein, which when executed by a processor of an electronic device, enable the electronic device to perform the method of accuracy detection of a sensor as described in any one of the above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
firstly, depth images of a subject captured by a sensor at a plurality of shooting distances are acquired; depth interval information is determined for the pixel points in each depth image based on that image's reference pixel depth and depth error threshold, where the depth interval information comprises a plurality of depth statistical intervals and their interval weights, and a depth statistical interval is the depth interval to which the actual pixel depths of pixel points belong; the pixel precision of each depth image is determined based on the pixel point number distribution of each depth statistical interval and the corresponding interval weight, where the pixel precision characterizes the degree to which the pixel depths of the pixel points in the depth image deviate from the reference pixel depth; and the pixel precision of the depth images is fused to obtain a precision detection result of the sensor when photographing the subject. In this way, on the one hand, the pixel precision of each depth image is determined from the depth interval information of depth images captured by the sensor at a plurality of shooting distances, and the pixel precision values are then fused into the sensor's precision detection result, which streamlines the sensor precision detection process and reduces the consumption of manpower and material resources; on the other hand, unlike prior-art approaches that detect sensor precision with a detection tool, the pixel precision of a depth image is determined from the pixel point number distribution and interval weight of each depth statistical interval in the image, and the sensor's precision detection result is obtained from the pixel precision of the depth images, which effectively avoids the impact that wear of a detection workpiece would have on sensor precision detection, improves the accuracy of sensor precision detection, speeds up detection, and benefits the sensor's subsequent functional applications.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is an application environment diagram illustrating a method of accuracy detection of a sensor according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of accuracy detection of a sensor according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a process for determining interval weights for a depth statistics interval, according to an exemplary embodiment;
FIG. 4 is a flow chart diagram illustrating a method of angle correction for a sensor, according to an exemplary embodiment;
FIG. 5 is a flow chart illustrating a correction step for a photographing angle of a sensor according to an exemplary embodiment;
FIG. 6 is a schematic diagram of an interface showing a depth image, according to an example embodiment;
FIG. 7 is a block diagram of a precision detection device of a sensor according to an exemplary embodiment;
FIG. 8 is a block diagram of an electronic device for accuracy detection of a sensor, according to an example embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The terms "first," "second," and the like in this disclosure are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, although the terms "first," "second," etc. may be used multiple times to describe various operations (or various thresholds or various applications or various instructions or various elements), etc., these operations (or thresholds or applications or instructions or elements) should not be limited by these terms. These terms are only used to distinguish one operation (or threshold or application or instruction or element) from another operation (or threshold or application or instruction or element).
The method for detecting the accuracy of the sensor provided by the embodiment of the application is applied to electronic equipment, and the electronic equipment can comprise a terminal 102 and/or a server 104. In one embodiment, the electronic device may be used in an application environment such as that shown in FIG. 1. Wherein the terminal 102 communicates with the server 104 via a communication network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server.
In some embodiments, referring to fig. 1, the server 104 first acquires depth images of a subject photographed by a sensor at a plurality of photographing distances; then, the server 104 determines depth interval information for each pixel point in the depth image based on the reference pixel depth and the depth error threshold value of each depth image; the depth interval information comprises a plurality of depth statistical intervals and interval weights of the depth statistical intervals, wherein the depth statistical intervals are depth intervals to which the actual pixel depths of the pixel points belong; then, the server 104 determines the pixel precision of each depth image based on the pixel point number distribution of each depth statistic interval and the interval weight of the corresponding depth statistic interval; the pixel precision degree represents the degree of deviation of the pixel depth of each pixel point in the depth image from the reference pixel depth; finally, the server 104 fuses the pixel precision of each depth image to obtain a precision detection result of the sensor when shooting the shot object.
In some embodiments, the terminal 102 (e.g., a mobile or fixed terminal) may be implemented in various forms. The terminal 102 may be a mobile terminal such as a mobile phone, a smartphone, a notebook computer, a portable handheld device, a personal digital assistant (PDA, Personal Digital Assistant), or a tablet (PAD), or the terminal 102 may be a fixed terminal such as an automated teller machine (ATM, Automated Teller Machine), an automatic all-in-one kiosk, a digital TV, or a desktop computer.
In the following, it is assumed that the terminal 102 is a fixed terminal. However, those skilled in the art will appreciate that the configuration according to the disclosed embodiments of the present application can also be applied to a mobile type terminal 102 if there are operations or elements specifically for the purpose of movement.
In some embodiments, the data processing components running on the server 104 may load and execute any of a variety of additional server applications and/or middle-tier applications, including, for example, HTTP (hypertext transfer protocol) servers, FTP (file transfer protocol) servers, CGI (common gateway interface) servers, RDBMS (relational database management system) servers, and the like.
In some embodiments, the server 104 may be implemented as a stand-alone server or as a cluster of servers. The server 104 may be adapted to run one or more application services or software components serving the terminal 102 described in the foregoing disclosure.
In some embodiments, the operating system on which the APP or client runs may include various versions of Microsoft Windows, Apple macOS, and/or Linux, various commercial or UNIX-like operating systems (including but not limited to the various GNU/Linux operating systems, Google Chrome OS, etc.), and/or mobile operating systems such as iOS and Android, as well as other online or offline operating systems, which are not specifically limited herein.
In some embodiments, as shown in fig. 2, a method for detecting accuracy of a sensor is provided, and the method is applied to an electronic device for explanation, and includes the following steps:
step S11: depth images of a subject photographed by a sensor at a plurality of photographing distances are acquired.
In an embodiment, the depth image may also be referred to as a range image, i.e., an image whose pixel values are the distances (depths) from the sensor to points in the scene. Optionally, methods by which the sensor may capture a depth image of the subject include lidar depth imaging, stereo vision, coordinate measuring machines, moiré fringe methods, structured light, and the like, without specific limitation. The depth image directly reflects the geometry of the visible surface of the subject; here, the subject is a planar object whose geometry is a planar rectangle.
In an embodiment, the sensor may be any sensor capable of capturing depth images, such as a ToF (Time of Flight) sensor, a 3D camera, or a binocular camera, without specific limitation.
In one embodiment, the sensor may capture a plurality of consecutive depth images at a plurality of capture distances from the subject, respectively.
For example, the sensor captures 100 consecutive frames of depth images at 2 meters from the subject, 100 consecutive frames at 1.5 meters, 100 consecutive frames at 1 meter, and 100 consecutive frames at 0.5 meters, yielding 400 depth images in total.
Step S12: depth interval information for each pixel point in the depth image is determined based on the respective reference pixel depth and depth error threshold for each depth image.
In an embodiment, the reference pixel depth of each pixel point in the depth image is a standard parameter of the depth image, and under this standard the reference pixel depth of every pixel point is the same.
In one embodiment, the depth error threshold characterizes the tolerance of the actual pixel depth of the pixel point to deviate from the reference pixel depth. The actual pixel depth is the pixel depth of the depth image shot by the sensor.
The depth error threshold may be a nominal error threshold configured in advance by a sensor manufacturer for a depth image captured by the sensor.
As an example, when the shooting distance between the sensor and the planar object is 2 meters, the configured nominal error threshold Err_range is 2%, i.e., the actual pixel depth of a normal depth image captured by the sensor at 2 meters deviates from its reference pixel depth by no more than 2%; when the shooting distance between the sensor and the planar object is 1 meter, the configured nominal error threshold Err_range is 1%, i.e., the actual pixel depth of a normal depth image captured by the sensor at 1 meter deviates from its reference pixel depth by no more than 1%.
In an embodiment, the depth interval information includes a plurality of depth statistical intervals and interval weights of the depth statistical intervals, wherein the depth statistical intervals are depth intervals to which actual pixel depths of the pixel points belong.
In one embodiment, the interval weight of the depth statistics interval is configured by an engineer based on the number of each depth statistics interval.
In an embodiment, the electronic device determines depth interval information for each pixel point in the depth image, including the following steps:
step one: for each depth image, an initial depth interval is determined based on the reference pixel depth and the depth error threshold.
The electronic device adds to and subtracts from the reference pixel depth corresponding to the depth image the product of the reference pixel depth and the depth error threshold, respectively, to obtain the initial depth interval for the pixel points in the depth image.
For example, when the depth error threshold is 2% and the reference pixel depth corresponding to the depth image is 1000, the upper limit of the initial depth interval of each pixel point in the depth image is 1000 + 1000 × 2% = 1020, and the lower limit is 1000 − 1000 × 2% = 980.
Step two: equally dividing the initial depth interval into a preset number of first class subintervals; determining the depth range greater than the upper limit of the initial depth interval as a second class subinterval; and determining the depth range smaller than the lower limit of the initial depth interval as a third class subinterval.
The preset number of first-class subintervals is the number of intervals configured by engineers, and the preset number of first-class subintervals can be 3, 4, 5 and the like. In a preferred embodiment, the number of the preset number of first-type subintervals is an odd number greater than or equal to 5.
Step three: and taking the first class subinterval, the second class subinterval and the third class subinterval as depth statistical intervals.
As an example, if the preset number of first class subintervals is 5, the electronic device divides the initial depth interval (980, 1020) into 5 equal parts, obtaining 5 first class subintervals: interval 1 (980, 988), interval 2 (988, 996), interval 3 (996, 1004), interval 4 (1004, 1012), and interval 5 (1012, 1020). The electronic device then determines the depth range of interval 6 (1020, +∞) as the second class subinterval and the depth range of interval 7 (−∞, 980) as the third class subinterval. Further, in an embodiment, the electronic device takes intervals 1 to 7, i.e., all 7 intervals, as the depth statistical intervals.
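To make the interval construction concrete, the following is a minimal Python sketch of the procedure just described; the function name, the parameterization, and the treatment of shared boundaries as half-open are illustrative assumptions rather than anything specified in the application.

```python
import math

def build_depth_statistic_intervals(ref_depth, err_threshold, split_span=5):
    """Build the depth statistical intervals for one depth image.

    ref_depth:     reference pixel depth of the depth image (e.g. 1000)
    err_threshold: depth error threshold (e.g. 0.02 for 2%)
    split_span:    number of equal first class subintervals (odd and >= 5 preferred)
    """
    err = ref_depth * err_threshold
    low, high = ref_depth - err, ref_depth + err        # initial depth interval
    step = (high - low) / split_span
    # First class subintervals: equal parts of the initial depth interval.
    first_class = [(low + i * step, low + (i + 1) * step) for i in range(split_span)]
    second_class = (high, math.inf)                     # depths above the upper limit
    third_class = (-math.inf, low)                      # depths below the lower limit
    return first_class + [second_class, third_class]

# With ref_depth=1000 and err_threshold=0.02 this reproduces the example above:
# (980, 988), (988, 996), (996, 1004), (1004, 1012), (1012, 1020),
# (1020, inf) and (-inf, 980).
```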
Step S13: and determining the pixel precision of each depth image based on the pixel point quantity distribution of each depth statistical interval and the interval weight of the corresponding depth statistical interval.
In an embodiment, the pixel point number distribution is characterized by the proportion of the number of pixel points falling in a depth statistical interval to the total number of pixel points in the corresponding depth image.
In one embodiment, the electronic device may determine the distribution of the number of pixel points for each depth statistic interval by:
Step one: for each depth image, determining the proportion of the number of pixel points included in each depth statistical interval to the total number of pixel points included in the corresponding depth image.
As an example, suppose the depth image S contains X1 pixel points in total, of which X2 have pixel depths falling in interval 1, X3 in interval 2, X4 in interval 3, X5 in interval 4, X6 in interval 5, X7 in interval 6, and X8 in interval 7. The proportion for interval 1 is then X2/X1, for interval 2 it is X3/X1, for interval 3 it is X4/X1, for interval 4 it is X5/X1, for interval 5 it is X6/X1, for interval 6 it is X7/X1, and for interval 7 it is X8/X1.
Step two: and determining an average value between the proportions corresponding to each depth statistical interval in at least two continuously shot depth images shot at each shooting distance, and representing the pixel point number distribution of the depth statistical interval based on the average value.
As an example, if the sensor captures 100 depth images at each shooting distance, the electronic device computes, over the 100 depth images, the average S1 of the proportion X2/X1 for interval 1, the average S2 of the proportion X3/X1 for interval 2, the average S3 of X4/X1 for interval 3, the average S4 of X5/X1 for interval 4, the average S5 of X6/X1 for interval 5, the average S6 of X7/X1 for interval 6, and the average S7 of X8/X1 for interval 7. The electronic device then uses the averages S1 to S7 as the pixel point number distributions for intervals 1 to 7.
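As a sketch of this averaging, assuming the frames are numpy arrays and `intervals` is a list of (low, high) pairs such as the one built in the earlier sketch (both helper names are hypothetical):

```python
import numpy as np

def interval_proportions(depth_image, intervals):
    # Proportion of pixel points whose depth falls in each depth statistical interval.
    total = depth_image.size
    return np.array([np.count_nonzero((depth_image >= lo) & (depth_image < hi)) / total
                     for lo, hi in intervals])

def averaged_proportions(depth_images, intervals):
    # Mean of the per-frame proportions over the consecutively captured frames,
    # used as the pixel point number distribution of each interval.
    return np.mean([interval_proportions(img, intervals) for img in depth_images], axis=0)
```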
In another preferred embodiment, the electronic device may further determine the distribution of the number of pixels in each depth statistic interval by:
step one: for at least two depth images taken in succession, an average value of pixel depths between respective corresponding pixel points is determined.
For example, the at least two consecutively captured depth images are 100 consecutive frames with a resolution of 480×480, and the electronic device calculates the average pixel depth between corresponding pixel points (e.g., pixel (1, 1), pixel (1, 2), pixel (2, 1), etc.) across the 100 consecutive frames.
Step two: and obtaining a new depth image based on the average value of the pixel depths among the pixel points.
First, an initial depth image is created in which the pixel depth of every pixel point is 0. Then, the calculated average pixel depth of each pixel point is assigned to the corresponding pixel point of the initial depth image, so that each pixel point's depth becomes the corresponding average, yielding a new depth image.
Step three: for the new depth image, determining a proportion value of the number of pixel points included in each depth statistic interval to the number of all pixel points included in the corresponding new depth image, and representing the distribution of the number of pixel points in the depth statistic interval based on the proportion value.
As an example, suppose the new depth image S contains X1 pixel points, of which X2 have pixel depths falling in interval 1, X3 in interval 2, X4 in interval 3, X5 in interval 4, X6 in interval 5, X7 in interval 6, and X8 in interval 7, giving proportion values X2/X1 for interval 1 through X8/X1 for interval 7. The electronic device then takes these proportion values as the pixel point number distributions for intervals 1 to 7.
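A sketch of this alternative, under the same assumptions as before (numpy frames of equal resolution; `interval_proportions` is the hypothetical helper from the previous sketch):

```python
import numpy as np

def proportions_from_mean_image(depth_images, intervals):
    # Average corresponding pixel points over all frames (e.g. 100 frames of
    # 480x480) to form the new depth image, then compute the interval
    # proportions on that single image.
    mean_image = np.mean(np.stack(depth_images), axis=0)
    return interval_proportions(mean_image, intervals)
```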
The pixel precision characterizes the degree to which the pixel depths of the pixel points in the depth image deviate from the reference pixel depth; that is, it can be used as an accuracy score for the depth values of a depth image captured by the sensor in a single scene (i.e., a single-frame scene). A higher depth value accuracy score means a lower degree of deviation of the pixel depths from the reference pixel depth; a lower score means a higher degree of deviation.
In an embodiment, the electronic device determines the pixel precision of each depth image based on the pixel point number distribution of each depth statistical interval and the interval weight of the corresponding depth statistical interval, including: performing, for each depth image, a weighted summation of the interval weight of each depth statistical interval and the corresponding pixel point number proportion to determine the pixel precision of the depth image.
As an example, denote the interval weight of interval 1 as w1 and its pixel point number proportion as O_mean[1], the interval weight of interval 2 as w2 and its proportion as O_mean[2], and so on through interval 7 with weight w7 and proportion O_mean[7]. The pixel precision ToF_score of the depth image can then be expressed as:
ToF_score = w1 · O_mean[1] + w2 · O_mean[2] + w3 · O_mean[3] + w4 · O_mean[4] + w5 · O_mean[5] + w6 · O_mean[6] + w7 · O_mean[7].
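In code, this weighted summation is just a dot product; a one-function sketch under the same assumptions as the earlier sketches:

```python
import numpy as np

def tof_score(interval_weights, proportions):
    # ToF_score = w1*O_mean[1] + ... + w7*O_mean[7]
    return float(np.dot(interval_weights, proportions))
```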
step S14: and fusing the pixel precision of each depth image to obtain a precision detection result of the sensor when the shot object is shot.
In an embodiment, the electronic device determines a precision detection result of the sensor when shooting the subject, including the steps of:
step one: distance weights preset for the shooting distances are obtained.
Step two: and carrying out weighted summation processing on the pixel precision of the depth image shot by the sensor at each shooting distance and the corresponding distance weight to obtain the precision detection result of the sensor.
The engineer can determine distance weights for different scenes (i.e., different shooting distances) according to the frequency of occurrence of, or the demand for, data at different shooting distances in the actual development environment, to be used as weighting values for the sensor's depth value accuracy score. The depth value accuracy score here evaluates the accuracy of depth values in depth images captured by the sensor across multiple scenes (i.e., complex scenes). A higher depth value accuracy score indicates that the depth images captured by the sensor are more accurate; a lower score indicates that they are less accurate.
As an example, suppose the sensor has distance weight W1 at a shooting distance of 0.3 m with corresponding depth image pixel precision ToF_score_0.3m, distance weight W2 at 0.5 m with pixel precision ToF_score_0.5m, distance weight W3 at 1 m with pixel precision ToF_score_1m, and distance weight W4 at 2 m with pixel precision ToF_score_2m. The precision detection result of the sensor can then be expressed as:
ToF_final = W1 · ToF_score_0.3m + W2 · ToF_score_0.5m + W3 · ToF_score_1m + W4 · ToF_score_2m.
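A sketch of this fusion step; the example distance weights in the comment are placeholders, since the application leaves their values to the engineer:

```python
def tof_final(distance_weights, scores):
    # ToF_final = W1*ToF_score_0.3m + W2*ToF_score_0.5m
    #           + W3*ToF_score_1m + W4*ToF_score_2m
    return sum(w * s for w, s in zip(distance_weights, scores))

# e.g. tof_final([0.1, 0.2, 0.3, 0.4],
#                [score_0_3m, score_0_5m, score_1m, score_2m])
```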
In the above process of detecting the precision of the sensor, the server first acquires depth images of the subject captured by the sensor at a plurality of shooting distances; it determines depth interval information for the pixel points in each depth image based on that image's reference pixel depth and depth error threshold, where the depth interval information comprises a plurality of depth statistical intervals and their interval weights, and a depth statistical interval is the depth interval to which the actual pixel depths of pixel points belong; it determines the pixel precision of each depth image based on the pixel point number distribution of each depth statistical interval and the corresponding interval weight, where the pixel precision characterizes the degree to which the pixel depths of the pixel points in the depth image deviate from the reference pixel depth; and it fuses the pixel precision of the depth images to obtain the precision detection result of the sensor when photographing the subject. In this way, on the one hand, the pixel precision of each depth image is determined from the depth interval information of depth images captured at a plurality of shooting distances, and the pixel precision values are then fused into the sensor's precision detection result, which streamlines the sensor precision detection process and reduces the consumption of manpower and material resources; on the other hand, unlike prior-art approaches that detect sensor precision with a detection tool, the pixel precision of a depth image is determined from the pixel point number distribution and interval weight of each depth statistical interval in the image, and the precision detection result is obtained from the pixel precision of the depth images, which effectively avoids the impact that wear of a detection workpiece would have on sensor precision detection, improves the accuracy of sensor precision detection, speeds it up, and benefits the sensor's subsequent functional applications.
In an exemplary embodiment, referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of determining interval weights of a depth statistic interval according to the present application. In step S12, the process of determining the depth interval information for each pixel point in the depth image by the electronic device based on the respective reference pixel depth and the depth error threshold of each depth image may be implemented by:
step S121: and equally dividing a normal distribution interval of a preset normal distribution function based on each depth statistical interval to obtain a plurality of subclass distribution intervals respectively corresponding to each depth statistical interval.
The distribution of depth pixel values of an ideal depth camera should be a normal distribution with the true depth as its mean. Here, a standard normal distribution with mean 0 and standard deviation 1 is used as the model, given by the following formula: f(x) = (1/√(2π)) · e^(−x²/2).
the electronic device corresponds a standard normal distribution interval [ -2, 2) to an initial depth interval of the depth image, and divides the [ -2, 2) interval into equal divisions (split_span) identical to a first sub-interval of the depth image, so as to obtain equal division sub-distribution intervals.
As an example, the initial depth interval of the depth image is (980, 1020), and the corresponding first class subintervals are interval 1 (980, 988), interval 2 (988, 996), interval 3 (996, 1004), interval 4 (1004, 1012), and interval 5 (1012, 1020); correspondingly, dividing the standard normal interval [−2, 2) into 5 parts gives the sub-class distribution intervals interval A [−2, −1.2), interval B [−1.2, −0.4), interval C [−0.4, 0.4), interval D [0.4, 1.2), and interval E [1.2, 2). Further, the electronic device takes interval F (−∞, −2) as the sub-class distribution interval corresponding to the second class subinterval of the depth image, and interval G [2, +∞) as the sub-class distribution interval corresponding to the third class subinterval of the depth image.
Step S122: for each sub-class distribution interval, the interval weight of the depth statistical interval corresponding to that sub-class distribution interval is determined based on the definite integral of the normal distribution function over the sub-class distribution interval.
In an embodiment, the electronic device calculates the definite integral of the normal distribution function over each sub-class distribution interval as the interval weight used in the subsequent calculation of the pixel precision of the depth image.
In one embodiment, the electronic device determines interval weights of the depth statistical intervals corresponding to the sub-class distribution intervals as follows: multiplying the definite integral over the sub-class distribution interval corresponding to the first class subinterval by a first preset value to obtain a first arithmetic value, and taking the first arithmetic value as the interval weight of the first class subinterval.
In another embodiment, the electronic device determines interval weights of the depth statistical intervals corresponding to the sub-class distribution intervals as follows: multiplying the definite integrals over the sub-class distribution intervals corresponding to the second class subinterval and the third class subinterval by a second preset value to obtain second arithmetic values, and taking the second arithmetic values as the interval weights of the second class subinterval and the third class subinterval respectively.
The first preset value is positive and the second preset value is negative; the purpose of this configuration is to apply a corresponding precision penalty in the subsequent calculation of the pixel precision of the depth image.
By way of example, the sub-class distribution intervals corresponding to the first class subintervals of the depth image are interval A [−2, −1.2), interval B [−1.2, −0.4), interval C [−0.4, 0.4), interval D [0.4, 1.2), and interval E [1.2, 2); the sub-class distribution interval corresponding to the second class subinterval is interval F (−∞, −2), and that corresponding to the third class subinterval is interval G [2, +∞). The electronic device first obtains the definite integral of the normal distribution function over each of intervals A to E and multiplies each by the positive number 100, obtaining interval weights of 9.2 for interval A, 23.0 for interval B, 31.1 for interval C, 23.0 for interval D, and 9.2 for interval E. The electronic device also calculates the definite integrals of the normal distribution function over intervals F and G and multiplies them by the negative number −100, obtaining interval weights of −5 for interval F and −5 for interval G.
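The interior weights above can be reproduced from the definite integrals of the standard normal density; the following sketch uses only the Python standard library (function names are illustrative). Note that −100 · Φ(−2) is about −2.3 rather than −5, so the −5 tail weights in the example appear to be configured values of the kind described in the next paragraph.

```python
import math

def normal_cdf(x):
    # Cumulative distribution function of the standard normal distribution, Phi(x).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def first_class_weights(split_span=5, lo=-2.0, hi=2.0, scale=100.0):
    # Definite integral of the standard normal density over each equal part of
    # [lo, hi), multiplied by the positive preset value (here 100).
    step = (hi - lo) / split_span
    return [scale * (normal_cdf(lo + (i + 1) * step) - normal_cdf(lo + i * step))
            for i in range(split_span)]

print([round(w, 1) for w in first_class_weights()])
# -> [9.2, 23.0, 31.1, 23.0, 9.2], matching intervals A to E above.
```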
In other embodiments, the engineer may configure the interval weights of certain sub-class distribution intervals independently, so that corresponding precision reward and penalty processing can be applied in the subsequent calculation of the pixel precision of the depth image. For example, the interval weight of interval F may be directly configured as −50, and that of interval G as −50.
In some embodiments, as shown in fig. 4, a method for correcting an angle of a sensor is provided, where the angle of the sensor needs to be corrected before the electronic device detects the accuracy of the sensor. The electronic device performing angle correction on the sensor comprises the following steps:
step S21: and acquiring a depth image of a planar object shot by the sensor.
Step S22: and determining the depth error degree of each local image based on the reference pixel depth of each pixel point in the depth image and the actual pixel depth of each pixel point in the preset number of local images in the depth image.
A local image is one of the image segments obtained by equally dividing the depth image according to the image area where the planar object is located.
In one embodiment, the average pixel depth of the partial image is the average between the actual pixel depths of the pixels within the corresponding region in the depth image (i.e., the partial image region). The actual pixel depth is the pixel depth of the depth image shot by the sensor.
Step S23: the photographing angle of the sensor is corrected based on the difference between the depth error degrees of the partial images.
In one embodiment, the degree of depth error of the partial image characterizes the difference in pixel depth between the depth image captured by the sensor and the intended image.
As an example, suppose local images X1, X2, X3, and X4 have depth error degrees P1, P2, P3, and P4, respectively. If P1 = P2, the scene areas corresponding to X1 and X2 are at the same shooting distance from the sensor; if P1 > P3 = P4, the scene areas corresponding to X3 and X4 are at the same shooting distance from the sensor, and are closer to the sensor than the scene areas corresponding to X1 and X2. The shooting angle of the sensor can therefore be corrected based on these conclusions.
In this way, on the one hand, the depth error degree of a local image is determined from the reference pixel depth of the depth image and the actual pixel depths of the local image, and the shooting angle of the sensor is then corrected according to the differences between the depth error degrees of the local images, which streamlines the correction procedure and reduces the consumption of manpower and material resources; on the other hand, unlike prior-art approaches that correct the shooting angle of the sensor with detection tooling, correcting the shooting angle according to the differences between the depth error degrees of a preset number of local images in the depth image can effectively improve the accuracy of the correction and increase its efficiency, thereby reducing the sensor's error in subsequent applications.
In an exemplary embodiment, in step S22 of the foregoing embodiment, the process of determining, by the electronic device, the depth error degree of each local image based on the reference pixel depth of each pixel point in the depth image and the actual pixel depth of each pixel point in the preset number of local images in the depth image may be implemented by:
step one: an average pixel depth for each partial image is determined based on the actual pixel depths for the pixel points within each partial image.
In one embodiment, the electronic device calculates the average pixel depth Depth_region as the average of the actual pixel depths of the pixel points within the local image region.
Step two: the depth average difference for each partial image is determined based on the difference between the average pixel depth for each partial image and the reference pixel depth for the depth image.
In one embodiment, the electronic device subtracts the average pixel depth Depth_region from the reference pixel depth Depth_GT of the depth image to obtain the depth average difference Depth_diff of the local image.
Step three: the degree of depth error for each partial image is determined based on a quotient between the average difference in depth for each partial image and the nominal error for the depth image.
Wherein the nominal error characterizes a tolerance to an imaging angle error of the sensor at an imaging distance between the sensor and the planar object.
In an embodiment, before determining the depth error degree of each local image based on the quotient of the local image's depth average difference and the nominal error of the depth image, the electronic device obtains the nominal error of the depth image through the following steps:
step one: a nominal error threshold for the depth image is determined based on the capture distance.
As an example, when the shooting distance between the sensor and the planar object is 2 meters, the preset nominal error threshold Err_range is 2%; when the shooting distance between the sensor and the planar object is 1 meter, the preset nominal error threshold Err_range is 1%.
Step two: the nominal error of the depth image is determined based on the product between the nominal error threshold and the reference pixel depth.
In one embodiment, the electronic device multiplies the reference pixel depth Depth_GT of the depth image by the nominal error threshold Err_range of the sensor to obtain the nominal error Depth_err of the depth image.
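Putting these steps together, a minimal sketch of the depth error degree computation (names are illustrative; `region_depths` is assumed to be a numpy array of the actual pixel depths inside one local image):

```python
import numpy as np

def depth_error_rate(region_depths, ref_depth, err_threshold):
    depth_region = float(np.mean(region_depths))   # average pixel depth of the region
    depth_diff = ref_depth - depth_region          # Depth_diff = Depth_GT - Depth_region
    depth_err = ref_depth * err_threshold          # Depth_err  = Depth_GT * Err_range
    return depth_diff / depth_err                  # Err_rate (may be positive or negative)
```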
In an exemplary embodiment, referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of correcting a shooting angle of a sensor according to the present application. In step S23 of the above embodiment, the electronic device corrects the photographing angle of the sensor based on the depth error degree of each of the partial images, by:
Step S231: based on the depth error level, color space features of each partial image are configured.
In one embodiment, the color space feature is an HSV (Hue, Saturation, Value) feature, which comprises the hue, saturation, and brightness used to display the local image. That is, H characterizes the hue of the image, representing different colors via the color wheel; S characterizes the saturation of the image; and V characterizes the brightness of the image color.
In an embodiment, the depth error degree is characterized by a depth error magnification, the brightness takes values in a preset brightness interval, and the saturation takes values in a preset saturation interval.
In an embodiment, the depth error magnification may be positive or negative. The brightness interval is (0, 100); the color appears black when V = 0, so V = 100 is used here. The saturation interval is (0, 100), where the saturation value is calculated from the depth error degree; saturation represents the shade of the color in HSV space.
In one embodiment, the electronic device configures the hue and brightness of the partial image in two ways:
in the first aspect, when the depth error magnification of the partial image is positive, the tone of the partial image is set to the first primary color and the luminance is set to the upper limit value of the luminance section.
Specifically, when the depth error magnification Err_rate ∈ (0, +∞), the hue H of the local image is configured as the value 110 and the brightness V is fixed at the value 100; with H at the value 110, the local image appears green among the three primary colors.
In the second aspect, when the depth error magnification of the partial image is negative, the tone of the partial image is set to the second primary color and the luminance is set to the upper limit value of the luminance section.
Specifically, when the depth error magnification Err_rate ∈ (−∞, 0), the hue H of the local image is configured as the value 0 or 360 and the brightness V is fixed at the value 100; with H at the value 0 or 360, the local image appears red among the three primary colors.
Wherein the first primary color is one of the three primary colors (i.e., green), and the second primary color is the other of the three primary colors (i.e., red).
In one embodiment, the electronic device configures the saturation of the partial image in four ways:
in the first aspect, when the depth error magnification of the partial image is not less than 0 and not more than 1, the saturation of the partial image is configured as the product of the upper limit of the saturation interval and the depth error magnification.
Specifically, when the depth error magnification Err_rate ∈ [0, 1], the depth error magnification Err_rate is multiplied by the upper limit 100 of the saturation interval to visually represent the saturation of the partial image. When Err_rate = 0, the product with the upper limit 100 is also zero, so the color appears white in HSV space.
In the second aspect, when the depth error magnification of the partial image is not less than -1 and less than 0, the saturation of the partial image is configured as the product of the upper limit of the saturation interval and the absolute value of the depth error magnification.
Specifically, when the depth error magnification Err_rate ∈ [-1, 0), since the saturation S must be positive, the absolute value of Err_rate is multiplied by the upper limit 100 of the saturation interval to visually represent the saturation of the partial image.
In the third aspect, when the depth error magnification of the partial image is greater than 1, the saturation of the partial image is configured as the upper limit of the saturation interval.
Specifically, when the depth error magnification Err_rate ∈ (1, +∞), since the value range of the saturation S is (0, 100), the saturation is configured as the upper limit 100 of the saturation interval.
In the fourth aspect, when the depth error magnification of the partial image is less than -1, the saturation of the partial image is configured as the upper limit of the saturation interval.
Specifically, when the depth error magnification Err_rate ∈ (-∞, -1), since the value range of the saturation S is (0, 100), the saturation S is configured as the upper limit 100 of the saturation interval.
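Taken together, the hue, brightness, and saturation rules above can be read as one small mapping function. The following Python sketch is illustrative only and assumes H is expressed in degrees and S and V on a 0-100 scale; it is not the claimed implementation:

```python
def error_to_hsv(err_rate: float) -> tuple[float, float, float]:
    """Map a depth error magnification Err_rate to an (H, S, V) triple.

    Positive errors are shown in green (H of about 110), negative
    errors in red (H = 0); brightness V is fixed at the upper limit
    100; saturation S scales with |Err_rate| and is capped at 100."""
    hue = 110.0 if err_rate > 0 else 0.0          # green vs. red
    value = 100.0                                 # fixed brightness
    saturation = min(abs(err_rate), 1.0) * 100.0  # zero error -> white
    return hue, saturation, value

# Example: +30% and -10% depth error magnifications.
print(error_to_hsv(0.30))   # (110.0, 30.0, 100.0) - light green
print(error_to_hsv(-0.10))  # (0.0, 10.0, 100.0)  - pale red
```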
Step S232: the depth image is displayed based on the color space features of each partial image.
In an embodiment, referring to fig. 6, fig. 6 is an interface schematic diagram of an embodiment of displaying a depth image according to the present application. As shown in the figure, the partial image X1 in the depth image is displayed by its color space feature P1, and the depth error magnification of the partial image X1 is 30%; the partial image X2 is displayed by its color space feature P2, and the depth error magnification of the partial image X2 is -10%; the partial image X3 is displayed by its color space feature P3, and the depth error magnification of the partial image X3 is 10%; the partial image X4 is displayed by its color space feature P4, and the depth error magnification of the partial image X4 is -30%.
Step S233: the shooting angle of the sensor is corrected based on the displayed depth image and the depth error degree of each partial image.
In one embodiment, the depth error degree of a partial image characterizes the difference in pixel depth between the depth image captured by the sensor and the expected image.
As an example, the partial images X1, X2, X3, and X4 have depth error degrees P1, P2, P3, and P4, respectively. Here P1 = P2 indicates that the scene areas corresponding to the partial images X1 and X2 are at the same shooting distance from the sensor; P1 > P3 = P4 indicates that the scene areas corresponding to the partial images X3 and X4 are at the same shooting distance from the sensor, and that these areas are closer to the sensor than the scene areas corresponding to the partial images X1 and X2. The shooting angle of the sensor can therefore be corrected based on these conclusions.
In this way, on the one hand, the pixel precision of each depth image is determined from the depth interval information of the depth images shot by the sensor at a plurality of shooting distances, and the pixel precisions of the depth images are then fused to obtain the precision detection result of the sensor, which streamlines the sensor precision detection process and reduces the consumption of manpower and material resources. On the other hand, unlike prior-art approaches that detect sensor precision with a dedicated detection tool, the pixel precision of each depth image is determined from the pixel point number distribution and the interval weight of each depth statistical interval in the depth image, and the precision detection result of the sensor is then obtained from these pixel precisions; this effectively avoids the influence of wear of a detection workpiece on the precision detection, improves the accuracy of the sensor precision detection, speeds up the detection, and facilitates subsequent functional applications of the sensor.
It should be understood that, although the steps in the flowcharts of figs. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are likewise not necessarily performed sequentially, and may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It should be understood that the same or similar parts of the method embodiments described above in this specification may be cross-referenced; each embodiment focuses on what distinguishes it from the other embodiments, and for the common parts reference may be made to the descriptions of the other method embodiments.
Fig. 7 is a block diagram of a precision detection device of a sensor according to an embodiment of the present application. Referring to fig. 7, the accuracy detecting device 30 of the sensor includes: an image acquisition unit 31, a depth interval unit 32, a pixel precision unit 33, and a precision detection unit 34.
Wherein the image acquisition unit 31 is configured to perform acquisition of depth images of the subject photographed by the sensor at a plurality of photographing distances;
wherein the depth interval unit 32 is configured to determine depth interval information for each pixel point in the depth image based on the respective reference pixel depth and depth error threshold of each of the depth images; the depth interval information includes a plurality of depth statistical intervals and the interval weights of the depth statistical intervals, where a depth statistical interval is the depth interval to which the actual pixel depth of a pixel point belongs;
wherein the pixel precision unit 33 is configured to determine the pixel precision of each depth image based on the pixel point number distribution of each depth statistical interval and the interval weight corresponding to the depth statistical interval; the pixel precision characterizes the degree to which the pixel depth of each pixel point in the depth image deviates from the reference pixel depth.
Wherein the precision detection unit 34 is configured to perform fusion of pixel precision of each of the depth images, and obtain a precision detection result of the sensor when photographing the subject.
In an exemplary embodiment, in fusing the pixel precision of each of the depth images to obtain the precision detection result of the sensor when photographing the subject, the apparatus 30 is further configured to perform:
acquiring a distance weight preset for each shooting distance;
and carrying out weighted summation processing on the pixel precision of the depth image shot by the sensor at each shooting distance and the corresponding distance weight to obtain a precision detection result of the sensor.
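A minimal sketch of this weighted fusion, assuming the per-distance pixel precisions and preset distance weights are supplied as parallel lists (the names are hypothetical):

```python
def fuse_precision(pixel_precisions: list[float],
                   distance_weights: list[float]) -> float:
    """Precision detection result of the sensor: weighted sum of the
    pixel precision of the depth image shot at each shooting distance
    and the distance weight preset for that distance."""
    return sum(p * w for p, w in zip(pixel_precisions, distance_weights))

# Example: depth images shot at 1 m and 2 m, weighted 0.6 and 0.4.
result = fuse_precision([0.92, 0.88], [0.6, 0.4])  # 0.904
```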
In an exemplary embodiment, the pixel point number distribution is characterized by the proportion of the number of pixel points included in the depth statistical interval to the number of all pixel points included in the depth image;
the determining, based on the distribution of the number of pixels in each depth statistic interval and the interval weight corresponding to the depth statistic interval, the aspect of pixel precision of each depth image, the apparatus 30 further includes:
carrying out, for each depth image, weighted summation processing on the interval weight of each depth statistical interval and the corresponding pixel number proportion, to determine the pixel precision of the depth image.
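Likewise, the per-image weighted summation can be sketched as follows (illustrative only, with hypothetical names):

```python
def pixel_precision(interval_weights: list[float],
                    pixel_proportions: list[float]) -> float:
    """Pixel precision of one depth image: sum over depth statistical
    intervals of interval weight times the proportion of pixel points
    falling into that interval."""
    return sum(w * p for w, p in zip(interval_weights, pixel_proportions))

# Example: three intervals with weights 0.6, 0.3, -0.1 and pixel
# proportions 0.7, 0.2, 0.1.
precision = pixel_precision([0.6, 0.3, -0.1], [0.7, 0.2, 0.1])  # 0.47
```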
In an exemplary embodiment, at each of the photographing distances, the sensor consecutively photographs at least two depth images of the subject;
before the determining of the pixel precision of each of the depth images, the apparatus 30 is further configured to perform:
determining, for each depth image, a proportion of the number of pixels included in each depth statistical interval to the number of all pixels included in the corresponding depth image;
and determining, for each depth statistical interval, an average value of the proportions corresponding to that interval across the at least two consecutively shot depth images at each shooting distance, and characterizing the pixel point number distribution of the depth statistical interval based on the average value.
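A sketch of this averaging over consecutive shots, assuming the per-shot proportions are stacked in a NumPy array of shape (num_shots, num_intervals):

```python
import numpy as np

def averaged_proportions(per_shot_proportions: np.ndarray) -> np.ndarray:
    """Each row holds, for one consecutively shot depth image, the
    proportion of pixel points in each depth statistical interval;
    the pixel point number distribution is the per-interval mean."""
    return per_shot_proportions.mean(axis=0)

# Example: two consecutive shots at one shooting distance.
props = np.array([[0.70, 0.20, 0.10],
                  [0.74, 0.18, 0.08]])
print(averaged_proportions(props))  # [0.72 0.19 0.09]
```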
In an exemplary embodiment, in the determining of depth interval information for each pixel point in the depth image based on the reference pixel depth and depth error threshold of each depth image, the apparatus 30 is further configured to perform:
determining an initial depth interval for each of the depth images based on the reference pixel depth and the depth error threshold;
equally dividing the initial depth interval into a preset number of first class subintervals; and
Determining a depth range greater than an upper limit of the initial depth interval as a second class subinterval; and
determining a depth range smaller than a lower limit of the initial depth interval as a third type subinterval;
and taking the first class subinterval, the second class subinterval and the third class subinterval as depth statistical intervals.
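The interval construction can be sketched as below, under the assumption (not fixed by the text here) that the initial depth interval is the reference pixel depth plus or minus the nominal error; open-ended ranges are represented with ±inf:

```python
import math

def build_depth_intervals(ref_depth: float, err_threshold: float,
                          num_subintervals: int) -> list[tuple[float, float]]:
    """Build depth statistical intervals for one depth image.

    The initial depth interval [ref - err, ref + err] is divided
    equally into num_subintervals first-class subintervals; depths
    above its upper limit form the second-class subinterval, and
    depths below its lower limit form the third-class subinterval."""
    err = ref_depth * err_threshold               # assumed nominal error
    lower, upper = ref_depth - err, ref_depth + err
    step = (upper - lower) / num_subintervals
    intervals = [(lower + i * step, lower + (i + 1) * step)
                 for i in range(num_subintervals)]
    intervals.append((upper, math.inf))           # second-class subinterval
    intervals.append((-math.inf, lower))          # third-class subinterval
    return intervals

# Example: reference depth 2.0 m, 2% threshold, 4 first-class subintervals.
print(build_depth_intervals(2.0, 0.02, 4))
```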
In an exemplary embodiment, in the determining of the depth interval information for each pixel point in the depth image based on the respective reference pixel depth and depth error threshold of each of the depth images, the apparatus 30 is further configured to perform:
dividing the normal distribution interval of the preset normal distribution function equally based on each depth statistical interval to obtain a plurality of subclass distribution intervals respectively corresponding to each depth statistical interval;
determining, for each of the sub-class distribution intervals, the interval weight of the depth statistical interval corresponding to the sub-class distribution interval based on the definite integral of the normal distribution function over the sub-class distribution interval.
In an exemplary embodiment, in the determining of the interval weight of the depth statistical interval corresponding to the sub-class distribution interval based on the definite integral of the normal distribution function over the sub-class distribution interval, the apparatus 30 is further configured to perform:
Multiplying the definite integral over the sub-class distribution interval corresponding to the first-class subinterval by a first preset value to obtain a first arithmetic value, and taking the first arithmetic value as the interval weight of the first-class subinterval;
multiplying the definite integrals over the sub-class distribution intervals corresponding respectively to the second-class subinterval and the third-class subinterval by a second preset value to obtain second arithmetic values, and taking the second arithmetic values as the interval weights of the second-class subinterval and the third-class subinterval, respectively;
wherein the first preset value is a positive number and the second preset value is a negative number.
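A sketch of the weight computation, using the standard normal CDF built from math.erf; mapping the first-class subintervals onto equal slices of [-3, 3] (and the two outer subintervals onto the tails) is an assumption about the "normal distribution interval", not a detail fixed by the text:

```python
import math

def std_normal_cdf(x: float) -> float:
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def interval_weights(num_first_class: int, pos_scale: float = 1.0,
                     neg_scale: float = -1.0, span: float = 3.0) -> list[float]:
    """Interval weights from definite integrals of the standard normal
    density: [-span, span] is divided equally among the first-class
    subintervals, whose integrals are scaled by a positive preset
    value; the two tails, corresponding to the second- and third-class
    subintervals, are scaled by a negative preset value."""
    edges = [-span + 2.0 * span * i / num_first_class
             for i in range(num_first_class + 1)]
    weights = [pos_scale * (std_normal_cdf(b) - std_normal_cdf(a))
               for a, b in zip(edges[:-1], edges[1:])]
    tail = std_normal_cdf(-span)        # integral over each tail
    weights.append(neg_scale * tail)    # second-class subinterval
    weights.append(neg_scale * tail)    # third-class subinterval
    return weights
```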
Fig. 8 is a block diagram of an electronic device 40 according to an embodiment of the present application. For example, the electronic device 40 may be a server, an electronic component, a server array, or the like. Referring to fig. 8, the electronic device 40 includes a processor 41, which may be a processor set including one or more processors, and memory resources represented by a memory 42, where the memory 42 stores a computer program, such as an application program. The computer program stored in the memory 42 may include one or more modules, each corresponding to a set of executable instructions. Further, the processor 41 is configured to implement the accuracy detection method of the sensor described above when executing the executable instructions.
In some embodiments, the electronic device 40 is a server whose computing system may run one or more operating systems, including any of the operating systems discussed above, as well as any commercially available server operating system. The electronic device 40 may also run any of a variety of additional server applications and/or middle-tier applications, including HTTP (hypertext transfer protocol) servers, FTP (file transfer protocol) servers, CGI (common gateway interface) servers, super servers, database servers, and the like. Exemplary database servers include, but are not limited to, those commercially available from IBM (International Business Machines) and the like.
In some embodiments, the processor 41 generally controls the overall operation of the electronic device 40, such as operations associated with display, data processing, data communication, and recording. The processor 41 may include one or more processor components to execute computer programs, so as to perform all or part of the steps of the methods described above. Further, a processor component may include one or more modules that facilitate interaction between the processor component and other components. For example, the processor component may include a multimedia module to facilitate interaction between the multimedia component of the electronic device 40 and the processor 41.
In some embodiments, the processor components in the processor 41 may also be referred to as CPUs (Central Processing Unit). A processor component may be an electronic chip with signal processing capabilities. The processor may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. In addition, the processor components may be implemented collectively by an integrated circuit chip.
In some embodiments, memory 42 is configured to store various types of data to support operations at electronic device 40. Examples of such data include instructions, acquisition data, messages, pictures, video, etc., for any application or method operating on electronic device 40. The memory 42 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
In some embodiments, the memory 42 may be a memory stick, a TF card, or the like, and may store all information in the electronic device 40, including input raw data, computer programs, intermediate operation results, and final operation results. In some embodiments, it stores and retrieves information based on the location specified by the processor. In some embodiments, with the memory 42, the electronic device 40 has a storage function that ensures proper operation. In some embodiments, the memory 42 of the electronic device 40 may be divided by purpose into main memory (internal memory) and auxiliary memory (external memory); another classification divides it into external memory and internal memory. The external memory is usually a magnetic medium, an optical disk, or the like, and can store information for a long time. The internal memory refers to the storage component on the motherboard that holds the data and programs currently being executed; it is only used for temporary storage, and its contents are lost when the power is turned off.
In some embodiments, the electronic device 40 may further include: a power supply assembly 43 configured to perform power management of the electronic device 40, a wired or wireless network interface 44 configured to connect the electronic device 40 to a network, and an input/output (I/O) interface 45. The electronic device 40 may operate based on an operating system stored in the memory 42, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
In some embodiments, power supply component 43 provides power to the various components of electronic device 40. Power supply components 43 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 40.
In some embodiments, wired or wireless network interface 44 is configured to facilitate communication between electronic device 40 and other devices, either wired or wireless. The electronic device 40 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof.
In some embodiments, the wired or wireless network interface 44 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the wired or wireless network interface 44 also includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In some embodiments, input output (I/O) interface 45 provides an interface between processor 41 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the accuracy detection method of the sensor described above.
If implemented in the form of software functional units and sold or used as independent products, the integrated functional units of the embodiments of the present application may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer-readable storage medium includes several instructions for causing a server (which may be a personal computer, a system server, a network device, etc.), an electronic device (such as an MP3 or MP4 player, a smart terminal such as a mobile phone, a tablet computer, or a wearable device, or a desktop computer), or a processor to perform all or part of the steps of the methods of the embodiments of the present application.
An embodiment of the present application further provides a computer program product. The computer program product includes program instructions that can be executed by a processor of a server to implement the accuracy detection method of the sensor described above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method of accuracy detection of a sensor, an accuracy detection apparatus 30 of a sensor, an electronic device 40, a computer-readable storage medium, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of the method of detecting accuracy of a sensor, the accuracy detection apparatus 30 of a sensor, the electronic device 40, the computer-readable storage medium, and the computer program product according to embodiments of the application. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the program instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for detecting accuracy of a sensor, the method comprising:
acquiring depth images of a subject photographed by a sensor at a plurality of shooting distances;
determining depth interval information for each pixel point in the depth image based on the respective reference pixel depth and depth error threshold value of each depth image; the depth interval information comprises a plurality of depth statistical intervals and interval weights of the depth statistical intervals, wherein the depth statistical intervals are depth intervals to which actual pixel depths of the pixel points belong;
determining the pixel precision of each depth image based on the pixel point quantity distribution of each depth statistical interval and the interval weight corresponding to the depth statistical interval; the pixel precision degree represents the degree of deviation of the pixel depth of each pixel point in the depth image from the reference pixel depth;
and fusing the pixel precision of each depth image to obtain a precision detection result of the sensor when photographing the subject.
2. The method according to claim 1, wherein the fusing the pixel precision of each of the depth images to obtain the precision detection result of the sensor when photographing the subject includes:
acquiring a distance weight preset for each shooting distance;
and carrying out weighted summation processing on the pixel precision of the depth image shot by the sensor at each shooting distance and the corresponding distance weight to obtain a precision detection result of the sensor.
3. The method of claim 1, wherein the pixel point number distribution is characterized by the proportion of the number of pixel points included in the depth statistical interval to the number of all pixel points included in the depth image;
the determining the pixel precision of each depth image based on the pixel point number distribution of each depth statistical interval and the interval weight corresponding to the depth statistical interval comprises the following steps:
carrying out, for each depth image, weighted summation processing on the interval weight of each depth statistical interval and the corresponding pixel number proportion, to determine the pixel precision of the depth image.
4. A method according to any one of claims 1 to 3, wherein at each of the photographing distances, the sensor consecutively photographs at least two depth images of the subject;
before said determining the pixel precision of each of said depth images, further comprising:
determining, for each depth image, a proportion of the number of pixels included in each depth statistical interval to the number of all pixels included in the corresponding depth image;
and determining, for each depth statistical interval, an average value of the proportions corresponding to that interval across the at least two consecutively shot depth images at each shooting distance, and characterizing the pixel point number distribution of the depth statistical interval based on the average value.
5. A method according to any of claims 1-3, wherein said determining depth interval information for each pixel point in the depth image based on a respective reference pixel depth and depth error threshold for each of the depth images comprises:
determining an initial depth interval for each of the depth images based on the reference pixel depth and the depth error threshold;
Equally dividing the initial depth interval into a preset number of first class subintervals; and
determining a depth range greater than an upper limit of the initial depth interval as a second class subinterval; and
determining a depth range smaller than a lower limit of the initial depth interval as a third type subinterval;
and taking the first class subinterval, the second class subinterval and the third class subinterval as depth statistical intervals.
6. The method of claim 5, wherein determining depth interval information for each pixel point in the depth image based on the respective reference pixel depth and depth error threshold for each depth image comprises:
dividing the normal distribution interval of the preset normal distribution function equally based on each depth statistical interval to obtain a plurality of subclass distribution intervals respectively corresponding to each depth statistical interval;
determining, for each of the sub-class distribution intervals, the interval weight of the depth statistical interval corresponding to the sub-class distribution interval based on the definite integral of the normal distribution function over the sub-class distribution interval.
7. The method of claim 6, wherein the determining of the interval weight of the depth statistical interval corresponding to the sub-class distribution interval based on the definite integral of the normal distribution function over the sub-class distribution interval comprises:
multiplying the definite integral over the sub-class distribution interval corresponding to the first-class subinterval by a first preset value to obtain a first arithmetic value, and taking the first arithmetic value as the interval weight of the first-class subinterval;
multiplying the definite integrals over the sub-class distribution intervals corresponding respectively to the second-class subinterval and the third-class subinterval by a second preset value to obtain second arithmetic values, and taking the second arithmetic values as the interval weights of the second-class subinterval and the third-class subinterval, respectively;
wherein the first preset value is a positive number and the second preset value is a negative number.
8. A precision detection device of a sensor, the device comprising:
an image acquisition unit configured to perform acquisition of depth images of a subject photographed by a sensor at a plurality of photographing distances;
a depth interval unit configured to perform determination of depth interval information for each pixel point in the depth image based on a respective reference pixel depth and a depth error threshold value for each of the depth images; the depth interval information comprises a plurality of depth statistical intervals and interval weights of the depth statistical intervals, wherein the depth statistical intervals are depth intervals to which actual pixel depths of the pixel points belong;
A pixel precision unit configured to perform determination of pixel precision of each of the depth images based on a pixel point number distribution of each of the depth statistical intervals and an interval weight corresponding to the depth statistical interval; the pixel precision degree represents the degree of deviation of the pixel depth of each pixel point in the depth image from the reference pixel depth;
and a precision detection unit configured to perform fusion of pixel precision of each of the depth images, and obtain a precision detection result of the sensor when photographing the subject.
9. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the method of accuracy detection of a sensor as claimed in any one of claims 1 to 7.
10. A computer readable storage medium comprising program data, wherein the program data, when executed by a processor of an electronic device, enables the electronic device to perform the method of accuracy detection of a sensor as claimed in any one of claims 1 to 7.
CN202310730272.6A 2023-06-19 2023-06-19 Sensor accuracy detection method, detection device, electronic device, and storage medium Pending CN116708756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310730272.6A CN116708756A (en) 2023-06-19 2023-06-19 Sensor accuracy detection method, detection device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310730272.6A CN116708756A (en) 2023-06-19 2023-06-19 Sensor accuracy detection method, detection device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN116708756A true CN116708756A (en) 2023-09-05

Family

ID=87843007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310730272.6A Pending CN116708756A (en) 2023-06-19 2023-06-19 Sensor accuracy detection method, detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN116708756A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117880630A (en) * 2024-03-13 2024-04-12 杭州星犀科技有限公司 Focusing depth acquisition method, focusing depth acquisition system and terminal


Similar Documents

Publication Publication Date Title
US11044453B2 (en) Data processing apparatus, imaging apparatus and data processing method
CN108174118B (en) Image processing method and device and electronic equipment
JP2017520050A (en) Local adaptive histogram flattening
US10002436B2 (en) Image processing device, image processing method, and solid-state imaging device
KR102566998B1 (en) Apparatus and method for determining image sharpness
US20150278996A1 (en) Image processing apparatus, method, and medium for generating color image data
CN107241556B (en) Light measuring method and device of image acquisition equipment
WO2020083307A1 (en) Method, apparatus, and storage medium for obtaining depth image
CN108337496B (en) White balance processing method, processing device, processing equipment and storage medium
US20190258852A1 (en) Image processing apparatus, image processing system, image processing method, and program
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
CN112634343A (en) Training method of image depth estimation model and processing method of image depth information
US20200272524A1 (en) Method and system for auto-setting of image acquisition and processing modules and of sharing resources in large scale video systems
US20190026921A1 (en) Calculating device and calculating device control method
CN116708756A (en) Sensor accuracy detection method, detection device, electronic device, and storage medium
US10834341B2 (en) Systems and methods for simultaneous capture of two or more sets of light images
CN111563517B (en) Image processing method, device, electronic equipment and storage medium
JP2019096222A (en) Image processor, method for processing image, and computer program
CN113344906B (en) Camera evaluation method and device in vehicle-road cooperation, road side equipment and cloud control platform
CN112492191B (en) Image acquisition method, device, equipment and medium
CN107392948B (en) Image registration method of amplitude-division real-time polarization imaging system
CN113628259A (en) Image registration processing method and device
CN110689565B (en) Depth map determination method and device and electronic equipment
US9813640B2 (en) Image processing apparatus, image processing method, image processing program, and non-transitory recording for calculating a degree-of-invalidity for a selected subject type
CN116485645A (en) Image stitching method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination