CN114881908B - Abnormal pixel identification method, apparatus, device, and computer storage medium - Google Patents

Abnormal pixel identification method, apparatus, device, and computer storage medium

Info

Publication number
CN114881908B
Authority
CN
China
Prior art keywords
distance
minimum
image
frequency
value
Prior art date
Legal status
Active
Application number
CN202210796052.9A
Other languages
Chinese (zh)
Other versions
CN114881908A (en)
Inventor
汪峰
莫苏苏
吴昊
王抒昂
Current Assignee
Wuhan Silicon Integrated Co Ltd
Original Assignee
Wuhan Silicon Integrated Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Silicon Integrated Co Ltd
Priority to CN202210796052.9A
Publication of CN114881908A
Application granted
Publication of CN114881908B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/77
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The embodiment of the application discloses an abnormal pixel identification method, apparatus, device, and computer storage medium, wherein the method comprises the following steps: acquiring an image to be identified; determining a mapping relation between a minimum fusion error threshold and a measurement characteristic value in the image to be identified, the mapping relation being obtained by analyzing the minimum error map at at least one distance and the corresponding measurement characteristic map; determining, according to the mapping relation, the minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be identified; and performing abnormal pixel identification on the plurality of pixel points in the image to be identified according to the minimum fusion error thresholds, and determining the abnormal pixel points in the image to be identified. In this way, the corresponding minimum fusion error threshold is determined from the mapping relation between the minimum fusion error threshold and the measurement characteristic value in the image to be identified, whether a pixel point in the image to be identified is an abnormal pixel point is judged according to its minimum fusion error threshold, and the abnormal pixel points in the image to be identified are thereby identified.

Description

Abnormal pixel identification method, apparatus, device, and computer storage medium
Technical Field
The present application relates to the field of pixel identification technologies, and in particular, to a method, an apparatus, a device, and a computer storage medium for identifying an abnormal pixel.
Background
At present, Time of Flight (TOF) cameras mostly adopt a dual-frequency fusion method, which improves the measurement accuracy and range of the TOF camera and resolves the distance ambiguity that arises under a single frequency. The method obtains the depth measurement distances at the respective frequencies, performs period expansion on each measured distance, and searches for the combination with the minimum measurement error; the period-expanded distance corresponding to the main frequency is taken as the final distance value.
In the related art, the ToF camera works in dual-frequency mode, and the dual-frequency fusion method extends the measurement range without reducing the distance measurement precision. However, the depth images at different frequencies are not acquired within the same integration time, so the imaging at the two single frequencies is difficult to put into one-to-one correspondence and is easily affected by factors such as random noise and target motion, which makes the final fusion result abnormal.
Disclosure of Invention
The application provides an abnormal pixel identification method, apparatus, device, and computer storage medium, which can be used to identify abnormal pixels in an image to be identified.
The technical scheme of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an abnormal pixel identification method, where the method includes:
acquiring an image to be identified;
determining a mapping relation between a minimum fusion error threshold and a measurement characteristic value in the image to be identified; wherein the mapping relation is obtained by analyzing the minimum error map at at least one distance and the corresponding measurement characteristic map;
determining a minimum fusion error threshold value corresponding to each of a plurality of pixel points in the image to be identified according to the mapping relation;
and according to the minimum fusion error threshold value, carrying out abnormal pixel identification on a plurality of pixel points in the image to be identified, and determining abnormal pixel points in the image to be identified.
In some embodiments, the method further comprises:
acquiring a dual-frequency depth image at at least one distance;
determining a minimum error map at the at least one distance according to the dual-frequency depth image at the at least one distance;
acquiring a measurement characteristic map at the at least one distance, wherein the measurement characteristic map at least comprises a measurement amplitude map, a measurement gray scale map, or a measurement depth map;
and performing fitting analysis according to the minimum error map at the at least one distance and the corresponding measurement characteristic map at the at least one distance, to determine the mapping relation between the minimum fusion error threshold and the measurement characteristic value.
In some embodiments, the acquiring the dual-frequency depth image at the at least one distance and the measurement feature map at the at least one distance includes:
and acquiring images at different distances in a static scene, and determining the dual-frequency depth image at the at least one distance and the measurement feature map at the at least one distance.
In some embodiments, the determining the minimum error map at the at least one distance from the dual-frequency depth image at the at least one distance includes:
and performing minimum fusion error calculation respectively according to the dual-frequency depth image at the at least one distance, and determining the minimum error map corresponding to each of the at least one distance.
In some embodiments, the determining the minimum error map corresponding to each of the at least one distance by performing minimum fusion error calculation according to the dual-frequency depth images at the at least one distance includes:
determining, according to the dual-frequency depth image at a first distance, a first measured distance value and at least one first measurement ambiguity number at a first frequency, and a second measured distance value and at least one second measurement ambiguity number at a second frequency;
calculating at least one first candidate distance corresponding to the first frequency according to the first measured distance value at the first frequency and the at least one first measurement ambiguity number;
calculating at least one second candidate distance corresponding to the second frequency according to the second measured distance value at the second frequency and the at least one second measurement ambiguity number;
calculating the minimum error map corresponding to the first distance based on at least one first candidate distance corresponding to the first frequency and at least one second candidate distance corresponding to the second frequency;
wherein the first distance is any one of the at least one distance.
In some embodiments, said calculating said minimum error map for a first distance based on at least one first candidate distance for said first frequency and at least one second candidate distance for said second frequency comprises:
performing difference calculation on at least one first candidate distance corresponding to the first frequency and at least one second candidate distance corresponding to the second frequency to obtain a plurality of difference values;
and obtaining a minimum error map corresponding to the first distance according to the minimum difference value in the plurality of difference values.
In some embodiments, the performing fitting analysis according to the minimum error map at the at least one distance and the corresponding measured feature map to determine a mapping relationship between the minimum fusion error threshold and the measured feature value includes:
constructing a first distribution graph according to the minimum error graph and the corresponding measurement characteristic graph under the at least one distance; wherein the first distribution map is used for reflecting the mapping relation between the minimum fusion error threshold value and the measured characteristic value;
determining a minimum fusion error threshold corresponding to at least one measured characteristic value based on the first distribution graph;
and fitting according to the at least one measurement characteristic value and the corresponding minimum fusion error threshold value, and determining the mapping relation between the minimum fusion error threshold value and the measurement characteristic value.
In some embodiments, the determining a minimum fusion error threshold corresponding to at least one measured characteristic value based on the first distribution graph includes:
determining a minimum error value corresponding to the first measured characteristic value based on the first distribution graph;
calculating a preset quantile of a minimum error value corresponding to the first measurement characteristic value;
determining the preset quantile as a minimum fusion error threshold corresponding to the first measurement characteristic value;
wherein the first measured characteristic value is any one of the at least one measured characteristic values.
In some embodiments, the determining, according to the mapping relationship, a minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be recognized includes:
determining the measurement characteristic values corresponding to a plurality of pixel points in the image to be identified;
determining a minimum fusion error threshold corresponding to the image to be identified based on the measurement characteristic values corresponding to the pixel points and the mapping relation; and the pixel points in the image to be identified have a corresponding relation with the minimum fusion error threshold value.
In some embodiments, the performing, according to the minimum fusion error threshold, abnormal pixel identification on a plurality of pixel points in the image to be identified, and determining abnormal pixel points in the image to be identified includes:
determining the minimum fusion error corresponding to each pixel point in the image to be identified;
if the minimum fusion error value corresponding to the pixel point is less than or equal to the minimum fusion error threshold corresponding to the pixel point, determining the pixel point as a normal pixel point;
and if the minimum fusion error value corresponding to the pixel point is greater than the minimum fusion error threshold corresponding to the pixel point, determining the pixel point as an abnormal pixel point.
In a second aspect, an embodiment of the present application provides an abnormal pixel identification apparatus, including:
an acquisition unit configured to acquire an image to be recognized;
the mapping unit is configured to determine a mapping relation between a minimum fusion error threshold value and a measurement characteristic value in the image to be identified; the mapping relation is obtained by analyzing according to the minimum error graph under at least one distance and the corresponding measurement characteristic graph;
the determining unit is configured to determine a minimum fusion error threshold value corresponding to each of a plurality of pixel points in the image to be identified according to the mapping relation;
and the identification unit is configured to perform abnormal pixel identification on a plurality of pixel points in the image to be identified according to the minimum fusion error threshold value, and determine abnormal pixel points in the image to be identified.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program capable of running on the processor;
a processor for performing the method of abnormal pixel identification of any one of the first aspect when the computer program is run.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing a computer program, which when executed by at least one processor implements the method for identifying an abnormal pixel according to any one of the first aspect.
The abnormal pixel identification method, apparatus, device, and computer storage medium provided by the embodiments of the application acquire an image to be identified; determine a mapping relation between a minimum fusion error threshold and a measurement characteristic value in the image to be identified, the mapping relation being obtained by analyzing the minimum error map at at least one distance and the corresponding measurement characteristic map; determine, according to the mapping relation, the minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be identified; and perform abnormal pixel identification on the plurality of pixel points in the image to be identified according to the minimum fusion error thresholds, so as to determine the abnormal pixel points in the image to be identified. In this way, the corresponding minimum fusion error threshold is determined through the mapping relation between the minimum fusion error threshold and the measurement characteristic value in the image to be identified, whether a pixel point in the image to be identified is an abnormal pixel point is judged according to its minimum fusion error threshold, the abnormal pixel points in the image to be identified are identified and marked, and the pixel points that need to be repaired can therefore be obtained accurately when the abnormal pixel points are repaired subsequently.
Drawings
Fig. 1 is a schematic diagram of a working principle of dual-frequency fusion provided in the related art;
fig. 2 is a schematic diagram of a motion blur principle of dual-frequency fusion provided in the related art;
FIG. 3a is a schematic diagram of a depth image at a frequency provided by the related art;
FIG. 3b is a schematic diagram of a depth image at another frequency provided by the related art;
fig. 4 is a schematic diagram of a motion-blurred image after dual-frequency fusion provided in the related art;
fig. 5 is a schematic flowchart of an abnormal pixel identification method according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another abnormal pixel identification method according to an embodiment of the present application;
fig. 7 is a schematic diagram of an image to be processed with abnormal fusion caused by motion noise in dual-frequency fusion according to an embodiment of the present application;
FIG. 8 is a graphical illustration of a minimum fusion error provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a piecewise linear fit curve corresponding to a magnitude value-minimum fusion error value according to an embodiment of the present application;
fig. 10 is a schematic diagram of a dual-frequency motion-blurred image after an abnormal pixel point is marked according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an abnormal pixel identification apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of a specific hardware structure of an electronic device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. It should also be noted that the terms "first/second/third" are used herein only for distinguishing similar objects and do not denote a particular order or sequence of objects, and it should be understood that "first/second/third" may be interchanged under appropriate circumstances such that embodiments of the present application described herein may be implemented in other sequences than those illustrated or described herein.
It can be understood that fig. 1 shows a schematic diagram of the dual-frequency fusion working principle. As shown in fig. 1, a TOF camera mostly adopts a dual-frequency fusion method, which improves the measurement accuracy and range of the TOF camera and resolves the distance ambiguity under a single frequency. The method obtains the depth measurement distances at the respective frequencies, performs period expansion on each measured distance, and searches for the combination with the minimum measurement error; the period-expanded distance corresponding to the main frequency is taken as the final distance value.
Referring to fig. 2, if the objects imaged by the TOF camera at the two single frequencies are in one-to-one correspondence, the method can extend the measurement range without reducing the ranging accuracy. In practice, the dual-frequency depth image of the TOF camera is obtained by sequential integration (for example, frequency 1 is integrated to obtain one depth measurement value, and frequency 2 is then integrated to obtain another depth measurement value); after all the frequencies have obtained their corresponding depth measurement values, the fused depth value can be obtained by the period expansion method. Obviously, when the measurement target is difficult to put into one-to-one correspondence at the two single frequencies, some pixels are more or less affected during imaging by factors such as random noise and target motion, and no matter how the period expansion is carried out, their fusion error is significantly larger, so the final fused depth image shows obvious abnormalities.
From the above analysis, it can be seen that the motion blur caused by dual-frequency fusion is similar to the motion smear phenomenon under visible light, and the visible-light motion smear can be well addressed by image registration, motion estimation, and other methods. However, those methods cannot well solve the dual-frequency fusion motion blur of the TOF camera, because the pixel values of the two single-frequency depth images differ greatly and, owing to the inherent depth aliasing at a single frequency, directly performing motion estimation or image registration on the two depth images is even more difficult. Illustratively, referring to fig. 3a and fig. 3b, which show depth images at two different frequencies, fusing fig. 3a and fig. 3b results in the motion-blurred image shown in fig. 4, in which an obvious blurred region can be seen.
In the related art, the ToF camera operates at dual frequencies to extend the measurement range without reducing the accuracy of the distance measurement. However, the depth images at different frequencies are not acquired within the same integration time, and the imaging at the two single frequencies is difficult to put into one-to-one correspondence, so the result is easily influenced by factors such as random noise and target motion, and the final fusion result is abnormal. Because of the obvious difference between the two single-frequency depth images, conventional methods used under visible light, such as image registration and motion estimation, cannot solve this problem well. Circuit-level improvements, such as increasing the frame rate or acquiring multiple phase images within the same integration time (2-tap, 4-tap), can reduce the motion blur phenomenon, but they are costly and difficult to implement, and cannot radically eliminate the problem of motion-blurred pixels.
Based on this, the embodiment of the present application provides an abnormal pixel identification method, and the basic idea of the method is: acquiring an image to be identified; determining a mapping relation between a minimum fusion error threshold and a measurement characteristic value in the image to be identified, the mapping relation being obtained by analyzing the minimum error map at at least one distance and the corresponding measurement characteristic map; determining, according to the mapping relation, the minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be identified; and performing abnormal pixel identification on the plurality of pixel points in the image to be identified according to the minimum fusion error thresholds, and determining the abnormal pixel points in the image to be identified. In this way, the corresponding minimum fusion error threshold is determined through the mapping relation between the minimum fusion error threshold and the measurement characteristic value, whether a pixel point in the image to be identified is an abnormal pixel point is judged according to its threshold, the abnormal pixel points in the image to be identified are identified and marked, and the pixel points that need to be repaired can be obtained accurately when the abnormal pixel points are repaired subsequently.
In an embodiment of the present application, referring to fig. 5, a flowchart of an abnormal pixel identification method provided in an embodiment of the present application is shown. As shown in fig. 5, the method may include:
s501: and acquiring an image to be identified.
It should be noted that the pixel marking method provided by the embodiment of the present application can be applied to an electronic device with a pixel repair requirement. Here, the electronic device has a dual-frequency TOF camera, and may be an electronic device such as a computer, a smart phone, a tablet computer, a notebook computer, a palm computer, a Personal Digital Assistant (PDA), a navigation device, a wearable device, and the like, which is not particularly limited in this embodiment of the present application.
It should be further noted that, in the embodiment of the present application, the image to be identified is an image whose final depth fusion result is abnormal because of random noise, target motion, and the like during the fusion of the dual-frequency depth images; such an image contains a clear region and a blurred region, where the pixel points in the clear region are normal pixel points and the pixel points in the blurred region are abnormal pixel points.
S502: determining a mapping relation between a minimum fusion error threshold and a measurement characteristic value in the image to be identified; the mapping relation is obtained by analyzing the minimum error map at at least one distance and the corresponding measurement characteristic map.
It should be noted that, in the embodiment of the present application, there is an association relationship between the minimum fusion error threshold value in the image to be identified and the measurement characteristic value, where the measurement characteristic value may include a depth measurement value, an amplitude measurement value, or another measurement characteristic value that is associated with the minimum fusion error threshold value.
In some embodiments, referring to fig. 6, which shows a schematic flowchart of another abnormal pixel identification method provided in the embodiment of the present application, as shown in fig. 6, this step may include:
S601: a dual-frequency depth image at at least one distance is acquired.
S602: a minimum error map at the at least one distance is determined according to the dual-frequency depth image at the at least one distance.
S603: a measurement characteristic map at the at least one distance is acquired, wherein the measurement characteristic map at least comprises a measurement amplitude map, a measurement gray scale map, or a measurement depth map.
S604: fitting analysis is performed according to the minimum error map at the at least one distance and the corresponding measurement characteristic map at the at least one distance, to determine the mapping relation between the minimum fusion error threshold and the measurement characteristic value.
It should be noted that, in this embodiment of the present application, dual-frequency depth images at multiple distances need to be acquired and a minimum error map at each distance is determined; measurement feature maps at the multiple distances also need to be acquired, and fitting analysis is then performed to obtain the mapping relationship between the minimum fusion error threshold and the measurement feature value. In the case where the measurement feature map is a measurement depth map, the depth map at either frequency of the dual-frequency depth images may be used directly as the measurement feature map.
It should be further noted that, in the embodiment of the present application, the mapping relationship between the minimum fusion error threshold and the measured characteristic value may be in the form of a function, a curve, a histogram, and the like, and is not limited herein. In performing the fitting analysis, a piecewise linear fit, a curve fit, or other fit that describes the characteristics of the constructed curve may be used, without limitation.
In some embodiments, the acquiring the dual-frequency depth image at the at least one distance and the measurement feature map at the at least one distance includes:
and performing image acquisition at different distances in a static scene, and determining the dual-frequency depth image at the at least one distance and the measurement feature map at the at least one distance.
It should be noted that, in the embodiment of the present application, image acquisition at different distances is performed within the measurement range of the dual-frequency TOF camera, so as to obtain dual-frequency depth images at different distances.
In some embodiments, the determining the minimum error map at the at least one distance from the dual-frequency depth image at the at least one distance includes:
and performing minimum fusion error calculation respectively according to the dual-frequency depth image at the at least one distance, and determining the minimum error map corresponding to each of the at least one distance.
It should be noted that, in the embodiment of the present application, performing the minimum fusion error calculation on the dual-frequency depth image specifically means calculating the minimum fusion error of each pixel point in the dual-frequency depth image so as to obtain the minimum error map; performing the measurement feature value calculation on the dual-frequency depth image specifically means calculating the measurement feature value of each pixel point in the dual-frequency depth image so as to obtain the measurement feature map.
In some embodiments, the determining the minimum error map corresponding to each of the at least one distance by performing minimum fusion error calculation according to the dual-frequency depth images at the at least one distance includes:
determining, from the dual-frequency depth image at the first distance, a first measured distance value and at least one first measurement ambiguity number at the first frequency and a second measured distance value and at least one second measurement ambiguity number at the second frequency;
calculating at least one first candidate distance corresponding to the first frequency according to the first measured distance value at the first frequency and the at least one first measurement ambiguity number;
calculating at least one second candidate distance corresponding to the second frequency according to the second measured distance value at the second frequency and the at least one second measurement ambiguity number;
calculating the minimum error map corresponding to the first distance based on at least one first candidate distance corresponding to the first frequency and at least one second candidate distance corresponding to the second frequency;
wherein the first distance is any one of the at least one distance.
It should be noted that, in the embodiment of the present application, the first frequency and the second frequency are the two frequencies of the dual-frequency TOF camera, and the dual-frequency depth image at the at least one distance is acquired within the measurement range of the dual-frequency TOF camera.
It should be further noted that, in the embodiment of the present application, at the first distance, a first measured distance value and at least one measurement ambiguity number corresponding to the first frequency, and a second measured distance value and at least one measurement ambiguity number corresponding to the second frequency, are respectively determined; at least one first candidate distance at the first frequency and at least one second candidate distance at the second frequency are then calculated. When the minimum error map is determined from the at least one first candidate distance and the at least one second candidate distance, the candidate distances of the first frequency and the candidate distances of the second frequency are subtracted from each other, and the smallest difference is selected as the minimum fusion error value and forms the minimum error map.
In some specific embodiments, the calculating at least one first candidate distance corresponding to the first frequency according to the first measured distance value at the first frequency and the at least one first measurement ambiguity number includes:
determining a first maximum measurement distance corresponding to the first frequency based on the first frequency and the speed of light;
calculating the product of the first maximum measurement distance and the at least one first measurement ambiguity number to obtain a first value;
summing the first value and the first measured distance value, and determining the obtained first sum as the at least one first candidate distance corresponding to the first frequency;
correspondingly, the calculating at least one second candidate distance corresponding to the second frequency according to the second measured distance value at the second frequency and the at least one second measurement ambiguity number includes:
determining a second maximum measurement distance corresponding to the second frequency based on the second frequency and the speed of light;
calculating the product of the second maximum measurement distance and the at least one second measurement ambiguity number to obtain a second value;
and summing the second value and the second measured distance value, and determining the obtained second sum as the at least one second candidate distance corresponding to the second frequency.
It should be noted that, in the embodiment of the present application, the first maximum measurement distance is calculated from the speed of light and the first frequency, and the second maximum measurement distance is calculated from the speed of light and the second frequency; each frequency corresponds to one maximum measurement distance.
It should be further noted that, in the embodiment of the present application, the same distance corresponds to one measured distance and to a plurality of measurement ambiguity numbers, so the at least one first candidate distance is calculated according to the at least one first measurement ambiguity number, and similarly the at least one second candidate distance is calculated according to the at least one second measurement ambiguity number.
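Purely as an illustrative sketch (not a limitation of the embodiments), the candidate-distance calculation described above can be written in NumPy as follows; the 100 MHz frequency, the 0.4 m measured distance, and the maximum ambiguity number of 5 are hypothetical values chosen only for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light, in m/s

def candidate_distances(d_measured, frequency, max_ambiguity=5):
    # Maximum (unambiguous) measurement distance at this frequency: d_max = c / (2 * f).
    d_max = C / (2.0 * frequency)
    # Period expansion: candidate distance = measured distance + ambiguity number * d_max.
    n = np.arange(max_ambiguity + 1)
    return d_measured + n * d_max

# Hypothetical example: a pixel measured at 0.4 m under a 100 MHz modulation frequency.
cands_f1 = candidate_distances(0.4, 100e6)
# -> approximately [0.4, 1.899, 3.398, 4.897, ...], since d_max is about 1.499 m at 100 MHz
```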
In some specific embodiments, said calculating said minimum error map for a first distance based on at least one first candidate distance for said first frequency and at least one second candidate distance for said second frequency comprises:
performing difference calculation on at least one first candidate distance corresponding to the first frequency and at least one second candidate distance corresponding to the second frequency to obtain a plurality of difference values;
and obtaining a minimum error map corresponding to the first distance according to the minimum difference value in the plurality of difference values.
It should be noted that, in this embodiment of the application, the difference calculation between the at least one first candidate distance corresponding to the first frequency and the at least one second candidate distance corresponding to the second frequency specifically means calculating the difference between every first candidate distance and every second candidate distance to obtain the plurality of difference values; the minimum difference value is then selected from the plurality of difference values as the minimum fusion error value, from which the minimum error map is obtained.
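Continuing the illustrative sketch above (the image shapes and the maximum ambiguity number are again assumptions made for the example), the pairwise differencing and minimum selection can be vectorised over a whole image as follows.

```python
import numpy as np

C = 299_792_458.0  # speed of light, in m/s

def min_error_map(depth_f1, depth_f2, f1, f2, max_ambiguity=5):
    """depth_f1 / depth_f2: single-frequency depth images of shape (H, W), in metres."""
    n = np.arange(max_ambiguity + 1)
    # Candidate distances per pixel for each frequency, shape (H, W, N).
    cand1 = depth_f1[..., None] + n * (C / (2.0 * f1))
    cand2 = depth_f2[..., None] + n * (C / (2.0 * f2))
    # All pairwise absolute differences, shape (H, W, N, N); keep the smallest one per pixel.
    diff = np.abs(cand1[..., :, None] - cand2[..., None, :])
    return diff.min(axis=(-1, -2))
```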
In some embodiments, the performing fitting analysis according to the minimum error map at the at least one distance and the corresponding measured feature map to determine a mapping relationship between the minimum fusion error threshold and the measured feature value includes:
constructing a first distribution graph according to the minimum error graph and the corresponding measurement characteristic graph under the at least one distance; wherein the first distribution map is used for reflecting the mapping relation between the minimum fusion error threshold value and the measured characteristic value;
determining a minimum fusion error threshold corresponding to at least one measured characteristic value based on the first distribution graph;
and fitting according to the at least one measurement characteristic value and the corresponding minimum fusion error threshold value, and determining the mapping relation between the minimum fusion error threshold value and the measurement characteristic value.
It should be noted that, in the embodiment of the present application, the measured characteristic value may include a measured gray value, a measured depth value, a measured amplitude value, or another characteristic value that can describe the variation of the current minimum fusion error, which is not limited here; after the first distribution graph is established according to the measured characteristic map and the minimum error map, the corresponding relationship between the measured characteristic value and the minimum fusion error threshold is determined.
It should be further noted that, in the embodiment of the present application, in the process of fitting at least one measurement characteristic value and a corresponding minimum fusion error threshold value, the fitting process may be piecewise linear fitting, curve fitting or fitting of other curve characteristics that can describe the constructed curve, so as to obtain a distribution curve of the amplitude value and the minimum fusion error threshold value, and determine a mapping relationship between the minimum fusion error threshold value and the measurement characteristic value.
In some embodiments, the determining a minimum fusion error threshold corresponding to at least one measured characteristic value based on the first distribution graph includes:
determining the minimum error value corresponding to the first measured characteristic value based on the first distribution graph;
calculating a preset quantile of a minimum error value corresponding to the first measurement characteristic value;
determining the preset quantile as a minimum fusion error threshold corresponding to the first measurement characteristic value;
wherein the first measured characteristic value is any one of the at least one measured characteristic values.
It should be noted that, in the embodiment of the present application, the minimum error value is also affected by other environmental factors; a preset quantile is therefore set according to the degree of influence of those environmental factors, and the preset quantile of the minimum error values is taken, so that the influence of the other environmental factors on the result can be approximately removed. For example, the preset quantile may be set to 95%.
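A minimal sketch of this quantile step is given below; the number of amplitude bins and the 95% quantile are assumptions chosen for illustration, and `amplitudes` / `min_errors` stand for the flattened samples collected from the calibration images.

```python
import numpy as np

def thresholds_per_amplitude(amplitudes, min_errors, n_bins=32, quantile=95.0):
    """For each amplitude bin, take the preset quantile (e.g. 95%) of the observed
    minimum fusion errors as the minimum fusion error threshold of that bin."""
    edges = np.linspace(amplitudes.min(), amplitudes.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bin_idx = np.digitize(amplitudes, edges[1:-1])   # bin index in 0 .. n_bins-1
    thresholds = np.full(n_bins, np.nan)
    for b in range(n_bins):
        errs = min_errors[bin_idx == b]
        if errs.size > 0:
            thresholds[b] = np.percentile(errs, quantile)
    return centers, thresholds
```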
In this way, by fitting the measurement characteristic values and the minimum fusion errors obtained from the minimum error map at the at least one distance and the corresponding measurement characteristic map, the mapping relation between the minimum fusion error threshold and the measurement characteristic value in the image to be identified is determined.
S503: and determining the minimum fusion error threshold value corresponding to each of a plurality of pixel points in the image to be identified according to the mapping relation.
It should be noted that, in some embodiments, each pixel point in the image to be identified has a corresponding minimum fusion error threshold, and the pixel point is identified by comparing the minimum fusion error value of each pixel point with the minimum fusion error threshold.
In some embodiments, for step S503, the determining, according to the mapping relationship, a minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be recognized includes:
determining the measurement characteristic values corresponding to the pixel points in the image to be identified;
determining a minimum fusion error threshold corresponding to the image to be identified based on the measurement characteristic values corresponding to the pixel points and the mapping relation; and the pixel points in the image to be identified have a corresponding relation with the minimum fusion error threshold value.
It should be noted that, in the embodiment of the present application, in the process of determining the minimum fusion error threshold corresponding to the image to be recognized based on the current measurement characteristic values corresponding to the pixel points and the mapping relationship, the different current measurement characteristic values are substituted into the fitting function obtained in the fitting process so as to obtain the corresponding minimum fusion error thresholds.
It should be further noted that, in the embodiment of the present application, taking piecewise linear fitting as an example, the fitting corresponds to a plurality of fitting functions (one per segment); when the current measurement characteristic value is substituted, the fitting function whose segment covers that measurement characteristic value must be selected, because substituting it into a fitting function whose range does not match the measurement characteristic value may make the obtained minimum fusion error threshold inaccurate.
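The segment-selection point made above can be illustrated with the following sketch; the breakpoints and the per-segment (slope, intercept) coefficients are hypothetical and would come from the fitting step.

```python
import numpy as np

def lookup_threshold(amplitude, breakpoints, coeffs):
    """Evaluate a piecewise linear threshold curve T(A).
    breakpoints: segment boundaries in ascending order, length n_segments + 1.
    coeffs: array of shape (n_segments, 2) holding (slope, intercept) per segment."""
    coeffs = np.asarray(coeffs)
    # Choose the segment whose amplitude range contains the input value; clipping
    # avoids evaluating a segment outside the fitted amplitude range.
    seg = np.searchsorted(breakpoints, amplitude, side='right') - 1
    seg = np.clip(seg, 0, len(coeffs) - 1)
    return coeffs[seg, 0] * amplitude + coeffs[seg, 1]

# Hypothetical two-segment curve: T = 0.02*A + 1.0 below A = 500, T = 0.001*A + 10.5 above.
T = lookup_threshold(np.array([120.0, 900.0]), [0.0, 500.0, 4000.0], [(0.02, 1.0), (0.001, 10.5)])
```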
S504: and according to the minimum fusion error threshold value, carrying out abnormal pixel identification on a plurality of pixel points in the image to be identified, and determining abnormal pixel points in the image to be identified.
It should be noted that, in the embodiment of the present application, in the process of identifying an abnormal pixel, the minimum fusion error threshold of the pixel is compared with the minimum fusion error value, and when the minimum fusion error value is greater than the minimum fusion error threshold corresponding to the pixel, the pixel is determined to be an abnormal pixel.
In some embodiments, for S504, the performing, according to the minimum fusion error threshold, abnormal pixel identification on a plurality of pixel points in the image to be identified, and determining abnormal pixel points in the image to be identified includes:
determining the minimum fusion error value corresponding to each pixel point in the image to be identified;
if the minimum fusion error value corresponding to the pixel point is less than or equal to the minimum fusion error threshold corresponding to the pixel point, determining the pixel point as a normal pixel point;
and if the minimum fusion error value corresponding to the pixel point is greater than the minimum fusion error threshold corresponding to the pixel point, determining the pixel point as an abnormal pixel point.
It should be noted that, in the embodiment of the present application, when identifying a pixel point, it is necessary to first compare the minimum fusion error value of the pixel point with the minimum fusion error threshold corresponding to the pixel point, and when the minimum fusion error value is greater than the minimum fusion error threshold, the pixel point is an abnormal pixel point.
In some embodiments, when a pixel point is determined to be an abnormal pixel point, the pixel point is marked to obtain a marked dual-frequency motion-blurred image, so that the abnormal pixels can be repaired in a subsequent process.
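For illustration, assuming per-pixel arrays holding the minimum fusion error and the threshold obtained from the mapping relation, the comparison and marking can be written as:

```python
import numpy as np

def mark_abnormal_pixels(min_error_map, threshold_map):
    """error <= threshold -> normal pixel; error > threshold -> abnormal pixel."""
    return min_error_map > threshold_map

# Hypothetical marking for visualisation / later repair (sentinel value chosen arbitrarily):
# fused_depth = fused_depth.copy()
# fused_depth[mark_abnormal_pixels(err_map, thr_map)] = 0.0
```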
The embodiment of the application provides an abnormal pixel identification method: acquiring an image to be identified; determining a mapping relation between a minimum fusion error threshold and a measurement characteristic value in the image to be identified, the mapping relation being obtained by analyzing the minimum error map at at least one distance and the corresponding measurement characteristic map; determining, according to the mapping relation, the minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be identified; and performing abnormal pixel identification on the plurality of pixel points in the image to be identified according to the minimum fusion error thresholds, and determining the abnormal pixel points in the image to be identified. In this way, the corresponding minimum fusion error threshold is determined through the mapping relation between the minimum fusion error threshold and the measurement characteristic value, whether a pixel point in the image to be identified is an abnormal pixel point is judged according to its threshold, the abnormal pixel points in the image to be identified are identified and marked, and the pixel points that need to be repaired can be obtained accurately when the abnormal pixel points are repaired subsequently.
In another embodiment of the present application, based on the abnormal pixel identification method described in the foregoing embodiment, the identification of abnormal pixels caused by the dual-frequency fusion of a ToF camera is described by taking motion noise during dual-frequency fusion as an example: an amplitude value (or gray value, or another value that can describe the distance change) - minimum fusion error curve is obtained by fitting, the minimum fusion error threshold corresponding to the amplitude value is obtained from this curve, and the dual-frequency fusion abnormal pixels are identified, so that a subsequent algorithm can repair the abnormal region.
Specifically, in some embodiments, referring to fig. 7, which shows a schematic diagram of an image to be processed with abnormal fusion caused by motion noise in dual-frequency fusion, when the image to be processed is subjected to dual-frequency motion blur region marking based on the minimum fusion error, the method may include:

Step 1: let the two operating frequencies of the TOF camera in dual-frequency mode be $f_1$ and $f_2$, the corresponding measured distances be $d_1$ and $d_2$, and the actual scene distance be $d$. They satisfy:

$d = d_1 + n_1 \cdot d_{\max,1}, \quad d_{\max,1} = \dfrac{c}{2 f_1}$  (1)

$d = d_2 + n_2 \cdot d_{\max,2}, \quad d_{\max,2} = \dfrac{c}{2 f_2}$  (2)

where $d_{\max,1}$ denotes the maximum measurement distance at the frequency $f_1$, $d_{\max,2}$ denotes the maximum measurement distance at the frequency $f_2$, $d$ denotes the true distance of the scene, $d_1$ denotes the measured distance corresponding to the frequency $f_1$, $d_2$ denotes the measured distance corresponding to the frequency $f_2$, $n_1$ denotes the measurement ambiguity number corresponding to the frequency $f_1$, $n_2$ denotes the measurement ambiguity number corresponding to the frequency $f_2$, and $c$ denotes the speed of light.

Period expansion is performed on $n_1$ and $n_2$ in the above formulas respectively, and the minimum fusion error is solved according to the following formula:

$e_{\min} = \min_{n_1, n_2} \left| \left( d_1 + n_1 \cdot d_{\max,1} \right) - \left( d_2 + n_2 \cdot d_{\max,2} \right) \right|$  (3)
A minimum fusion error map is thus obtained, as shown in fig. 8.
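As a purely hypothetical numerical illustration of formulas (1) to (3) (the frequencies and measured values below are not taken from the embodiment): let $f_1 = 100\,\text{MHz}$ and $f_2 = 75\,\text{MHz}$, so that $d_{\max,1} = c/(2 f_1) \approx 1.5\,\text{m}$ and $d_{\max,2} = c/(2 f_2) \approx 2.0\,\text{m}$. Suppose one pixel measures $d_1 = 0.41\,\text{m}$ and $d_2 = 1.39\,\text{m}$. The candidate distances are $\{0.41, 1.91, 3.41, 4.91, \dots\}\,\text{m}$ at $f_1$ and $\{1.39, 3.39, 5.39, \dots\}\,\text{m}$ at $f_2$; the smallest absolute difference is $|3.41 - 3.39| = 0.02\,\text{m}$, reached at $n_1 = 2$ and $n_2 = 1$, so the minimum fusion error of this pixel is about $0.02\,\text{m}$, and if $f_1$ is taken as the main frequency the fused distance is about $3.41\,\text{m}$. A pixel corrupted by motion between the two single-frequency exposures would typically yield a much larger minimum fusion error, which is what the threshold of the following steps detects.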
step 2, because the minimum fusion error is related to the measured amplitude value (measured gray value, measured depth value or other characteristic values capable of describing the current minimum fusion error change) in the actual scene, in order to adaptively determine the current minimum fusion error threshold according to the current measured amplitude value (measured gray value, measured amplitude value or other characteristic values capable of describing the current minimum fusion error change), a plurality of groups of data at different distances need to be acquired. In the embodiment of the application, the minimum error and the measured amplitude value are taken as examples for explanation, the dual-frequency depth images and the dual-frequency amplitude value maps of a plurality of distances in a static scene are collected, and the minimum error maps of the plurality of distances are obtained according to the steps in the step 1 respectively;
and 3, constructing an amplitude value-minimum fusion error map (X: the amplitude value and Y: the minimum fusion error) according to the plurality of minimum fusion error maps at different distances and the corresponding amplitude value map obtained in the step 2, as shown in fig. 9.
Step 4: considering that other noise factors also exist in the scene, the 95% quantile of the minimum fusion errors is taken as the minimum fusion error threshold at each amplitude value, and piecewise linear fitting, curve fitting, or any other fitting capable of describing the characteristics of the constructed curve is performed on the amplitude values and the corresponding minimum fusion error thresholds to obtain an amplitude value - minimum fusion error threshold distribution curve. Taking piecewise linear fitting as an example, the curve is assumed to be:

$T(A) = k_i \cdot A + b_i, \quad A \in [A_i, A_{i+1})$  (4)

where $A$ is the current amplitude value, $T(A)$ is the minimum fusion error threshold, and $k_i$ and $b_i$ are the slope and intercept of the $i$-th segment.
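A sketch of the fitting in step 4, under the assumption that per-amplitude-bin thresholds (for example the 95% quantiles computed as described earlier) are available and that the segment breakpoints are chosen by hand from the distribution map:

```python
import numpy as np

def fit_piecewise_linear(centers, thresholds, breakpoints):
    """Least-squares line fit of threshold vs. amplitude inside each segment.
    Returns an array of (slope, intercept) pairs, one per segment; each segment is
    assumed to contain at least two valid (non-NaN) bins."""
    coeffs = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (centers >= lo) & (centers < hi) & ~np.isnan(thresholds)
        slope, intercept = np.polyfit(centers[mask], thresholds[mask], 1)
        coeffs.append((slope, intercept))
    return np.asarray(coeffs)
```

The resulting coefficients can then be evaluated segment by segment, as in the lookup sketch given earlier, to obtain the threshold for any current amplitude measurement value.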
step 5, inputting the current amplitude measurement value, and solving the minimum fusion error threshold value according to the formula in the step 4
Figure 150204DEST_PATH_IMAGE019
Determining that is greater than a threshold
Figure 330650DEST_PATH_IMAGE020
The pixel points of (1) are abnormal pixel points and marked as motion blur points, and the marked image is shown in fig. 10.
The embodiment of the application analyzes the cause of dual-frequency fusion abnormal pixels and establishes a method for identifying them. Taking abnormal pixel identification caused by motion noise as an example, dual-frequency depth images at a plurality of distances in a static scene are acquired, and a method is provided for constructing a distribution map of the amplitude value (or measured gray value, measured depth value, or another characteristic value capable of describing the variation of the current minimum fusion error) versus the minimum fusion error at the plurality of distances. Taking the amplitude value as an example, in the amplitude value - minimum fusion error distribution map, the 95% quantile of the minimum fusion errors corresponding to each amplitude value is taken so as to suppress noise factors in the actual scene, and the minimum fusion error thresholds at different amplitude values are then obtained by piecewise linear fitting, curve fitting, or another fitting method capable of describing the characteristics of the constructed curve. When the method is applied to an actual scene, the dual-frequency fusion abnormal pixel judgment threshold can be selected adaptively according to the current amplitude measurement value and the constructed curve, so the method has better robustness.
The embodiment of the application provides an abnormal pixel identification method, and the specific implementation of the foregoing embodiments is elaborated through this embodiment. It can be seen from the above that the technical solution realizes the identification of abnormal pixels caused by the dual-frequency fusion of a ToF camera.
In another embodiment of the present application, refer to fig. 11, which illustrates a schematic structural diagram of a device for identifying an abnormal pixel provided in an embodiment of the present application. As shown in fig. 11, the abnormal pixel identifying apparatus 110 may include:
an acquisition unit 1101 configured to acquire an image to be recognized;
a mapping unit 1102 configured to determine a mapping relationship between a minimum fusion error threshold and a measurement feature value in the image to be identified; the mapping relation is obtained by analyzing according to the minimum error map and the corresponding measurement characteristic map under at least one distance;
a determining unit 1103 configured to determine, according to the mapping relationship, a minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be recognized;
the identifying unit 1104 is configured to perform abnormal pixel identification on a plurality of pixel points in the image to be identified according to the minimum fusion error threshold, and determine abnormal pixel points in the image to be identified.
In some embodiments, the mapping unit 1102 is further configured to acquire a dual-frequency depth image at least one distance; determining a minimum error map at the at least one distance according to the double-frequency depth image at the at least one distance; acquiring a measurement characteristic map under at least one distance, wherein the measurement characteristic map at least comprises a measurement amplitude map, a measurement gray scale map or a measurement depth map; and performing fitting analysis according to the minimum error graph under the at least one distance and the corresponding measurement characteristic graph under the at least one distance to determine the mapping relation between the minimum fusion error threshold and the measurement characteristic value.
In some embodiments, the mapping unit 1102 is specifically configured to perform image acquisition at different distances in a static scene, and determine a dual-frequency depth image at the at least one distance and a measurement feature map at the at least one distance; wherein the measurement characteristic value at least comprises a measurement amplitude value, a measurement gray value or a measurement depth value.
In some embodiments, the mapping unit 1102 is specifically configured to perform minimum fusion error calculation according to the dual-frequency depth images at the at least one distance, and determine a minimum error map corresponding to each of the at least one distance.
In some embodiments, the mapping unit 1102 is specifically configured to determine, from the dual-frequency depth image at the first distance, a first measured distance value and at least one first measurement ambiguity number at the first frequency and a second measured distance value and at least one second measurement ambiguity number at the second frequency; calculate at least one first candidate distance corresponding to the first frequency according to the first measured distance value at the first frequency and the at least one first measurement ambiguity number; calculate at least one second candidate distance corresponding to the second frequency according to the second measured distance value at the second frequency and the at least one second measurement ambiguity number; and calculate the minimum error map corresponding to the first distance based on the at least one first candidate distance corresponding to the first frequency and the at least one second candidate distance corresponding to the second frequency; wherein the first distance is any one of the at least one distance.
In some embodiments, the mapping unit 1102 is specifically configured to determine a first maximum measurement distance corresponding to the first frequency based on the first frequency and the speed of light; calculate the product of the first maximum measurement distance and the at least one first measurement ambiguity number to obtain a first value; and sum the first value and the first measured distance value, and determine the obtained first sum as the at least one first candidate distance corresponding to the first frequency; correspondingly, the mapping unit 1102 is further configured to determine a second maximum measurement distance corresponding to the second frequency based on the second frequency and the speed of light; calculate the product of the second maximum measurement distance and the at least one second measurement ambiguity number to obtain a second value; and sum the second value and the second measured distance value, and determine the obtained second sum as the at least one second candidate distance corresponding to the second frequency.
The mapping unit 1102 is further configured to: perform difference calculation between the at least one first candidate distance corresponding to the first frequency and the at least one second candidate distance corresponding to the second frequency to obtain a plurality of difference values; and obtain the minimum error map corresponding to the first distance according to the minimum difference value among the plurality of difference values.
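By way of illustration only, the candidate-distance and minimum-fusion-error computation described above can be sketched in Python/NumPy as follows. The function name, the array shapes, and the fixed number of candidate fuzzy numbers (n_wraps) are assumptions made for this sketch and are not prescribed by the embodiment; the maximum measurement distance of each frequency is taken as c / (2·f).

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def min_fusion_error_map(d1, d2, f1, f2, n_wraps=4):
    """Per-pixel minimum fusion error for a dual-frequency depth image.

    d1, d2  : measured distance maps (metres) at modulation frequencies f1, f2 (Hz).
    n_wraps : how many candidate fuzzy numbers to try per frequency
              (an illustrative choice, not specified by the embodiment).
    """
    # Maximum measurement distance of each modulation frequency: d_max = c / (2 f)
    d1_max = C / (2.0 * f1)
    d2_max = C / (2.0 * f2)

    # Candidate distances: measured value plus (fuzzy number x maximum distance)
    k = np.arange(n_wraps)
    cand1 = d1[..., None] + k * d1_max            # shape (H, W, n_wraps)
    cand2 = d2[..., None] + k * d2_max

    # All pairwise differences between the two candidate sets; the smallest
    # absolute difference per pixel is the minimum fusion error
    diff = np.abs(cand1[..., :, None] - cand2[..., None, :])   # (H, W, n, n)
    return diff.min(axis=(-2, -1))

A call such as min_fusion_error_map(depth_f1, depth_f2, 100e6, 80e6) would then return the minimum error map for one acquisition distance; the two modulation frequencies here are placeholders, not values taken from the embodiment.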
The mapping unit 1102 is specifically configured to: construct a first distribution graph according to the minimum error map at the at least one distance and the corresponding measurement characteristic map, wherein the first distribution graph is used to reflect the mapping relation between the minimum fusion error threshold and the measurement characteristic value; determine a minimum fusion error threshold corresponding to at least one measurement characteristic value based on the first distribution graph; and perform fitting according to the at least one measurement characteristic value and the corresponding minimum fusion error threshold to determine the mapping relation between the minimum fusion error threshold and the measurement characteristic value.
The mapping unit 1102 is specifically configured to: determine the minimum error values corresponding to a first measurement characteristic value based on the first distribution graph; calculate a preset quantile of the minimum error values corresponding to the first measurement characteristic value; and determine the preset quantile as the minimum fusion error threshold corresponding to the first measurement characteristic value; wherein the first measurement characteristic value is any one of the at least one measurement characteristic value.
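A minimal sketch of how the first distribution graph, the preset quantile, and the fitted mapping might be realized is given below. The bin count, the minimum sample count per bin, the 0.99 quantile, and the polynomial form of the fit are all illustrative assumptions; the embodiment only specifies that a preset quantile is taken per measurement characteristic value and that a fit is then performed.

import numpy as np

def fit_threshold_mapping(min_error_maps, feature_maps, quantile=0.99, deg=3):
    """Fit a mapping from measurement characteristic value (e.g. amplitude)
    to a minimum fusion error threshold.

    min_error_maps, feature_maps : lists of per-distance maps of equal shape.
    quantile : the preset quantile of the error distribution (assumed value).
    deg      : degree of the fitted polynomial (assumed form of the mapping).
    """
    errors = np.concatenate([m.ravel() for m in min_error_maps])
    feats = np.concatenate([f.ravel() for f in feature_maps])

    # Group minimum error values by binned feature value -> first distribution graph
    bins = np.linspace(feats.min(), feats.max(), 64)
    idx = np.digitize(feats, bins)

    centers, thresholds = [], []
    for b in np.unique(idx):
        sel = idx == b
        if sel.sum() < 50:          # skip sparsely populated bins
            continue
        centers.append(feats[sel].mean())
        # Preset quantile of the minimum errors observed at this feature value
        thresholds.append(np.quantile(errors[sel], quantile))

    # Fit threshold = g(feature value); returns a callable mapping
    coeffs = np.polyfit(centers, thresholds, deg)
    return np.poly1d(coeffs)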
The determining unit 1103 is configured to: determine the measurement characteristic values corresponding to respective pixel points in the image to be identified; and determine the minimum fusion error threshold corresponding to the image to be identified based on the measurement characteristic values corresponding to the pixel points and the mapping relation; wherein the pixel points in the image to be identified have a correspondence with the minimum fusion error thresholds.
The identifying unit 1104 is specifically configured to: determine the minimum fusion error value corresponding to each pixel point in the image to be identified; determine a pixel point as a normal pixel point if the minimum fusion error value corresponding to that pixel point is less than or equal to the minimum fusion error threshold corresponding to that pixel point; and determine a pixel point as an abnormal pixel point if the minimum fusion error value corresponding to that pixel point is greater than the minimum fusion error threshold corresponding to that pixel point.
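Assuming the fitted mapping from the previous sketch, the identification step reduces to a per-pixel comparison, for example:

import numpy as np

def identify_abnormal_pixels(min_error_map, feature_map, threshold_fn):
    """Flag abnormal pixels: True where the per-pixel minimum fusion error
    exceeds the threshold predicted from that pixel's measurement
    characteristic value.

    threshold_fn : callable mapping feature value -> minimum fusion error
                   threshold (e.g. the fitted polynomial sketched above).
    """
    thresholds = threshold_fn(feature_map)   # per-pixel threshold map
    return min_error_map > thresholds        # True marks abnormal pixel points

Pixels flagged True correspond to the abnormal pixel points described above; all remaining pixels are treated as normal pixel points.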
It is understood that, in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
Accordingly, the present embodiments provide a computer storage medium storing an anomalous pixel identification program that, when executed by at least one processor, implements the steps of the method of any of the preceding embodiments.
Based on the above composition of the abnormal pixel identification apparatus 110 and the computer storage medium, refer to fig. 12, which shows a schematic diagram of a specific hardware structure of an electronic device provided in an embodiment of the present application. As shown in fig. 12, the electronic device 120 may include: a communication interface 1201, a memory 1202, and a processor 1203; the components are coupled together through a bus system 1204. It can be understood that the bus system 1204 is used to implement connection and communication among these components. In addition to a data bus, the bus system 1204 also includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are labeled as the bus system 1204 in fig. 12. The communication interface 1201 is used for receiving and sending signals during information transmission and reception with other external network elements;
a memory 1202 for storing a computer program operable on the processor 1203;
a processor 1203, configured to perform the following when running the computer program:
acquiring an image to be identified;
determining a mapping relation between a minimum fusion error threshold value and a measurement characteristic value in the image to be identified; the mapping relation is obtained by analyzing according to the minimum error graph under at least one distance and the corresponding measurement characteristic graph;
determining a minimum fusion error threshold corresponding to the image to be identified according to the mapping relation;
and performing abnormal pixel identification on the image to be identified according to the minimum fusion error threshold value, and determining abnormal pixel points in the image to be identified.
It will be appreciated that the memory 1202 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). The memory 1202 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 1203 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1203 or by instructions in the form of software. The processor 1203 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory 1202, and the processor 1203 reads the information in the memory 1202 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 1203 is further configured to execute the steps of the method of any of the previous embodiments when running the computer program.
In some embodiments, refer to fig. 13, which shows a schematic structural diagram of an electronic device 120 provided in an embodiment of the present application. As shown in fig. 13, the electronic device 120 at least includes the abnormal pixel identification apparatus 110 according to any one of the foregoing embodiments.
In the embodiment of the present application, the electronic device 120 acquires an image to be identified; determines the mapping relation between the minimum fusion error threshold and the measurement characteristic value in the image to be identified, the mapping relation being obtained by analysis according to the minimum error map at at least one distance and the corresponding measurement characteristic map; determines, according to the mapping relation, the minimum fusion error thresholds corresponding to a plurality of pixel points in the image to be identified; and performs abnormal pixel identification on the plurality of pixel points in the image to be identified according to the minimum fusion error thresholds, thereby determining the abnormal pixel points in the image to be identified. In this way, the corresponding minimum fusion error threshold is determined through the mapping relation between the minimum fusion error threshold and the measurement characteristic value, each pixel point in the image to be identified is identified as normal or abnormal according to that threshold, and the abnormal pixel points are marked, so that the pixel points that need to be repaired can be obtained accurately during subsequent repair.
It should be noted that, in the present application, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An abnormal pixel identification method, characterized in that the method comprises:
acquiring an image to be identified; wherein the image to be identified is acquired through a dual-frequency depth camera;
determining a mapping relation between the minimum fusion error threshold value and the measurement characteristic value in the image to be identified, wherein the mapping relation is obtained by analyzing according to a minimum error graph under at least one distance and a corresponding measurement characteristic graph;
determining a minimum fusion error threshold value corresponding to each of a plurality of pixel points in the image to be identified according to the mapping relation;
according to the minimum fusion error threshold value, carrying out abnormal pixel identification on a plurality of pixel points in the image to be identified, and determining abnormal pixel points in the image to be identified;
wherein the method further comprises:
acquiring a dual-frequency depth image at at least one distance;
determining a minimum error map at the at least one distance according to the dual-frequency depth image at the at least one distance;
acquiring a measurement characteristic map under at least one distance, wherein the measurement characteristic map at least comprises a measurement amplitude map, a measurement gray scale map or a measurement depth map;
and performing fitting analysis according to the minimum error graph under the at least one distance and the corresponding measurement characteristic graph under the at least one distance to determine the mapping relation between the minimum fusion error threshold and the measurement characteristic value.
2. The method of claim 1, wherein acquiring the dual-frequency depth image at the at least one distance comprises:
and acquiring images at different distances in a static scene, and determining a dual-frequency depth image at the at least one distance.
3. The method of claim 1, wherein determining the minimum error map at the at least one range from the dual-frequency depth image at the at least one range comprises:
and respectively carrying out minimum fusion error calculation according to the double-frequency depth image under the at least one distance, and determining a minimum error graph corresponding to the at least one distance.
4. The method according to claim 3, wherein the determining the minimum error maps corresponding to the at least one distance by performing minimum fusion error calculation according to the dual-frequency depth images at the at least one distance respectively comprises:
determining a first measured distance value and at least one first measured fuzzy number under a first frequency and a second measured distance value and at least one second measured fuzzy number under a second frequency according to the double-frequency depth image under the first distance;
calculating at least one first candidate distance corresponding to a first frequency according to the first measurement distance value under the first frequency and the at least one first measurement fuzzy number;
calculating at least one second candidate distance corresponding to a second frequency according to the second measurement distance value under the second frequency and the at least one second measurement fuzzy number;
calculating the minimum error map corresponding to the first distance based on at least one first candidate distance corresponding to the first frequency and at least one second candidate distance corresponding to the second frequency;
wherein the first distance is any one of the at least one distance.
5. The method of claim 4, wherein calculating the minimum error map for a first distance based on at least one first candidate distance for the first frequency and at least one second candidate distance for the second frequency comprises:
performing difference calculation on at least one first candidate distance corresponding to the first frequency and at least one second candidate distance corresponding to the second frequency to obtain a plurality of difference values;
and obtaining a minimum error map corresponding to the first distance according to the minimum difference value in the plurality of difference values.
6. The method according to claim 1, wherein the performing fitting analysis according to the minimum error map at the at least one distance and the corresponding measured feature map to determine the mapping relationship between the minimum fusion error threshold and the measured feature value comprises:
constructing a first distribution graph according to the minimum error graph and the corresponding measurement characteristic graph under the at least one distance; wherein the first distribution map is used for reflecting the mapping relation between the minimum fusion error threshold value and the measured characteristic value;
determining a minimum fusion error threshold corresponding to at least one measured characteristic value based on the first distribution graph;
and fitting according to the at least one measurement characteristic value and the corresponding minimum fusion error threshold value, and determining the mapping relation between the minimum fusion error threshold value and the measurement characteristic value.
7. The method of claim 6, wherein determining a minimum fusion error threshold for at least one measured feature value based on the first profile comprises:
determining a minimum error value corresponding to the first measured characteristic value based on the first distribution graph;
calculating a preset quantile of a minimum error value corresponding to the first measurement characteristic value;
determining the preset quantile as a minimum fusion error threshold corresponding to the first measurement characteristic value;
wherein the first measured characteristic value is any one of the at least one measured characteristic value.
8. The method according to any one of claims 1 to 7, wherein the determining a minimum fusion error threshold corresponding to each of a plurality of pixel points in the image to be recognized according to the mapping relationship comprises:
determining the measurement characteristic values corresponding to a plurality of pixel points in the image to be identified;
determining a minimum fusion error threshold corresponding to the image to be identified based on the measurement characteristic values corresponding to the pixel points and the mapping relation; and the pixel points in the image to be identified and the minimum fusion error threshold value have a corresponding relation.
9. The method according to claim 1, wherein the performing abnormal pixel identification on a plurality of pixel points in the image to be identified according to the minimum fusion error threshold value to determine abnormal pixel points in the image to be identified comprises:
determining the minimum fusion error value corresponding to each pixel point in the image to be identified;
if the minimum fusion error value corresponding to the pixel point is less than or equal to the minimum fusion error threshold corresponding to the pixel point, determining the pixel point as a normal pixel point;
and if the minimum fusion error value corresponding to the pixel point is greater than the minimum fusion error threshold corresponding to the pixel point, determining the pixel point as an abnormal pixel point.
10. An abnormal pixel identification device, comprising:
an acquisition unit configured to acquire an image to be identified; wherein the image to be identified is acquired by a dual-frequency depth camera;
the mapping unit is configured to determine a mapping relation between a minimum fusion error threshold value and a measurement characteristic value in the image to be identified, wherein the mapping relation is obtained by analyzing a minimum error graph under at least one distance and a corresponding measurement characteristic graph;
the determining unit is configured to determine a minimum fusion error threshold value corresponding to each of a plurality of pixel points in the image to be identified according to the mapping relation;
the identification unit is configured to perform abnormal pixel identification on a plurality of pixel points in the image to be identified according to the minimum fusion error threshold value, and determine abnormal pixel points in the image to be identified;
the mapping unit is further configured to acquire a dual-frequency depth image at at least one distance; determine a minimum error map at the at least one distance according to the dual-frequency depth image at the at least one distance; acquire a measurement characteristic map at the at least one distance, wherein the measurement characteristic map comprises at least a measurement amplitude map, a measurement gray scale map, or a measurement depth map; and perform fitting analysis according to the minimum error map at the at least one distance and the corresponding measurement characteristic map at the at least one distance to determine the mapping relation between the minimum fusion error threshold and the measurement characteristic value.
11. An electronic device, characterized in that the electronic device comprises:
a memory for storing a computer program capable of running on the processor;
a processor for performing the method of any one of claims 1 to 9 when running the computer program.
12. A computer storage medium, characterized in that it stores a computer program which, when executed by at least one processor, implements the method of any one of claims 1 to 9.
CN202210796052.9A 2022-07-07 2022-07-07 Abnormal pixel identification method, device and equipment and computer storage medium Active CN114881908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210796052.9A CN114881908B (en) 2022-07-07 2022-07-07 Abnormal pixel identification method, device and equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210796052.9A CN114881908B (en) 2022-07-07 2022-07-07 Abnormal pixel identification method, device and equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN114881908A CN114881908A (en) 2022-08-09
CN114881908B true CN114881908B (en) 2022-09-30

Family

ID=82683061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210796052.9A Active CN114881908B (en) 2022-07-07 2022-07-07 Abnormal pixel identification method, device and equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN114881908B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833370A (en) * 2020-07-22 2020-10-27 浙江光珀智能科技有限公司 Flight pixel filtering method and system
CN112446836A (en) * 2019-09-05 2021-03-05 浙江舜宇智能光学技术有限公司 Data processing method and system for TOF depth camera
CN113219476A (en) * 2021-07-08 2021-08-06 武汉市聚芯微电子有限责任公司 Ranging method, terminal and storage medium
WO2022133976A1 (en) * 2020-12-25 2022-06-30 深圳市大疆创新科技有限公司 Tof module detection method, and electronic device and readable storage medium
CN114697521A (en) * 2020-12-29 2022-07-01 深圳市光鉴科技有限公司 TOF camera motion blur detection method, system, equipment and storage medium
CN114697478A (en) * 2020-12-29 2022-07-01 深圳市光鉴科技有限公司 Depth camera

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10884109B2 (en) * 2018-03-30 2021-01-05 Microsoft Technology Licensing, Llc Analytical-adaptive multifrequency error minimization unwrapping
CN113470096A (en) * 2020-03-31 2021-10-01 华为技术有限公司 Depth measurement method and device and terminal equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446836A (en) * 2019-09-05 2021-03-05 浙江舜宇智能光学技术有限公司 Data processing method and system for TOF depth camera
CN111833370A (en) * 2020-07-22 2020-10-27 浙江光珀智能科技有限公司 Flight pixel filtering method and system
WO2022133976A1 (en) * 2020-12-25 2022-06-30 深圳市大疆创新科技有限公司 Tof module detection method, and electronic device and readable storage medium
CN114697521A (en) * 2020-12-29 2022-07-01 深圳市光鉴科技有限公司 TOF camera motion blur detection method, system, equipment and storage medium
CN114697478A (en) * 2020-12-29 2022-07-01 深圳市光鉴科技有限公司 Depth camera
CN113219476A (en) * 2021-07-08 2021-08-06 武汉市聚芯微电子有限责任公司 Ranging method, terminal and storage medium

Also Published As

Publication number Publication date
CN114881908A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN110766724B (en) Target tracking network training and tracking method and device, electronic equipment and medium
CN111681256B (en) Image edge detection method, image edge detection device, computer equipment and readable storage medium
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US20160350936A1 (en) Methods and Systems for Detecting Moving Objects in a Sequence of Image Frames Produced by Sensors with Inconsistent Gain, Offset, and Dead Pixels
US8989481B2 (en) Stereo matching device and method for determining concave block and convex block
US11061102B2 (en) Position estimating apparatus, position estimating method, and terminal apparatus
CN110335313B (en) Audio acquisition equipment positioning method and device and speaker identification method and system
US11657485B2 (en) Method for expanding image depth and electronic device
CN111383246B (en) Scroll detection method, device and equipment
US20110286674A1 (en) Detecting potential changed objects in images
CN112966654A (en) Lip movement detection method and device, terminal equipment and computer readable storage medium
CN111723634A (en) Image detection method and device, electronic equipment and storage medium
EP2927635B1 (en) Feature set optimization in vision-based positioning
CN107392948B (en) Image registration method of amplitude-division real-time polarization imaging system
CN113542868A (en) Video key frame selection method and device, electronic equipment and storage medium
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN114881908B (en) Abnormal pixel identification method, device and equipment and computer storage medium
CN110706257B (en) Identification method of effective characteristic point pair, and camera state determination method and device
US20240127567A1 (en) Detection-frame position-accuracy improving system and detection-frame position correction method
CN116152714A (en) Target tracking method and system and electronic equipment
CN112950709B (en) Pose prediction method, pose prediction device and robot
CN111161225B (en) Image difference detection method and device, electronic equipment and storage medium
CN109214398B (en) Method and system for measuring rod position from continuous images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant