CN111798506A - Image processing method, control method, terminal and computer readable storage medium


Info

Publication number
CN111798506A
CN111798506A (application CN202010618277.6A)
Authority
CN
China
Prior art keywords
data, phase, feature, basic, image
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010618277.6A
Other languages
Chinese (zh)
Inventor
陈琢
应忍冬
刘佩林
邹耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Data Miracle Intelligent Technology Co ltd
Original Assignee
Shanghai Data Miracle Intelligent Technology Co ltd
Application filed by Shanghai Data Miracle Intelligent Technology Co ltd
Priority to CN202010618277.6A
Publication of CN111798506A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention relates to the technical field of image processing, and in particular to an image processing method, a control method, a terminal and a computer readable storage medium. The image processing method comprises the following steps: acquiring first basic image data, and acquiring first feature data from the first basic image data; analyzing the first feature data to form first depth feature data matched with the first feature data; acquiring second basic data, and forming second depth feature data from the second basic data; and reconstructing complete depth image data from the first depth feature data, the second depth feature data and the original basic data.

Description

Image processing method, control method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, a control method, a terminal, and a computer-readable storage medium.
Background
ToF cameras are among the depth cameras that have emerged in recent years, and their strong adaptability and wide measurement range are making them a mainstream choice. ToF cameras are widely used in 3D applications such as three-dimensional reconstruction, gesture recognition and people-flow statistics, and the most basic requirement these applications place on the camera is high accuracy. Although a ToF camera can acquire a high-precision depth image of a static scene, it is difficult to acquire an image of the same precision in a moving scene: limited by the imaging principle of the ToF camera, the motion blur around moving objects in the image greatly degrades the accuracy of the depth measurement. This characteristic also limits the adoption of ToF cameras in many other fields.
The motion blur problem of ToF cameras has drawn researchers' attention, and current solutions fall broadly into two categories. The first is motion estimation and compensation, which uses optical flow or similar methods to estimate the radial motion of the object and processes the raw data images based on that motion information to correct the blur. This approach has a huge computational cost, makes it difficult to achieve real-time performance and accuracy at the same time, and rarely performs well in practical applications. The second reduces the amount of raw data and the integration time: whereas computing a depth image normally requires 4 raw frames, this method uses only 2 consecutive raw frames acquired within a shorter integration time, shrinking the object's range of motion and thereby alleviating motion blur.
A related Chinese patent, "Motion blur elimination method and device for a time-of-flight three-dimensional sensor", belongs to the motion estimation and compensation category. Its core idea is similar to that of the present application: moving objects in different raw phase images are aligned by a dedicated procedure so as to eliminate motion blur. That patent performs the alignment with a dense image-block matching technique, whereas the present application uses the raw data values themselves to locate the regions of object displacement in the different raw phase images. Compared with the traditional optical flow method, this greatly reduces the computational complexity.
Disclosure of Invention
In view of the technical deficiencies, the present invention provides an image processing method, comprising:
acquiring first basic image data, and acquiring first characteristic data according to the first basic image data;
analyzing the first feature data to form first depth feature data matched with the first feature data;
acquiring second basic data, and forming second depth feature data according to the second basic data;
and reconstructing complete depth image data according to the first depth feature data, the second depth feature data and the original basic data.
Preferably, in the image processing method, the acquiring first basic image data, and the acquiring first feature data according to the first basic image data specifically includes:
under the control of preset parameters, acquiring a first phase image, a second phase image, a third phase image and a fourth phase image through an acquisition unit; forming the first base image data from the first, second, third and fourth phase images;
acquiring first phase data matched with the pixel point in the first phase image, second phase data matched with the pixel point in the second phase image, third phase data matched with the pixel point in the third phase image and fourth phase data matched with the pixel point in the fourth phase image according to each pixel point;
forming a characteristic pixel point according to the first phase data, the second phase data, the third phase data and the fourth phase data;
and forming the first characteristic data according to the characteristic pixel points.
Preferably, in the image processing method, the forming a characteristic pixel according to the first phase data, the second phase data, the third phase data, and the fourth phase data specifically includes:
calculating according to the first phase data, the second phase data, the third phase data and the fourth phase data of the pixel points according to a first calculation method to form a first calculation result, and judging whether the first calculation result is matched with a first threshold range;
and under the condition that the first calculation result is matched with a first threshold range, judging that the pixel points matched with the first calculation result form characteristic pixel points.
Preferably, in the image processing method, the calculating according to the first phase data, the second phase data, the third phase data, and the fourth phase data of the pixel point by a first calculating method to form a first calculation result, and determining whether the first calculation result matches a first threshold range specifically includes:
the first calculation method specifically includes:
|(DCS0 + DCS2) - (DCS1 + DCS3)| = T
wherein DCS0 is the first phase data;
DCS1 is the second phase data;
DCS2 is the third phase data;
DCS3 is the fourth phase data;
T is the first calculation result;
and the first calculation result is judged to match the first threshold range when it is not smaller than the first threshold range.
Preferably, in the image processing method, the analyzing the first feature data to form first depth feature data matched with the first feature data specifically includes:
processing and analyzing all the characteristic pixel points, and removing discrete pixel points to form basic characteristic pixel points;
acquiring calculation basic feature data from the basic feature pixel points, and forming a fuzzy feature area according to the calculation basic feature data;
acquiring first phase data, second phase data, third phase data and fourth phase data of all the feature pixel points in the fuzzy feature region, calculating to form a first comparison value according to the first phase data and the third phase data, and calculating to form a second comparison value according to the second phase data and the fourth phase data;
classifying all the characteristic pixel points according to the first comparison value and the second comparison value to form a fuzzy region set;
calculating and forming an average position coordinate value of the fuzzy region according to the fuzzy region set;
according to the average position coordinate value of the fuzzy area, second phase data is used as standard data, and first deviation data between first phase data and the second phase data is obtained; third deviation data between third phase data and second phase data, fourth deviation data between fourth phase data and the second phase data;
performing translation processing on all the characteristic pixel points in the fuzzy characteristic region according to the first deviation data, the third deviation data and the fourth deviation data so as to align the characteristic pixel points;
and performing depth processing on the aligned feature pixel points in the fuzzy feature region according to a preset calculation method to form the first depth feature data.
Preferably, in the image processing method, the obtaining second basic data and forming second depth feature data according to the second basic data specifically includes:
all the characteristic pixel points in the fuzzy characteristic region are subjected to moving processing according to the first deviation data, the third deviation data and the fourth deviation data, and then combined with a second set of base data to form the second basic data;
and forming the second depth feature data from the second basic data according to a preset calculation method.
In another aspect, the present invention provides an image processing apparatus, including:
a first feature data forming unit configured to acquire first basic image data and acquire first feature data based on the first basic image data;
the first depth feature data forming unit is used for analyzing and processing the first feature data to form first depth feature data matched with the first feature data;
the second depth characteristic data forming unit is used for acquiring second basic data and forming second depth characteristic data according to the second basic data;
and the processing unit is used for reconstructing depth image data according to the first depth characteristic data, the second depth characteristic data and the original basic data.
In another aspect, the present invention further provides a terminal, wherein the terminal includes:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method as described in any one of the above.
Finally, the invention further provides a computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements an image processing method as described in any of the above.
The embodiment of the invention solves the problem of motion blur from fast-moving objects degrading the depth image. Compared with earlier motion estimation and compensation methods, the technical scheme adopted by the invention greatly reduces the computational complexity and can essentially meet real-time requirements; compared with other methods that process raw data, the algorithm better matches the imaging principle of the ToF camera and effectively improves the accuracy of the restored depth map.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present invention.
Detailed Description
The invention provides an image processing method, which comprises the following steps:
as shown in fig. 1, step S110, acquiring first basic image data, and acquiring first feature data according to the first basic image data;
as shown in fig. 2, in step S1101, under the control of predetermined parameters, a first phase image, a second phase image, a third phase image and a fourth phase image are obtained by an acquisition unit; forming the first base image data from the first, second, third and fourth phase images; wherein the predetermined parameters include a set modulation frequency, an HDR mode, and an integration time. The first phase image is a 0-degree phase image, the second phase image is a 90-degree phase image, the third phase image is a 180-degree phase image, and the fourth phase image is a 270-degree phase image.
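By way of illustration only, the following is a minimal sketch of how step S1101's four-phase acquisition might be organized in code; the parameter names, the default values and the capture_phase_image callable are hypothetical placeholders assumed by the editor, not part of this publication:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class CaptureParams:
        # Predetermined parameters named in step S1101 (values are illustrative only).
        modulation_frequency_hz: float = 20e6
        hdr_mode: bool = False
        integration_time_us: float = 1000.0

    def build_first_base_data(capture_phase_image, params: CaptureParams) -> np.ndarray:
        """Stack the 0/90/180/270 degree phase images into one H x W x 4 array."""
        images = [capture_phase_image(deg, params) for deg in (0, 90, 180, 270)]
        return np.stack(images, axis=-1)  # DCS0..DCS3 along the last axis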
Step S1102, obtaining, according to each pixel point, first phase data matched with the pixel point in the first phase image, second phase data matched with the pixel point in the second phase image, third phase data matched with the pixel point in the third phase image, and fourth phase data matched with the pixel point in the fourth phase image;
step S1103, forming a feature pixel according to the first phase data, the second phase data, the third phase data, and the fourth phase data; in particular, the amount of the solvent to be used,
step S11031, calculating according to the first phase data, the second phase data, the third phase data, and the fourth phase data of the pixel point by a first calculation method to form a first calculation result, and determining whether the first calculation result matches a first threshold range; and judging that the first calculation result matches the first threshold range in the state that the first calculation result is not smaller than the first threshold range. The first calculation method specifically includes:
|(DCSO+DCS2)-(DCS1+DCS3)|=T
wherein DCS0 is the first phase data;
DCS1 is the second phase data;
DCS3 is the third phase data;
DCS4 is the fourth phase data;
t is the first calculation result;
step S11032, in the state that the first calculation result is matched with the first threshold range, judging that the pixel points matched with the first calculation result form characteristic pixel points. The first threshold range may be 50 to 200, and when the first calculation result of the pixel is not smaller than the first threshold range, the pixel may be determined to be a motion blur pixel. The motion-blurred pixel points are the feature pixel points.
And step S1104, forming the first feature data from the feature pixel points. The first feature data is the set of motion-blurred pixel points.
Step S120, analyzing and processing the first feature data to form first depth feature data matched with the first feature data; the method specifically comprises the following steps:
as shown in fig. 3, step S1201 is to perform processing analysis on all the feature pixels, and remove discrete pixels to form basic feature pixels; the discrete pixel points are formed due to reflection or other reasons, and the discrete pixel points accord with the characteristics of the motion blur pixel points in the calculation of the steps, so that the discrete pixel points need to be removed through morphological processing or connected domain analysis. The basic characteristic pixel points are motion fuzzy pixel point sets after the discrete pixel points are removed.
Step S1202, obtaining calculation basic feature data from the basic feature pixel points, and forming a fuzzy feature region from the calculation basic feature data. The fuzzy feature region is formed as follows:
obtain the X coordinate values and Y coordinate values of the basic feature pixel points; select the 3 largest and the 3 smallest X coordinate values among all the X coordinates, and the 3 largest and the 3 smallest Y coordinate values among all the Y coordinates; average the 3 largest X coordinates to form a maximum average value x_max, average the 3 smallest X coordinates to form a minimum average value x_min, and likewise form a maximum average value y_max from the 3 largest Y coordinates and a minimum average value y_min from the 3 smallest Y coordinates. The four coordinate points (x_max, y_max), (x_max, y_min), (x_min, y_max) and (x_min, y_min) then define a rectangular area, and this rectangular area is the fuzzy feature region.
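A sketch of this bounding-box construction, assuming a boolean mask of the basic feature pixel points with at least three set pixels and the usual numpy conventions (x = column index, y = row index); the function name is the editor's:

    import numpy as np

    def blur_feature_region(base_mask: np.ndarray):
        """Rectangle spanned by the averaged 3 extreme coordinates in x and y.

        Returns x_min, x_max, y_min, y_max, whose four corner combinations
        define the fuzzy feature region of step S1202.
        """
        ys, xs = np.nonzero(base_mask)       # y = row index, x = column index
        xs, ys = np.sort(xs), np.sort(ys)
        x_max = xs[-3:].mean()               # average of the 3 largest x values
        x_min = xs[:3].mean()                # average of the 3 smallest x values
        y_max = ys[-3:].mean()
        y_min = ys[:3].mean()
        return x_min, x_max, y_min, y_max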
Step S1203, acquiring first phase data, second phase data, third phase data and fourth phase data of all the feature pixel points in the fuzzy feature area, calculating to form a first comparison value according to the first phase data and the third phase data, and calculating to form a second comparison value according to the second phase data and the fourth phase data; the method specifically comprises the following steps:
Diff0 = |DCS0| - |DCS2|,
Diff1 = |DCS1| - |DCS3|;
wherein Diff0 is the first comparison value and Diff1 is the second comparison value.
Step S1204, classifying all the characteristic pixel points according to the first comparison value and the second comparison value to form a fuzzy region set. For example, a comparison threshold Threshold_diff is preset; it may range from 100 to 210.
The fuzzy region set may be formed as follows:
if Diff0 > Threshold_diff and |Diff1| ≤ Threshold_diff, the pixel point is regarded as a region 1-1 blurred pixel point;
if Diff0 > Threshold_diff and Diff1 > Threshold_diff, the pixel point is regarded as a region 1-2 blurred pixel point;
if |Diff0| ≤ Threshold_diff and Diff1 > Threshold_diff, the pixel point is regarded as a region 1-3 blurred pixel point;
if Diff0 < -Threshold_diff and |Diff1| ≤ Threshold_diff, the pixel point is regarded as a region 2-1 blurred pixel point;
if Diff0 < -Threshold_diff and Diff1 < -Threshold_diff, the pixel point is regarded as a region 2-2 blurred pixel point;
if |Diff0| ≤ Threshold_diff and Diff1 < -Threshold_diff, the pixel point is regarded as a region 2-3 blurred pixel point;
It should be noted that if the comparison threshold is set differently, the number of fuzzy regions in the fuzzy region set also differs: in this embodiment a comparison threshold of 100 yields 6 fuzzy regions, while other values may yield 8 fuzzy regions, which is not described herein in detail.
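The six rules above translate directly into array operations; a sketch, assuming the same H x W x 4 phase-image layout as before and the illustrative threshold of 100:

    import numpy as np

    def classify_blur_regions(dcs: np.ndarray, region_mask: np.ndarray,
                              threshold: float = 100.0) -> dict:
        """Split the blurred pixels into the six region classes via Diff0/Diff1."""
        diff0 = np.abs(dcs[..., 0]) - np.abs(dcs[..., 2])
        diff1 = np.abs(dcs[..., 1]) - np.abs(dcs[..., 3])
        t = threshold
        rules = {
            "1-1": (diff0 > t) & (np.abs(diff1) <= t),
            "1-2": (diff0 > t) & (diff1 > t),
            "1-3": (np.abs(diff0) <= t) & (diff1 > t),
            "2-1": (diff0 < -t) & (np.abs(diff1) <= t),
            "2-2": (diff0 < -t) & (diff1 < -t),
            "2-3": (np.abs(diff0) <= t) & (diff1 < -t),
        }
        # Restrict every class to the fuzzy feature region found in step S1202.
        return {name: cond & region_mask for name, cond in rules.items()}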
Step S1205, calculating and forming an average position coordinate value for each fuzzy region according to the fuzzy region set. If there are 6 fuzzy regions in the fuzzy region set, six average coordinate values are obtained, one per region: the average position of the blurred pixel points in region 1-1, region 1-2, region 1-3, region 2-1, region 2-2 and region 2-3 respectively (the original publication shows the six coordinate pairs only as images).
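A sketch of the average-position computation, assuming the per-region boolean masks produced by the classification step above; names are the editor's:

    import numpy as np

    def region_average_positions(region_sets: dict) -> dict:
        """Average (x, y) position of the blurred pixel points in each region."""
        averages = {}
        for name, mask in region_sets.items():
            ys, xs = np.nonzero(mask)
            if xs.size:                       # skip empty regions
                averages[name] = (xs.mean(), ys.mean())
        return averages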
Step S1206, according to the average position coordinate value of the fuzzy area, taking second phase data as standard data, and acquiring first deviation data between first phase data and the second phase data; third deviation data between third phase data and second phase data, fourth deviation data between fourth phase data and the second phase data; continuing with the example of 6 fuzzy areas described above,
according to the 6 average position coordinates, taking the second phase data as standard data, and respectively calculating and acquiring first deviation data between the first phase data and the second phase data; third deviation data between third phase data and second phase data, and fourth deviation data between fourth phase data and the second phase data. Specifically, the method comprises the following steps:
first deviation data between the first phase data and the second phase data, third deviation data between the third phase data and the second phase data, and fourth deviation data between the fourth phase data and the second phase data are each calculated from the corresponding average position coordinates (the original publication gives the three expressions only as images).
Step S1207, moving all the feature pixel points in the fuzzy feature region according to the first deviation data, the third deviation data and the fourth deviation data so that the feature pixel points are aligned, i.e. so that the objects in the first, second, third and fourth phase images are aligned to the same position. The moving process may be a translation, a rotation or another transform; for example, when a rotation is required, the movement may follow a fitted curve.
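A sketch of the translation case of this moving process, assuming the deviation data have been rounded to whole-pixel offsets keyed by phase index; the sign convention and the wrap-around behaviour of np.roll are simplifications by the editor (the vacated pixels are the cavity handled in step S1301):

    import numpy as np

    def align_phase_images(dcs: np.ndarray, offsets: dict) -> np.ndarray:
        """Shift phase images 0, 2 and 3 onto phase image 1 (the standard data).

        offsets maps phase index -> (dx, dy) deviation data in whole pixels,
        e.g. {0: (3, 0), 2: (-2, 1), 3: (-4, 1)}.
        """
        aligned = dcs.copy()
        for phase, (dx, dy) in offsets.items():
            aligned[..., phase] = np.roll(dcs[..., phase], shift=(-dy, -dx), axis=(0, 1))
        return aligned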
And step S1208, performing depth calculation on the feature pixel points aligned in the fuzzy feature region according to a preset calculation method to form the first depth feature data. The specific calculation formula is given only as an image in the original publication; in it, c denotes the speed of light and f_LED the modulation frequency.
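Because the patent's depth expression survives only as an image, the sketch below substitutes the textbook four-phase ToF formula, which uses the same named quantities (c, f_LED); the sign convention of the arctangent varies between sensors and is an assumption here:

    import numpy as np

    C = 299_792_458.0  # speed of light in m/s

    def depth_from_phases(dcs: np.ndarray, f_led: float) -> np.ndarray:
        """Textbook four-phase ToF depth (stand-in for the patent's image-only formula).

        The phase offset is the arctangent of the two phase-pair differences,
        and the depth is proportional to c / (4 * pi * f_led).
        """
        phi = np.arctan2(dcs[..., 1] - dcs[..., 3], dcs[..., 0] - dcs[..., 2])
        phi = np.mod(phi, 2.0 * np.pi)          # wrap into [0, 2*pi)
        return C * phi / (4.0 * np.pi * f_led)  # metres; half the round trip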
Step S130, second basic data are obtained, and second depth feature data are formed according to the second basic data; the method specifically comprises the following steps:
step S1301, performing translation processing on all the characteristic pixel points in the fuzzy characteristic region according to the first deviation data, the third deviation data and the fourth deviation data, and combining second basic data to form second basic data; and all the characteristic pixel points in the fuzzy characteristic region form a cavity part after translation processing is carried out according to the first deviation data, the third deviation data and the fourth deviation data, and form second basic data according to the cavity part and the second basic data. The integration time of the second basic data is half of the integration time of the first basic image data, and the fuzzy area is also half of the first basic image data.
And step S1302, forming the second depth feature data from the second basic data according to a preset calculation method.
And S140, reconstructing complete depth image data according to the first depth feature data, the second depth feature data and the original basic data, wherein the original basic data is the data without motion blur.
The invention provides an image processing method that can remove motion blur. For each pixel point, the first, second, third and fourth phase data are compared against one another to detect the pixel points affected by motion blur; the degree of motion blur is judged from their mathematical relationship, from which the offsets of the moving object between the first, second, third and fourth phase data are calculated. The detected motion regions are then each shifted by the offsets of this group of phase data so that the moving object is aligned across the first, second, third and fourth phase data. At the same time, first, second, third and fourth phase data are acquired under two different integration times, so that the correct depth values of the cavity pixel points left behind after the object is shifted can be calculated from the other group of phase data. In this way, a depth image with the motion blur removed is finally computed. The embodiment of the invention solves the problem of motion blur from fast-moving objects degrading the depth image. Compared with earlier motion estimation and compensation methods, the technical scheme adopted by the invention greatly reduces the computational complexity and can essentially meet real-time requirements; compared with other methods that process raw data, the algorithm better matches the imaging principle of the ToF camera and effectively improves the accuracy of the restored depth map.
In another aspect, the present invention provides an image processing apparatus, including:
a first feature data forming unit configured to acquire first basic image data and acquire first feature data based on the first basic image data;
the first depth feature data forming unit is used for analyzing and processing the first feature data to form first depth feature data matched with the first feature data;
the second depth characteristic data forming unit is used for acquiring second basic data and forming second depth characteristic data according to the second basic data;
and the processing unit is used for reconstructing depth image data according to the first depth characteristic data, the second depth characteristic data and the original basic data.
The working principle of the image processing apparatus is the same as that of the image processing method, and is not described herein again.
In another aspect, the present invention further provides a terminal, wherein the terminal includes:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method as described in any one of the above.
Finally, the invention further provides a computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements an image processing method as described in any of the above.
Finally, the present invention further provides a computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the control method described above.
The specific embodiments described above are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (9)

1. An image processing method, comprising:
acquiring first basic image data, and acquiring first characteristic data according to the first basic image data;
analyzing the first feature data to form first depth feature data matched with the first feature data;
acquiring second basic data, and forming second depth feature data according to the second basic data;
and reconstructing complete depth image data according to the first depth feature data, the second depth feature data and the original basic data.
2. The image processing method according to claim 1, wherein acquiring first base image data, and acquiring first feature data from the first base image data specifically includes:
under the control of preset parameters, acquiring a first phase image, a second phase image, a third phase image and a fourth phase image through an acquisition unit; forming the first base image data from the first, second, third and fourth phase images;
acquiring first phase data matched with the pixel point in the first phase image, second phase data matched with the pixel point in the second phase image, third phase data matched with the pixel point in the third phase image and fourth phase data matched with the pixel point in the fourth phase image according to each pixel point;
forming a characteristic pixel point according to the first phase data, the second phase data, the third phase data and the fourth phase data;
and forming the first characteristic data according to the characteristic pixel points.
3. The image processing method according to claim 2, wherein forming a feature pixel according to the first phase data, the second phase data, the third phase data, and the fourth phase data specifically comprises:
calculating according to the first phase data, the second phase data, the third phase data and the fourth phase data of the pixel points according to a first calculation method to form a first calculation result, and judging whether the first calculation result is matched with a first threshold range;
and under the condition that the first calculation result is matched with a first threshold range, judging that the pixel points matched with the first calculation result form characteristic pixel points.
4. The image processing method according to claim 3, wherein calculating according to a first calculation method to form a first calculation result according to the first phase data, the second phase data, the third phase data, and the fourth phase data of the pixel point, and determining whether the first calculation result matches a first threshold range specifically includes:
the first calculation method specifically includes:
|(DCS0 + DCS2) - (DCS1 + DCS3)| = T
wherein DCS0 is the first phase data;
DCS1 is the second phase data;
DCS2 is the third phase data;
DCS3 is the fourth phase data;
t is the first calculation result;
and judging that the first calculation result matches the first threshold range in the state that the first calculation result is not smaller than the first threshold range.
5. The image processing method according to claim 2, wherein the analyzing the first feature data to form first depth feature data matched with the first feature data specifically comprises:
processing and analyzing all the characteristic pixel points, and removing discrete pixel points to form basic characteristic pixel points;
acquiring calculation basic feature data from the basic feature pixel points, and forming a fuzzy feature area according to the calculation basic feature data;
acquiring first phase data, second phase data, third phase data and fourth phase data of all the feature pixel points in the fuzzy feature region, calculating to form a first comparison value according to the first phase data and the third phase data, and calculating to form a second comparison value according to the second phase data and the fourth phase data;
classifying all the characteristic pixel points according to the first comparison value and the second comparison value to form a fuzzy region set;
calculating and forming an average position coordinate value of the fuzzy region according to the fuzzy region set;
according to the average position coordinate value of the fuzzy area, second phase data is used as standard data, and first deviation data between first phase data and the second phase data is obtained; third deviation data between third phase data and second phase data, fourth deviation data between fourth phase data and the second phase data;
performing translation processing on all the characteristic pixel points in the fuzzy characteristic region according to the first deviation data, the third deviation data and the fourth deviation data so as to align the characteristic pixel points;
and performing depth processing on the aligned feature pixel points in the fuzzy feature region according to a preset calculation method to form the first depth feature data.
6. The image processing method according to claim 3, wherein obtaining second basic data and forming second depth feature data according to the second basic data specifically comprises:
performing moving processing on all the characteristic pixel points in the fuzzy characteristic region according to the first deviation data, the third deviation data and the fourth deviation data, and combining with a second set of base data to form the second basic data;
and forming second depth feature data according to a preset calculation method according to the second basic data.
7. An image processing apparatus characterized by comprising:
a first feature data forming unit configured to acquire first basic image data and acquire first feature data based on the first basic image data;
the first depth feature data forming unit is used for analyzing and processing the first feature data to form first depth feature data matched with the first feature data;
the second depth characteristic data forming unit is used for acquiring second basic data and forming second depth characteristic data according to the second basic data;
and the processing unit is used for reconstructing depth image data according to the first depth characteristic data, the second depth characteristic data and the original basic data.
8. A terminal, characterized in that the terminal comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method as claimed in any one of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out an image processing method as claimed in any one of claims 1 to 6.
CN202010618277.6A 2020-06-30 2020-06-30 Image processing method, control method, terminal and computer readable storage medium Pending CN111798506A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010618277.6A CN111798506A (en) 2020-06-30 2020-06-30 Image processing method, control method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010618277.6A CN111798506A (en) 2020-06-30 2020-06-30 Image processing method, control method, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111798506A (en) 2020-10-20

Family

ID=72810938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010618277.6A Pending CN111798506A (en) 2020-06-30 2020-06-30 Image processing method, control method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111798506A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255586A (en) * 2021-06-23 2021-08-13 中国平安人寿保险股份有限公司 Human face anti-cheating method based on alignment of RGB (red, green and blue) image and IR (infrared) image and related equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393515A (en) * 2010-07-21 2012-03-28 微软公司 Method and system for lossless dealiasing in time-of-flight (TOF) systems
CN104104941A (en) * 2013-04-08 2014-10-15 三星电子株式会社 3d image acquisition apparatus and method of generating depth image
JP2014528059A (en) * 2011-07-12 2014-10-23 サムスン エレクトロニクス カンパニー リミテッド (Samsung Electronics Co., Ltd.) Blur processing apparatus and method
US20160232684A1 (en) * 2013-10-18 2016-08-11 Alexander Borisovich Kholodenko Motion compensation method and apparatus for depth images
US20170214901A1 (en) * 2016-01-22 2017-07-27 Samsung Electronics Co., Ltd. Method and apparatus for obtaining depth image by using time-of-flight sensor
KR20170088259A (en) * 2016-01-22 2017-08-01 삼성전자주식회사 Method and Apparatus FOR obtaining DEPTH IMAGE USING TOf(Time-of-flight) sensor
CN109903241A (en) * 2019-01-31 2019-06-18 武汉市聚芯微电子有限责任公司 A kind of the depth image calibration method and system of TOF camera system
CN110009672A (en) * 2019-03-29 2019-07-12 香港光云科技有限公司 Promote ToF depth image processing method, 3D rendering imaging method and electronic equipment
CN110297254A (en) * 2018-03-21 2019-10-01 三星电子株式会社 Time-of-flight sensor, three-dimensional imaging device and its driving method using it
CN110428381A (en) * 2019-07-31 2019-11-08 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, mobile terminal and storage medium
CN110942430A (en) * 2019-10-08 2020-03-31 杭州电子科技大学 Method for improving motion blur robustness of TOF camera

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393515A (en) * 2010-07-21 2012-03-28 微软公司 Method and system for lossless dealiasing in time-of-flight (TOF) systems
JP2014528059A (en) * 2011-07-12 2014-10-23 サムスン エレクトロニクス カンパニー リミテッド (Samsung Electronics Co., Ltd.) Blur processing apparatus and method
CN104104941A (en) * 2013-04-08 2014-10-15 三星电子株式会社 3d image acquisition apparatus and method of generating depth image
US20160232684A1 (en) * 2013-10-18 2016-08-11 Alexander Borisovich Kholodenko Motion compensation method and apparatus for depth images
US20170214901A1 (en) * 2016-01-22 2017-07-27 Samsung Electronics Co., Ltd. Method and apparatus for obtaining depth image by using time-of-flight sensor
KR20170088259A (en) * 2016-01-22 2017-08-01 삼성전자주식회사 Method and Apparatus FOR obtaining DEPTH IMAGE USING TOf(Time-of-flight) sensor
CN110297254A (en) * 2018-03-21 2019-10-01 三星电子株式会社 Time-of-flight sensor, three-dimensional imaging device and its driving method using it
CN109903241A (en) * 2019-01-31 2019-06-18 武汉市聚芯微电子有限责任公司 A kind of the depth image calibration method and system of TOF camera system
CN110009672A (en) * 2019-03-29 2019-07-12 香港光云科技有限公司 Promote ToF depth image processing method, 3D rendering imaging method and electronic equipment
CN110428381A (en) * 2019-07-31 2019-11-08 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, mobile terminal and storage medium
CN110942430A (en) * 2019-10-08 2020-03-31 杭州电子科技大学 Method for improving motion blur robustness of TOF camera

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LEE S: "Time-of-flight depth camera motion blur detec-tion and deblurring", IEEE SIGNAL PROCESSING LETTERS, no. 6 *
MARVIN LINDNER 等: "Compensation of Motion Artifacts for Time-of-Flight Cameras", Retrieved from the Internet <URL:https://link.springer.com/chapter/10.1007/978-3-642-03778-8_2> *
王乐 等: "ToF深度相机测量误差校正模型", 系统仿真学报, no. 10 *
胡康哲: "TOF 深度成像系统的研究与实现", 中国优秀硕士学位论文全文数据库 信息科技辑, no. 07 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255586A (en) * 2021-06-23 2021-08-13 中国平安人寿保险股份有限公司 Human face anti-cheating method based on alignment of RGB (red, green and blue) image and IR (infrared) image and related equipment
CN113255586B (en) * 2021-06-23 2024-03-15 中国平安人寿保险股份有限公司 Face anti-cheating method based on RGB image and IR image alignment and related equipment

Similar Documents

Publication Publication Date Title
CN107705333B (en) Space positioning method and device based on binocular camera
CN109086724B (en) Accelerated human face detection method and storage medium
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN109711241B (en) Object detection method and device and electronic equipment
WO2021057455A1 (en) Background motion estimation method and apparatus for infrared image sequence, and storage medium
CN111709893B (en) ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
Chen et al. A color-guided, region-adaptive and depth-selective unified framework for Kinect depth recovery
CN116051820A (en) Single target detection method based on multiple templates
CN116503462A (en) Method and system for quickly extracting circle center of circular spot
Pan et al. Single-image dehazing via dark channel prior and adaptive threshold
CN112801141B (en) Heterogeneous image matching method based on template matching and twin neural network optimization
CN111798506A (en) Image processing method, control method, terminal and computer readable storage medium
CN113674220A (en) Image difference detection method, detection device and storage medium
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
CN106802149B (en) Rapid sequence image matching navigation method based on high-dimensional combination characteristics
CN110599407B (en) Human body noise reduction method and system based on multiple TOF cameras in downward inclination angle direction
CN114926514B (en) Registration method and device of event image and RGB image
CN108010076B (en) End face appearance modeling method for intensive industrial bar image detection
Reich et al. A Real-Time Edge-Preserving Denoising Filter.
CN115760549A (en) Processing method for flattening 3D data of curved surface
CN110264417B (en) Local motion fuzzy area automatic detection and extraction method based on hierarchical model
CN108629333A (en) A kind of face image processing process of low-light (level), device, equipment and readable medium
CN114170109A (en) Image noise reduction method and device based on MEMS (micro-electromechanical systems) stripe structured light
Cao et al. Depth image vibration filtering and shadow detection based on fusion and fractional differential

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination