CN109859124B - Depth image noise reduction method and device

Info

Publication number: CN109859124B
Application number: CN201910027697.4A
Authority: CN (China)
Prior art keywords: noise, depth image, candidate, depth, candidate noise
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109859124A
Inventor: 刘敦浩
Current Assignee: Orbbec Inc
Original Assignee: Shenzhen Orbbec Co Ltd
Application filed by Shenzhen Orbbec Co Ltd
Priority to CN201910027697.4A
Publication of CN109859124A
Application granted
Publication of CN109859124B

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing and provides a depth image noise reduction method and device. The method includes: acquiring at least two frames of depth images and determining one frame as the current frame depth image; comparing the current frame depth image with an adjacent frame depth image to obtain the difference between the two adjacent frames of depth data, and recording the region corresponding to the difference as candidate noise points; taking each candidate noise point as a center, acquiring its connected region, the candidate noise point and its connected region together forming a candidate noise block; filtering the candidate noise blocks to obtain real noise points; and processing the depth data of the real noise points in the current frame depth image so that the real noise points are removed from it. The method effectively removes noise points and noise blocks that appear at the same position in at most two frames, can be applied to depth image noise reduction in multi-frame or dynamic environments, and has broad application prospects.

Description

Depth image noise reduction method and device
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for reducing noise of a depth image.
Background
Binocular structured light matches the infrared images acquired by two infrared (IR) lenses to generate a disparity, from which depth data are computed. Real environments are relatively complex, however, and in some scenes environmental factors (such as interfering light sources or special materials) cause the infrared images to be mismatched in certain areas. This produces erroneous data: positions whose depth value should be 0 instead take on abnormal depth values, and these values are noise points. The problem is not limited to binocular structured light; similar situations arise when images are captured with technologies such as time of flight (TOF) or stereo vision.
Current approaches generally filter a single-frame depth image or apply a noise reduction algorithm to it. Such methods, however, can only handle salt-and-pepper noise and cannot remove noise blocks. Some noise reduction algorithms even smooth the depth data, modifying the original depth values and thereby degrading depth quality.
Disclosure of Invention
In view of this, embodiments of the present invention provide a depth image noise reduction method and device, so as to solve the technical problem in the prior art that noise reduction performed on a single-frame depth image gives a poor result.
A first aspect of an embodiment of the present invention provides a depth image denoising method, including:
acquiring at least two frames of depth images, and determining one frame of depth image as a current frame of depth image;
comparing the current frame depth image with the adjacent frame depth image to obtain the difference of the two adjacent frame depth image data, and recording the region corresponding to the difference as a candidate noise point;
taking the candidate noise point as a center, acquiring a connected region of the candidate noise point, and forming a candidate noise block by the candidate noise point and the connected region thereof;
filtering the candidate noise block to obtain a real noise point;
and processing the depth data of the real noise in the current frame depth image so as to remove the real noise from the current frame depth image.
A second aspect of an embodiment of the present invention provides a depth image noise reduction device, including:
a first module, configured to acquire at least two frames of depth images and determine one frame of depth image as the current frame depth image;
a second module, configured to compare the current frame depth image with an adjacent frame depth image to obtain the difference between the two adjacent frames of depth image data, and to record the region corresponding to the difference as a candidate noise point;
a third module, configured to obtain a connected region of the candidate noise point with the candidate noise point as a center, where the candidate noise point and the connected region thereof form a candidate noise block;
a fourth module, configured to filter the candidate noise block to obtain a real noise point;
and the fifth module is used for processing the depth data of the real noise point in the current frame depth image so as to remove the real noise point from the current frame depth image.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described method.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In this embodiment, at least two frames of depth images are collected for data processing. The second frame depth image is compared with the other depth images to obtain candidate noise points where the depth changes abruptly, and candidate noise blocks are obtained through their connected regions. To further improve the noise reduction effect, the candidate noise blocks are filtered twice, and the candidate noise blocks appearing in the differences of the multi-frame depth data are compared comprehensively to produce the real noise point information. The depth data at the positions of the real noise points in the second frame depth image are then processed, and the other depth image data are updated at the same time. Noise points can thus be removed effectively, noise blocks appearing at the same position in at most two frames can be eliminated, and the original shape of the depth data is preserved because no smoothing or similar operation is applied. The technique can be applied to depth image noise reduction in multi-frame or dynamic environments and has broad application prospects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart illustrating a first depth image denoising method according to an embodiment of the present invention;
fig. 2 is a second flowchart illustrating a first depth image denoising method according to an embodiment of the present invention;
fig. 3 is a first flowchart illustrating a second depth image denoising method according to an embodiment of the present invention;
fig. 4 is a second flowchart illustrating the second depth image denoising method according to an embodiment of the present invention;
FIG. 5 is a first schematic diagram illustrating candidate noise blocks in a depth image denoising method according to an embodiment of the present invention;
FIG. 6 is a second schematic diagram of a candidate noise block in the depth image denoising method according to the embodiment of the present invention;
FIG. 7 is a third schematic diagram of a candidate noise block in the depth image denoising method according to the embodiment of the present invention;
fig. 8 shows schematic diagrams before and after denoising in the depth image denoising method provided by the embodiment of the present invention, where (a) is the depth image before denoising and (b) is the depth image after denoising;
FIG. 9 is a first schematic diagram of a depth image noise reduction apparatus according to an embodiment of the present invention;
FIG. 10 is a second schematic diagram of a depth image noise reduction apparatus according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 shows a flowchart of an implementation of a first depth image denoising method according to an embodiment of the present invention. The method may be executed by a structured light image acquisition device, which may be arranged in a mobile terminal and implemented in software, in hardware, or in a combination of both. As shown in fig. 1, the depth image denoising method may include the following steps:
step S11: and acquiring three frames of depth images, and determining one frame of depth image as a current frame of depth image.
When a depth image is generated, the acquired images are matched to produce a disparity, and the depth of each corresponding point is calculated from the disparity to obtain the depth image. In a complex surrounding environment, however, matching errors easily occur and produce abnormal values, i.e., noise. Extensive experiments show that such noise has the following characteristics:
(1) noise is blocky (a large noise block can reach hundreds of pixels);
(2) noise generally does not persist for more than 2 frames of depth images; some block noise may appear at the same position in 2 consecutive frames;
(3) noise only appears as spurious non-zero values, that is, the depth value at a position should be 0 but the value at that position in the depth image is not 0;
(4) noise is random: the depth value is not fixed, and its magnitude, frequency of occurrence, and position are all random;
(5) noise is generally relatively isolated, and the depth values of the pixels surrounding a noise point are generally 0.
Based on the above characteristics of the noise point, the present embodiment obtains three frames of depth images, and records them as a first depth image, a second depth image, and a third depth image in sequence, and records the second depth image as a current frame depth image. When noise occurs in any one frame of depth image, the noise generally does not occur continuously in three frames of depth images. The depth image may be acquired by any suitable technique, such as time-of-flight (TOF), structured light, and stereo vision, among others.
Step S12: compare the current frame depth image with the adjacent frame depth images to obtain the differences between the depth data of adjacent frames, and mark the region corresponding to each difference as a candidate noise point.
One way of making the comparison is as follows: compare the depth information of corresponding pixels in the first depth image and the second depth image and record the differing part as diff1; compare the depth information of corresponding pixels in the second depth image and the third depth image and record the differing part as diff2. The differing part is where the depth information of the two frames is inconsistent, which means that the depth information of the corresponding pixel in the first, second, or third depth image has changed abruptly; such a change may occur because the depth image contains noise, so the pixels contained in diff1 and diff2 can be regarded as candidate noise points. Since noise generally does not persist for more than 2 frames of depth images, there are two cases: noise that appears in only one frame, and noise that appears in two consecutive frames; in either case the noise information is contained in diff1 or diff2.
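As a concrete illustration of this comparison step, the following is a minimal sketch in Python/NumPy, assuming the depth frames are same-sized arrays in which 0 means "no depth"; the function name candidate_noise_mask and the optional tolerance tol are illustrative assumptions, not part of the patented method.

```python
import numpy as np

def candidate_noise_mask(depth_a: np.ndarray, depth_b: np.ndarray,
                         tol: float = 0.0) -> np.ndarray:
    """Boolean mask of pixels whose depth differs between two adjacent frames."""
    if depth_a.shape != depth_b.shape:
        raise ValueError("depth frames must have the same shape")
    # Pixels whose depth changes abruptly between the two frames are
    # recorded as candidate noise points.
    return np.abs(depth_a.astype(np.float32) - depth_b.astype(np.float32)) > tol

# diff1 = candidate_noise_mask(frame1, frame2)   # first vs. second frame
# diff2 = candidate_noise_mask(frame2, frame3)   # second vs. third frame
```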
Step S13: take each candidate noise point as a center, acquire its connected region, and form a candidate noise block from the candidate noise point and its connected region.
In this embodiment, the connected region may be searched over the pixels neighboring the candidate noise pixel. Referring to fig. 5, for example, when constructing a candidate noise block in diff1, a candidate noise point in diff1 is first selected as the center pixel; its neighborhood contains the 8 pixels adjacent to the center, so the candidate noise block consists of the 1 candidate noise pixel at the center and the 8 neighboring pixels surrounding it. Referring to fig. 6, if the candidate noise point lies on an edge of the depth image, its neighborhood contains only 5 adjacent pixels, so the candidate noise block consists of the center pixel and 5 neighbors. Referring to fig. 7, if the candidate noise point lies at a corner of the depth image (the intersection of two edges), its neighborhood contains only 3 adjacent pixels, so the candidate noise block consists of the center pixel and 3 neighbors. Candidate noise blocks for candidate noise points in diff2 are constructed in the same way and are not described again here. It should be understood that the connected region may be searched in other ways and is not limited to the cases above.
When constructing candidate noise blocks, note that diff1 may contain multiple candidate noise points, which may be adjacent to one another or far apart. When candidate noise points are adjacent, their blocks may share neighboring pixels, or one candidate noise point may itself be a neighbor of another; in that case several candidate noise blocks connect with each other to form one larger candidate noise block. When the candidate noise points are far apart, the constructed candidate noise blocks are independent.
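A sketch of this block construction is given below, again in Python/NumPy and under the same assumptions; the function name candidate_noise_blocks is hypothetical, diff_mask is a boolean mask such as diff1 or diff2 above, and the 8-neighborhood (reduced automatically to 5 or 3 pixels at edges and corners by the bounds check) follows the description of figs. 5 to 7.

```python
import numpy as np

def candidate_noise_blocks(diff_mask: np.ndarray) -> list:
    """Group candidate noise points and their neighborhoods into candidate noise blocks."""
    h, w = diff_mask.shape
    visited = np.zeros_like(diff_mask, dtype=bool)
    blocks = []
    for y, x in zip(*np.nonzero(diff_mask)):
        if visited[y, x]:
            continue
        stack, block = [(y, x)], set()
        while stack:                         # flood fill over touching candidates
            cy, cx = stack.pop()
            if visited[cy, cx]:
                continue
            visited[cy, cx] = True
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        block.add((ny, nx))            # neighbor joins the block
                        if diff_mask[ny, nx] and not visited[ny, nx]:
                            stack.append((ny, nx))     # adjacent candidate: blocks merge
        blocks.append(sorted(block))
    return blocks
```

Blocks whose candidate noise points touch are merged into one larger block by the flood fill, while far-apart candidates yield independent blocks, matching the behaviour described above.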
After the candidate noise block is constructed, the candidate noise block needs to be filtered so as to eliminate candidate noise points which are not noise, so that a real noise point can be obtained.
Step S14: filter the candidate noise blocks to obtain the real noise points. Referring to fig. 2, the candidate noise blocks may be filtered as follows:
step S141: it is determined whether the number of pixels in the candidate noise block is within a first threshold range.
The first threshold range is a range of pixel counts: the candidate noise blocks are first filtered by judging whether the number of pixels in each candidate noise block falls within a preset range. When the number of pixels in a candidate noise block is not within the first threshold range, the candidate noise block is not a real noise block and does not need noise reduction, so the following step is performed:
Step S142: filter out the candidate noise block.
When the number of pixels in the candidate noise block is within the first threshold range, the candidate noise block is provisionally regarded as a noise block, and before noise reduction a second filtering step is required:
Step S143: judge whether the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is lower than a second threshold.
At this point the depth values of the pixels neighboring the candidate noise point in the candidate noise block are examined to determine which neighbors have depth values and which have a depth value of 0. The more pixels that have depth values, the fewer have a depth value of 0. One limiting case is that the depth values of all neighbors of the candidate noise point are 0: only the candidate noise point itself has a non-zero depth value (the proportion of pixels with depth values in the connected region is then at its lowest). The depth value of the candidate noise point can then be regarded as an abrupt change caused by noise, so the candidate noise point is a real noise point. The other limiting case is that none of the neighbors has a depth value of 0: all pixels in the candidate noise block have non-zero depth values (the proportion is then at its highest), so the depth value of the candidate noise point is not an abrupt change, the candidate noise point is not a real noise point, and no noise reduction is needed.
In practice these two limiting cases are not the only situations, so a second threshold may be preset, and the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is compared with this second threshold. If the proportion is not lower than the second threshold, the candidate noise point is not a real noise point, and the following step is performed:
Step S144: filter out the candidate noise point without performing noise reduction on it.
If the proportion of pixels having depth values is lower than the second threshold, the candidate noise point is a real noise point, and then:
Step S145: determine the candidate noise point to be a real noise point.
Step S15: process the depth data of the real noise points in the current frame depth image so that the real noise points are removed from the current frame depth image.
The real noise may exist in diff1, diff2, or both diff1 and diff2, so when performing noise reduction processing, the real noise data in diff1 and diff2 need to be compared, so as to find the position corresponding to the real noise in the depth image.
When real noise is present only in diff1, there are two cases:
the real noise exists in the first depth image but not in the second or third depth image; in this case the real noise only needs to be located in the first depth image and its depth value set to zero;
the real noise does not exist in the first depth image but exists in the second and third depth images; in this case the real noise only needs to be located in the second and third depth images and its depth value set to zero.
When real noise is present only in diff2, there are two cases:
the real noise exists in the third depth image but not in the first or second depth image; in this case the real noise only needs to be located in the third depth image and its depth value set to zero;
the real noise does not exist in the third depth image but exists in the first and second depth images; in this case the real noise only needs to be located in the first and second depth images and its depth value set to zero.
When real noise is present in both diff1 and diff2, there are two cases:
the real noise exists in the first and third depth images but not in the second depth image; in this case the real noise only needs to be located in the first and third depth images and its depth value set to zero;
the real noise exists in the second depth image but not in the first or third depth image; in this case the real noise only needs to be located in the second depth image and its depth value set to zero.
After the depth data of the real noise points have been set to zero, the current frame depth image (i.e., the second depth image) is output, and the depth data of the first and third depth images are updated and stored at the same time.
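A sketch of this removal step under the same assumptions is shown below. Rather than enumerating the six cases one by one, the snippet zeroes a real noise position in whichever of the three buffered frames holds a non-zero depth value there; this relies on noise characteristic (3) above (the true depth at a noise position is 0) and is an implementation choice, not a quotation of the patent.

```python
def remove_real_noise(frame1, frame2, frame3, real_noise_points):
    """Zero the depth of real noise points; frame2 is the current frame."""
    for (y, x) in set(real_noise_points):        # union of points found in diff1 and diff2
        for frame in (frame1, frame2, frame3):
            if frame[y, x] != 0:
                frame[y, x] = 0                  # set the noise depth value to zero
    # The current frame is output; frame1 and frame3 are kept as the updated buffer.
    return frame2
```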
It should be understood that, because there may be multiple candidate noise points and candidate noise blocks to process, the above denoising steps may be performed in a loop until all candidate noise points have been processed, yielding the denoised depth image (see fig. 8).
The depth image noise reduction method provided by the embodiment of the invention has the beneficial effects that:
at present, when the depth image is subjected to noise reduction, the technical scheme mainly adopted is to perform filtering on a single-frame depth image or perform noise reduction by adopting a noise reduction algorithm, however, the method can only process some salt and pepper noises, but cannot process noise blocks. Some noise reduction algorithms may even smooth the depth data, modifying the original depth data, thereby affecting the depth quality. Moreover, because the denoising processing time in the denoising method is too long, the denoising method is not suitable for denoising under a multi-frame or dynamic environment.
In this embodiment, three frames of depth images are collected for data processing. The second frame depth image is compared with the other two frames to obtain candidate noise points where the depth changes abruptly, and candidate noise blocks are obtained through their connected regions. To further improve the noise reduction effect, the candidate noise blocks are filtered twice, and the candidate noise blocks appearing in the differences of the three frames of depth data are compared comprehensively to produce the real noise point information. The depth data at the positions of the real noise points in the second frame depth image are then processed, and the depth data of the other two frames are updated at the same time. Noise points can thus be removed effectively, noise blocks appearing at the same position in at most 2 frames can be eliminated, and the original shape of the depth data is preserved because no smoothing or similar operation is applied. The technique can be applied to depth image noise reduction in multi-frame or dynamic environments and has broad application prospects.
Fig. 3 shows a flowchart of an implementation of a second depth image denoising method according to an embodiment of the present invention. The method may be executed by a structured light image acquisition device, which may be arranged in a mobile terminal and implemented in software, in hardware, or in a combination of both. As shown in fig. 3, this embodiment addresses the case in which noise does not appear in two consecutive frames (only one frame of depth image is noisy), and the depth image denoising method may include the following steps:
step S21: acquiring two frames of depth images, and determining one frame of depth image as a current frame of depth image.
Based on the noise characteristics described above, this embodiment obtains two frames of depth images, sequentially records the two frames of depth images as a first depth image and a second depth image, and records the second depth image as a current frame of depth image. When noise appears in any one frame of depth image, the noise generally does not appear in two frames of depth images continuously. The depth image may be acquired by any suitable technique, such as time-of-flight (TOF), structured light, and stereo vision, among others.
Step S22: compare the current frame depth image with the adjacent frame depth image to obtain the difference between the depth data of the two adjacent frames, and record the region corresponding to the difference as a candidate noise point.
One way of making the comparison is to compare the depth information of corresponding pixels in the first depth image and the second depth image and to record the differing part as diff. The differing part is where the depth information of the two frames is inconsistent, which means that the depth information of the corresponding pixel in the first or second depth image has changed abruptly; such a change may occur because the depth image contains noise, so the pixels contained in diff can be marked as candidate noise points.
Step S23: take each candidate noise point as a center, acquire its connected region, and form a candidate noise block from the candidate noise point and its connected region.
In this embodiment, the connected region may be searched over the pixels neighboring the candidate noise pixel. Referring to fig. 5, for example, when constructing a candidate noise block in diff, a candidate noise point in diff is first selected as the center pixel; its neighborhood contains the 8 pixels adjacent to the center, so the candidate noise block consists of the 1 candidate noise pixel at the center and the 8 neighboring pixels surrounding it. Referring to fig. 6, if the candidate noise point lies on an edge of the depth image, its neighborhood contains only 5 adjacent pixels, so the candidate noise block consists of the center pixel and 5 neighbors. Referring to fig. 7, if the candidate noise point lies at a corner of the depth image (the intersection of two edges), its neighborhood contains only 3 adjacent pixels, so the candidate noise block consists of the center pixel and 3 neighbors. It should be understood that the connected region may be searched in other ways and is not limited to the cases above.
When constructing candidate noise blocks, note that diff may contain multiple candidate noise points, which may be adjacent to one another or far apart. When candidate noise points are adjacent, their blocks may share neighboring pixels, or one candidate noise point may itself be a neighbor of another; in that case several candidate noise blocks connect with each other to form one larger candidate noise block. When the candidate noise points are far apart, the constructed candidate noise blocks are independent.
After the candidate noise block is constructed, the candidate noise block needs to be filtered so as to eliminate candidate noise points which are not noise, so that a real noise point can be obtained.
Step S24: filter the candidate noise blocks to obtain the real noise points. Referring to fig. 4, the candidate noise blocks may be filtered as follows:
step S241: it is determined whether the number of pixels in the candidate noise block is within a first threshold range.
The first threshold range is a range of pixel counts: the candidate noise blocks are first filtered by judging whether the number of pixels in each candidate noise block falls within a preset range. When the number of pixels in a candidate noise block is not within the first threshold range, the candidate noise block is not a real noise block and does not need noise reduction, so the following step is performed:
Step S242: filter out the candidate noise block.
When the number of pixels in the candidate noise block is within the first threshold range, the candidate noise block is provisionally regarded as a noise block, and before noise reduction a second filtering step is required:
Step S243: judge whether the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is lower than a second threshold.
At this point the depth values of the pixels neighboring the candidate noise point in the candidate noise block are examined to determine which neighbors have depth values and which have a depth value of 0. The more pixels that have depth values, the fewer have a depth value of 0. One limiting case is that the depth values of all neighbors of the candidate noise point are 0: only the candidate noise point itself has a non-zero depth value (the proportion of pixels with depth values in the connected region is then at its lowest). The depth value of the candidate noise point can then be regarded as an abrupt change caused by noise, so the candidate noise point is a real noise point. The other limiting case is that none of the neighbors has a depth value of 0: all pixels in the candidate noise block have non-zero depth values (the proportion is then at its highest), so the depth value of the candidate noise point is not an abrupt change, the candidate noise point is not a real noise point, and no noise reduction is needed.
In practice these two limiting cases are not the only situations, so a second threshold may be preset, and the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is compared with this second threshold. If the proportion is not lower than the second threshold, the candidate noise point is not a real noise point, and the following step is performed:
Step S244: filter out the candidate noise point without performing noise reduction on it.
If the proportion of pixels having depth values is lower than the second threshold, the candidate noise point is a real noise point, and then:
Step S245: determine the candidate noise point to be a real noise point.
Step S25: process the depth data of the real noise points in the current frame depth image so that the real noise points are removed from the current frame depth image.
The real noise may exist in the first depth image or the second depth image, and therefore, when performing the noise reduction processing, a position corresponding to the real noise needs to be found in the depth image.
After the depth data of the real noise points have been set to zero, the current frame depth image (i.e., the second depth image) is output, and the depth data of the first depth image are updated and stored at the same time.
It should be understood that, because there may be multiple candidate noise points and candidate noise blocks to process, the above denoising steps may be performed in a loop until all candidate noise points have been processed, yielding the denoised depth image.
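Tying the sketches together for this two-frame variant, a hypothetical driver could look like the following; it reuses the candidate_noise_mask, candidate_noise_blocks, and filter_candidate_block functions from the earlier sketches, and running the isolation test against each of the two frames is an assumption, since the description leaves the reference frame implicit.

```python
import numpy as np

def denoise_two_frames(frame1: np.ndarray, frame2: np.ndarray) -> np.ndarray:
    """Two-frame depth denoising sketch; frame2 is the current frame."""
    diff = candidate_noise_mask(frame1, frame2)                    # step S22
    real_noise = []
    for block in candidate_noise_blocks(diff):                     # step S23
        for frame in (frame1, frame2):                             # step S24
            real_noise.extend(filter_candidate_block(block, frame))
    for (y, x) in set(real_noise):                                 # step S25
        for frame in (frame1, frame2):
            if frame[y, x] != 0:
                frame[y, x] = 0
    return frame2   # denoised current frame; frame1 is updated and stored as well
```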
The depth image noise reduction method provided by the embodiment of the invention has the beneficial effects that:
In this embodiment, two frames of depth images are collected for data processing, and the two frames are compared to obtain candidate noise points where the depth changes abruptly; candidate noise blocks are obtained through their connected regions. To further improve the noise reduction effect, the candidate noise blocks are filtered twice to produce the real noise point information. The depth data at the positions of the real noise points in the second frame depth image are then processed, and the data of the first depth image are updated at the same time. Noise points can thus be removed effectively, noise blocks that appear at a given position in a single frame can be eliminated, and the original shape of the depth data is preserved. The technique can be applied to depth image noise reduction in multi-frame or dynamic environments and has broad application prospects.
It should be understood that, in other embodiments, four or more frames of depth images may be acquired. However, when noise reduction is performed on four or more frames, an obstacle present in the depth images may be mistaken for noise, and the amount of computation also increases greatly.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 9, an embodiment of the present invention further provides a depth image denoising apparatus 30, which can be used for implementing the depth image denoising method described above, including a first module 31, a second module 32, a third module 33, a fourth module 34, and a fifth module 35, where the first module 31 is configured to obtain at least two frames of depth images, and determine one of the frames of depth images as a current frame of depth image; the second module 32 is configured to compare the current frame depth image with an adjacent frame depth image, obtain a difference between two adjacent frame depth image data, and record the difference as a candidate noise point; the third module 33 is configured to obtain a connected region of the candidate noise point by taking the candidate noise point as a center, and the candidate noise point and the connected region thereof form a candidate noise block; the fourth module 34 is configured to filter the candidate noise block to obtain a real noise point; the fifth module 35 is configured to process the depth data of the real noise in the current frame depth image, so that the current frame depth image removes the real noise.
The first module 31 includes a camera assembly, which may be a camera separate from the depth image noise reduction device or may be integrated into it.
In one embodiment, the number of image frames acquired by the first module 31 is two. When comparing, the second module 32 compares the depth information of corresponding pixels in the first depth image and the second depth image and records the differing part as diff. The differing part means that the depth information of the corresponding pixel in the first or second depth image has changed abruptly; such a change may occur because the depth image contains noise, so the pixels contained in diff can be marked as candidate noise points.
In one embodiment, the number of image frames acquired by the first module 31 is three. When comparing, the second module 32 compares the depth information of corresponding pixels in the first depth image and the second depth image and records the differing part as diff1, and compares the depth information of corresponding pixels in the second depth image and the third depth image and records the differing part as diff2. The differing part means that the depth information of the corresponding pixel in the first, second, or third depth image has changed abruptly; such a change may occur because the depth image contains noise, so the pixels contained in diff1 and diff2 can be regarded as candidate noise points.
When searching for a connected region, the third module 33 may search the pixels neighboring the candidate noise pixel. For example, when constructing a candidate noise block in diff, a candidate noise point in diff is first selected as the center pixel; its neighborhood contains the 8 pixels adjacent to the center, so the candidate noise block consists of the 1 candidate noise pixel at the center and the 8 neighboring pixels surrounding it. Of course, the neighborhood may take other forms and is not limited to the above.
Referring to fig. 10, the fourth module 34 includes a first filtering unit 341 and a second filtering unit 342, in which the first filtering unit 341 is configured to determine whether the number of pixels in the candidate noise block is within a first threshold range, filter the candidate noise block if the number of pixels in the candidate noise block is not within the first threshold range, and transmit the image data to the second filtering unit 342 if the number of pixels in the candidate noise block is within the first threshold range; the second filtering unit is configured to determine whether a ratio of the number of pixels having depth values in a connected region of the candidate noise point in the candidate noise block is lower than a second threshold, filter the candidate noise point if the ratio is not lower than the second threshold, and if the ratio is lower than the second threshold, indicate that the candidate noise point is a real noise point, and continue to perform noise reduction processing through the fifth module 35.
The fifth module 35 sets the depth value of the pixel corresponding to the real noise point in the depth image to zero, outputs the depth image of the current frame, and updates and stores the depth data of other depth images.
In one embodiment, the depth image noise reduction device further includes an output interface for communicating with an external device. For example, the output interface may be a wired connection such as USB, FireWire, or Ethernet. In other embodiments, the depth image noise reduction device may communicate with an external device via a wireless connection, such as Bluetooth or a WLAN network. For example, the denoised depth image may be output through the output interface or transmitted to an external device over the wireless connection.
Fig. 11 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 11, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as a depth image noise reduction program, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various depth image denoising method embodiments described above, such as the steps S11-S15 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 31 to 35 shown in fig. 9.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into a synchronization module, a summarization module, an acquisition module, and a return module (a module in a virtual device), each for implementing the above-described functions.
The terminal device 6 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 60, a memory 61. Those skilled in the art will appreciate that fig. 11 is merely an example of a terminal device 6 and does not constitute a limitation of terminal device 6 and may include more or less components than those shown, or some components in combination, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer programs and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A method of depth image noise reduction, comprising:
acquiring at least two frames of depth images, and determining one frame of depth image as a current frame of depth image;
comparing the current frame depth image with the adjacent frame depth image to obtain the difference of the two adjacent frame depth image data, and recording the region corresponding to the difference as a candidate noise point;
taking the candidate noise point as a center, acquiring a connected region of the candidate noise point, and forming a candidate noise block by the candidate noise point and the connected region thereof;
filtering the candidate noise block to obtain a real noise point; specifically, judging whether the number of pixels in the candidate noise block is within a first threshold range; if the number of pixels in the candidate noise block is within the first threshold range, judging whether the proportion of pixels having depth values in the connected region of the candidate noise point in the candidate noise block is lower than a second threshold; and if the proportion of pixels having depth values in the connected region of the candidate noise point in the candidate noise block is lower than the second threshold, determining the candidate noise point to be a real noise point;
and processing the depth data of the real noise in the current frame depth image so as to remove the real noise from the current frame depth image.
2. The method of depth image noise reduction according to claim 1, wherein the connected region comprises a plurality of neighboring pixels of the candidate noise point.
3. The method for depth image noise reduction according to claim 1, wherein the step of judging whether the number of pixels in the candidate noise block is within a first threshold range further comprises:
if the number of pixels in the candidate noise block is not within the first threshold range, filtering out the candidate noise block;
and the step of judging whether the proportion of pixels having depth values in the connected region of the candidate noise point in the candidate noise block is lower than a second threshold further comprises:
if the proportion of pixels having depth values in the connected region of the candidate noise point in the candidate noise block is not lower than the second threshold, filtering out the candidate noise point.
4. The method of reducing noise in a depth image according to claim 1, wherein the step of processing the depth data of the true noise in the depth image of the current frame so that the depth image of the current frame removes the true noise comprises:
acquiring the position of a real noise point in the current frame depth image;
setting the depth value of the real noise point in the current frame depth image to zero;
and outputting the current frame depth image, and updating and storing the depth data of other depth images.
5. The method for reducing the noise of the depth image according to any one of claims 1 to 4, wherein the number of frames of the depth image is two;
or the number of frames of the depth image is three.
6. A depth image noise reduction device, comprising:
the device comprises a first module, a second module and a third module, wherein the first module is used for acquiring at least two frames of depth images and determining one frame of depth image as a current frame of depth image;
the second module is used for comparing the current frame depth image with the adjacent frame depth image to obtain the difference of the two adjacent frame depth image data, and recording the region corresponding to the difference as a candidate noise point;
a third module, configured to obtain a connected region of the candidate noise point with the candidate noise point as a center, where the candidate noise point and the connected region thereof form a candidate noise block;
a fourth module, configured to filter the candidate noise block to obtain a real noise point;
a fifth module, configured to process depth data of a real noise point in the current frame depth image, so that the real noise point is removed from the current frame depth image;
the fourth module includes:
the first filtering unit is used for judging whether the number of pixels in the candidate noise block is within a first threshold range or not, and filtering the candidate noise block if the number of pixels in the candidate noise block is not within the first threshold range;
and the second filtering unit is configured to judge whether the proportion of pixels having depth values in the connected region of the candidate noise point in the candidate noise block is lower than a second threshold, and to filter out the candidate noise point if the proportion is not lower than the second threshold.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201910027697.4A 2019-01-11 2019-01-11 Depth image noise reduction method and device Active CN109859124B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910027697.4A CN109859124B (en) 2019-01-11 2019-01-11 Depth image noise reduction method and device


Publications (2)

Publication Number Publication Date
CN109859124A (en) 2019-06-07
CN109859124B (en) 2020-12-18

Family

ID=66894580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910027697.4A Active CN109859124B (en) 2019-01-11 2019-01-11 Depth image noise reduction method and device

Country Status (1)

Country Link
CN (1) CN109859124B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691228A (en) * 2019-10-17 2020-01-14 北京迈格威科技有限公司 Three-dimensional transformation-based depth image noise marking method and device and storage medium
CN112712476B (en) * 2020-12-17 2023-06-02 豪威科技(武汉)有限公司 Denoising method and device for TOF ranging and TOF camera
CN114419129B (en) * 2021-12-08 2024-10-18 宁波市临床病理诊断中心 Parallax map impurity point removing method, memory and shooting device
CN114360453B (en) * 2021-12-09 2023-04-07 青岛信芯微电子科技股份有限公司 Noise removing method and device, display equipment, chip and medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014240A (en) * 2010-12-01 2011-04-13 深圳市蓝韵实业有限公司 Real-time medical video image denoising method
CN102509265A (en) * 2011-11-02 2012-06-20 天津理工大学 Digital image denoising method based on gray value difference and local energy
CN102629970A (en) * 2012-03-31 2012-08-08 广东威创视讯科技股份有限公司 Denoising method and system for video images
US8704951B1 (en) * 2013-04-05 2014-04-22 Altera Corporation Efficient 2D adaptive noise thresholding for video processing
CN104253929A (en) * 2013-06-28 2014-12-31 广州华多网络科技有限公司 Video denoising method and video denoising system
CN104486618A (en) * 2014-12-30 2015-04-01 浙江宇视科技有限公司 Video image noise detection method and device
CN104796581A (en) * 2015-04-16 2015-07-22 中国科学院自动化研究所 Video denoising system based on noise distribution feature detection
CN105243649A (en) * 2015-11-09 2016-01-13 天津大学 Image denoising method based on secondary noise point detection
CN105472204A (en) * 2014-09-05 2016-04-06 南京理工大学 Inter-frame noise reduction method based on motion detection
CN106254864A (en) * 2016-09-30 2016-12-21 杭州电子科技大学 Snowflake noise and noise point detection method for surveillance video
CN106303157A (en) * 2016-08-31 2017-01-04 广州市百果园网络科技有限公司 Video noise reduction processing method and video noise reduction processing device
CN107403413A (en) * 2017-04-14 2017-11-28 杭州当虹科技有限公司 Video multi-frame denoising and enhancement method
CN107786780A (en) * 2017-11-03 2018-03-09 深圳Tcl新技术有限公司 Video image noise reducing method, device and computer-readable recording medium
CN107909554A (en) * 2017-11-16 2018-04-13 深圳市共进电子股份有限公司 Image denoising method, device, terminal device and medium
CN108174057A (en) * 2018-01-10 2018-06-15 武汉烛照科技有限公司 Method and device for fast image noise reduction using inter-frame differences of video images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8218634B2 (en) * 2005-01-13 2012-07-10 Ntt Docomo, Inc. Nonlinear, in-the-loop, denoising filter for quantization noise removal for hybrid video compression
US8330990B2 (en) * 2009-01-12 2012-12-11 Xerox Corporation Method and system for modifying a multi-bit rasterized digital image to reduce registration artifacts
CN103024248B (en) * 2013-01-05 2016-01-06 上海富瀚微电子股份有限公司 The video image noise reducing method of Motion Adaptive and device thereof

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Depth image denoising and key point extraction for manipulation plane detection";shuang Ma等;《11th world congresson intelligent control and Automation》;20140630;第3315-3320页 *
"基于嘴唇色度Fisher分类的驾驶疲劳视觉检测";孙伟等;《南京信息工程大学学报:自然科学版》;20111231;第3卷(第4期);第324-330页 *
"基于灰度共生矩阵信息融合的视频噪声识别算法";黄小童等;《计算机应用与软件》;20150228;第32卷(第2期);第143页摘要和图2 *
"基于邻域信息的自适应中值滤波算法";张洁玉等;《计算机应用》;20140710;第34卷(第7期);第2010页摘要 *

Also Published As

Publication number Publication date
CN109859124A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
CN109859124B (en) Depth image noise reduction method and device
CN110473242B (en) Texture feature extraction method, texture feature extraction device and terminal equipment
CN109615596B (en) Depth image denoising method and device and electronic equipment
CN111210429B (en) Point cloud data partitioning method and device and obstacle detection method and device
CN108076338B (en) Image visual processing method, device and equipment
JP3242529B2 (en) Stereo image matching method and stereo image parallax measurement method
CN108986197B (en) 3D skeleton line construction method and device
CN111553946B (en) Method and device for removing ground point cloud and method and device for detecting obstacle
CN111402170A (en) Image enhancement method, device, terminal and computer readable storage medium
CN111144337B (en) Fire detection method and device and terminal equipment
CN104363391A (en) Image defective pixel compensation method and system and photographing device
CN109214996B (en) Image processing method and device
CN111161299B (en) Image segmentation method, storage medium and electronic device
WO2021102704A1 (en) Image processing method and apparatus
CN112150371A (en) Image noise reduction method, device, equipment and storage medium
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112435182B (en) Image noise reduction method and device
CN116563172B (en) VR globalization online education interaction optimization enhancement method and device
CN107079121B (en) Storage unit and medium for detecting defective pixel, photoelectric system, device and method
KR101920159B1 (en) Stereo Matching Method and Device using Support point interpolation
Huang et al. Fast color-guided depth denoising for RGB-D images by graph filtering
CN115717897A (en) Point cloud map dynamic object filtering method, device, medium and robot
CN110516680B (en) Image processing method and device
CN115018730A (en) Method, device, equipment and medium for removing image stripe noise
CN105718851A (en) Fingerprint image filtering method and apparatus applied to fingerprint sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Obi Technology Building 2001, No. 88, Gaoxin North 1st Road, Songpingshan Community, Xili Street, Nanshan District, Shenzhen, Guangdong Province, China

Patentee after: Obi Zhongguang Technology Group Co.,Ltd.

Country or region after: China

Address before: A808, Zhongdi Building, China University of Geosciences Industry-University-Research Base, No. 8 Yuexing 3rd Road, Nanshan District, Shenzhen, Guangdong Province

Patentee before: SHENZHEN ORBBEC Co.,Ltd.

Country or region before: China