CN115442590A - Performance analysis method and device, electronic equipment and computer readable storage medium - Google Patents
- Publication number: CN115442590A (application CN202210929890.9A)
- Authority
- CN
- China
- Prior art keywords
- light
- protective cover
- receiving camera
- stray light
- performance analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/90—Identifying an image sensor based on its output data
Abstract
The invention discloses a performance analysis method, a performance analysis device, electronic equipment and a computer-readable storage medium. The performance analysis method comprises the following steps: acquiring the optical signal power S of the reflected light received by a light receiving camera, the reflected light being light emitted toward the object under test by a dot matrix projector and reflected back to the light receiving camera by the object; calculating the stray light power N at the light receiving camera based on the light intensity I_G of the stray light at the G point position, the stray light at the G point being light emitted toward the protective cover by the dot matrix projector and totally reflected out of the protective cover; calculating the signal-to-noise ratio SNR of the depth imaging device from the optical signal power S and the stray light power N; and analyzing the performance of the depth imaging device based on the SNR. By analyzing the performance of a structural model of the depth imaging device with this method, the most suitable design parameters can be found, eliminating or reducing veiling glare.
Description
Technical Field
The disclosure belongs to the technical field of depth imaging, and particularly relates to a performance analysis method and device, electronic equipment and a computer-readable storage medium.
Background
With the improvement of people's living standards, indoor robots based on intelligent navigation have gradually entered daily life. The 3D perception system is the core of such a robot, realizing functions such as SLAM (simultaneous localization and mapping) and obstacle avoidance. Most 3D perception systems adopt an active, wide-angle structured-light scheme to realize spatial three-dimensional reconstruction.
A wide-angle structured-light depth imaging device comprises a wide-angle dot matrix projector, a light receiving camera and a protective cover. Because the speckle projection field angle of the dot matrix projector is large, in actual use the images collected by the light receiving camera often exhibit stray-light problems such as light leakage and glare, which degrade the quality of three-dimensional depth reconstruction and greatly limit the application of wide-angle depth imaging devices on robots.
Disclosure of Invention
The present disclosure provides a performance analysis method, a performance analysis device, an electronic apparatus and a computer-readable storage medium for analyzing the performance of a depth imaging device at design time, so as to find the most suitable design parameters, improve the yield of the depth imaging device, and mitigate the stray-light problems, such as light leakage and glare, that often appear in images collected by the light receiving camera.
The first aspect of the present disclosure provides a performance analysis method for a depth imaging device. The depth imaging device includes a substrate, a functional element and a protective cover; the functional element is disposed on the substrate; the protective cover is supported on the substrate on the side of the functional element away from the substrate; and the functional element includes a dot matrix projector and a light receiving camera arranged at an interval. The performance analysis method comprises the following steps:
acquiring the optical signal power S of the reflected light received by the light receiving camera, the reflected light being light emitted toward the object under test by the dot matrix projector and reflected back to the light receiving camera by the object;
calculating the stray light power N at the light receiving camera based on the light intensity I_G of the stray light at the G point position, the stray light at the G point being light emitted toward the protective cover by the dot matrix projector and totally reflected out of the protective cover;
calculating the signal-to-noise ratio (SNR) of the depth imaging device according to the optical signal power S and the stray light power N;
analyzing performance of the depth imaging device based on the signal-to-noise ratio (SNR).
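The four claimed steps can be sketched in Python. The body of formula (1) is not reproduced in this text, so the standard decibel power-ratio definition of the SNR is assumed here (it is consistent with the 10 dB preset ratio used later); the function names are illustrative, not from the patent.

```python
import math

def snr_db(signal_power: float, stray_power: float) -> float:
    """Assumed form of formula (1): SNR in decibels from the optical
    signal power S and the stray light power N (power ratio)."""
    return 10.0 * math.log10(signal_power / stray_power)

def analyze_performance(signal_power: float, stray_power: float,
                        threshold_db: float = 10.0):
    """Steps S204-S206 in miniature: compute the SNR and compare it
    against the preset ratio to judge the device model."""
    snr = snr_db(signal_power, stray_power)
    state = "good" if snr >= threshold_db else "bad"
    return state, snr
```

For example, a model whose camera receives 100 units of signal power against 1 unit of stray light scores 20 dB and passes the 10 dB gate.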
In an exemplary embodiment of the present disclosure, the optical signal power S, the stray light power N and the signal-to-noise ratio SNR of the depth imaging device satisfy the relation of the following formula (1), wherein:
the light intensity I_G of the stray light at the G point position, the diameter δ of the image formed at the light receiving camera by the stray light from the G point, and the stray light power N at the light receiving camera satisfy the relation of the following formula (2), wherein:
in an exemplary embodiment of the disclosure, the obtaining the optical signal power S of the reflected light received by the light receiving camera specifically includes:
acquiring the focal length f of the light receiving camera, the light divergence angle α and single-point optical power I_s of the dot matrix projector, the reflectivity R_s of the object under test, and the path length D of the reflected light, and calculating the optical signal power S based on the following formula (3), wherein:
in an exemplary embodiment of the present disclosure, the light intensity I of the parasitic light at the G-point-based position G And the diameter delta of the image formed by the stray light at the position of the G point at the light receiving camera, and before calculating the stray light power N at the light receiving camera, the performance analysis method further comprises the following steps:
acquiring a vertical distance d1 between the protective cover and an optical center of the light receiving camera;
when the vertical distance d1 between the protective cover and the optical center of the light receiving camera is determined to be smaller than a preset distance value, calculating the diameter δ of the image formed at the light receiving camera by the stray light from the G point based on the lens aperture (f-number) F, the focus distance FL, the focal length f and the vertical distance d1 of the light receiving camera, wherein:
in an exemplary embodiment of the present disclosure, the preset distance value is 10mm.
In an exemplary embodiment of the present disclosure, before the stray light power N at the light receiving camera is calculated based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed at the light receiving camera by the stray light from the G point, the performance analysis method further comprises:
obtaining the incident light intensity I_i of the stray light among the light emitted by the dot matrix projector and the attenuation coefficient u of the protective cover;
determining the reflectivity R of the protective cover, the number of internal reflections M undergone by the stray light inside the protective cover before it reaches the G point position, and the single internal-reflection path length L, and calculating the light intensity I_G of the stray light at the G point position based on the following formula (5), wherein:
I_G = I_i * R^M * exp(-u*M*L)    formula (5).
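Formula (5) translates directly into code: the intensity decays geometrically with the per-bounce reflectivity R and exponentially with the absorbed path u·M·L. A sketch using the patent's symbols as variable names:

```python
import math

def stray_light_intensity(I_i: float, R: float, M: int,
                          u: float, L: float) -> float:
    """Formula (5): I_G = I_i * R**M * exp(-u*M*L).

    I_i -- incident intensity of the stray ray entering the cover
    R   -- reflectivity of the protective cover per internal bounce
    M   -- number of internal reflections before reaching point G
    u   -- attenuation coefficient of the cover material
    L   -- path length of a single internal reflection
    """
    return I_i * (R ** M) * math.exp(-u * M * L)
```

With a lossless cover (u = 0) and R = 0.5, two bounces leave a quarter of the incident intensity; any absorption only lowers it further.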
In an exemplary embodiment of the disclosure, the determining of the reflectivity R of the protective cover, the number of internal reflections M before the stray light inside the protective cover reaches the G point position, and the single internal-reflection path length L specifically includes:
determining the refraction angle θt of the stray light entering the protective cover and the reflectivity R of the protective cover based on the incidence angle θi of the stray light among the light emitted by the dot matrix projector, the refractive index n1 of the medium between the functional element and the protective cover, and the refractive index n2 of the protective cover;
determining the single internal-reflection path length L based on the thickness d2 of the protective cover and the refraction angle θt;
and determining the number of internal reflections M before the stray light inside the protective cover reaches the G point position based on the incidence angle θi, the refraction angle θt, the thickness d2 of the protective cover, the horizontal distance HG between the optical center of the dot matrix projector and the G point position, and the vertical distance d1 between the protective cover and the optical center of the light receiving camera, the optical center of the dot matrix projector and the optical center of the light receiving camera being located on the same horizontal line.
In an exemplary embodiment of the present disclosure, the incidence angle θi of the stray light, the refractive indices n1 and n2, the refraction angle θt, the reflectivity R, the thickness d2, the horizontal distance HG, the vertical distance d1 and the number of internal reflections M satisfy the following formulas (6) to (10), wherein:
n1 * sin θi = n2 * sin θt    formula (6);
R(R0, θt) = R0 + (1 - R0) * (1 - cos θt)^5    formula (7);
HG = d1 * tan θi + M * d2 * tan θt    formula (10).
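Formulas (6), (7) and (10) chain together as below. Formulas (8) and (9) are not reproduced in this text, so the single-bounce path L = d2/cos θt used here is a geometric assumption, and formula (7) is implemented exactly as printed (a Schlick-style Fresnel approximation evaluated at the refraction angle).

```python
import math

def refraction_angle(theta_i: float, n1: float = 1.0, n2: float = 1.5) -> float:
    """Formula (6), Snell's law: n1*sin(theta_i) = n2*sin(theta_t)."""
    return math.asin(n1 * math.sin(theta_i) / n2)

def cover_reflectivity(R0: float, theta_t: float) -> float:
    """Formula (7) as printed: R = R0 + (1 - R0)*(1 - cos(theta_t))**5."""
    return R0 + (1.0 - R0) * (1.0 - math.cos(theta_t)) ** 5

def single_bounce_path(d2: float, theta_t: float) -> float:
    """Assumed single internal-reflection path: the chord length through
    a cover of thickness d2 at refraction angle theta_t."""
    return d2 / math.cos(theta_t)

def reflections_to_G(HG: float, d1: float, d2: float,
                     theta_i: float, theta_t: float) -> float:
    """Formula (10) solved for M:
    HG = d1*tan(theta_i) + M*d2*tan(theta_t).
    In practice the count would be rounded to an integer."""
    return (HG - d1 * math.tan(theta_i)) / (d2 * math.tan(theta_t))
```

Given the device geometry (d1, d2, HG) and the ray's incidence angle, these four functions yield all inputs that formula (5) needs.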
In an exemplary embodiment of the disclosure, the obtaining the attenuation coefficient u of the protective cover specifically includes:
obtaining the thickness d2 and the transmittance T of the protective cover, and calculating the attenuation coefficient u of the protective cover based on the following formula (11), wherein:
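The body of formula (11) is not reproduced in this text. Under the Beer-Lambert attenuation implied by the exp(-u·M·L) factor of formula (5), a cover of thickness d2 with transmittance T would satisfy T = exp(-u·d2), giving the sketch below; treat it as an assumption rather than the patent's exact formula.

```python
import math

def attenuation_coefficient(T: float, d2: float) -> float:
    """Assumed form of formula (11): u = -ln(T) / d2, from T = exp(-u*d2)."""
    return -math.log(T) / d2
```

For instance, a cover 1 mm thick that transmits 98% of the light would have u ≈ 0.02 per mm under this model.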
In an exemplary embodiment of the disclosure, the analyzing of the performance of the depth imaging device based on the signal-to-noise ratio SNR specifically includes:
comparing the magnitude relation between the signal-to-noise ratio SNR and a preset ratio;
when the signal-to-noise ratio SNR is smaller than the preset ratio, determining that the performance of the depth imaging device is in a bad state;
and calling out, when the performance of the depth imaging device is in a bad state, at least part of the parameter information used to calculate the optical signal power S and/or the stray light power N, so as to indicate that this parameter information should be adjusted.
In an exemplary embodiment of the present disclosure, the preset ratio is 10 dB.
The second aspect of the present disclosure provides a performance analysis apparatus, configured to analyze performance of a depth imaging apparatus, where the depth imaging apparatus includes a substrate, a functional element and a protective cover, the functional element is disposed on the substrate, the protective cover is supported on the substrate and is located on a side of the functional element away from the substrate, and the functional element includes a dot matrix projector and a light receiving camera that are arranged at intervals; wherein the performance analysis device comprises:
the device comprises an acquisition module, a detection module and a control module, wherein the acquisition module is used for acquiring the optical signal power S of reflected light rays received by a light receiving camera, and the reflected light rays are light rays which are emitted to an object to be detected by a dot matrix projector and are reflected to the light receiving camera by the object to be detected;
a first calculation module, configured to calculate the stray light power N at the light receiving camera based on the light intensity I_G of the stray light at the G point position, the stray light at the G point being light emitted toward the protective cover by the dot matrix projector and totally reflected out of the protective cover;
the second calculation module is used for calculating the signal-to-noise ratio (SNR) of the depth imaging device according to the optical signal power S and the stray light power N;
and the performance analysis module is used for analyzing the performance of the depth imaging device based on the SNR.
A third aspect of the present disclosure provides an electronic device, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the performance analysis method of any of the above.
A fourth aspect of the present disclosure provides a computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the performance analysis method of any one of the above.
The scheme disclosed by the invention has the following beneficial effects:
the performance of the depth imaging device is evaluated by adopting a signal-to-noise ratio SNR, the calculation of the signal-to-noise ratio SNR is related to the power S of an optical signal received by a light receiving camera and the power N of stray light, the power S of the optical signal is a normal signal received by the light receiving camera, the power N of the stray light refers to the power of light which is totally reflected by a protective cover and enters the light receiving camera, and the power N of the stray light is noise received by the light receiving camera.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure. It should be apparent that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived by those of ordinary skill in the art without inventive effort.
Fig. 1 shows a schematic structural diagram of a depth imaging apparatus according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram showing the geometry of the simplified structures of the depth imaging apparatus of FIG. 1;
FIG. 3 is a flow chart illustrating a performance analysis method according to an embodiment of the disclosure;
FIG. 4 is a diagram showing the relationship between the SNR and the path D of the reflected light when the number of internal reflections M of the protective cover is 4;
FIG. 5 is a diagram showing the relationship between the SNR and the path D of the reflected light when the number of internal reflections M of the protective cover is 6;
fig. 6 is a block diagram illustrating a performance analysis apparatus according to an embodiment of the present disclosure;
fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, etc. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The embodiment of the disclosure provides a depth imaging device for depth perception, mainly realized in a visual mode based on dot matrix projection. The depth imaging device of this embodiment may use active monocular structured light or active binocular structured light; as shown in fig. 1, its core structure includes a wide-angle dot matrix projector 101, a light receiving camera 102 and a protective cover 104.
It should be appreciated that where the depth imaging device uses active binocular structured light, it may include two light receiving cameras 102 equally spaced on opposite sides of the dot matrix projector 101, as compared with the active monocular structured-light scheme.
For example, the dot matrix projector 101 may be an infrared speckle projector composed of an infrared VCSEL (vertical-cavity surface-emitting laser), a collimating lens and a diffractive optical element. Its function is to project patterns arranged in a specific manner into the space, adding features and uniqueness to the object under test. In the design of the product field angle, a wide-angle design such as 120° × 100° is adopted to ensure that a robot can avoid obstacles at short range. The light receiving camera 102 may be an infrared receiving camera composed of a CMOS photosensitive chip, an imaging lens and an infrared narrow-band filter; its resolution may be on the order of a megapixel, for example 1280 × 1080, and it is used to collect the infrared speckle pattern of the space under measurement.
The depth imaging device of this embodiment calculates the parallax offsets of corresponding (same-name) points from the infrared speckle pattern of the space under measurement acquired by the light receiving camera 102, and finally performs depth calculation and depth compensation to generate high-resolution, high-precision image depth information.
The dot matrix projector 101 and the light receiving camera 102 may be regarded together as a functional element mounted on the same side of a common substrate 103; they may be mounted by screw fixing or by dispensing adhesive. The center distance between the dot matrix projector 101 and the light receiving camera 102 is called the baseline distance, and it is chosen according to the application: a short baseline is usually used at short range and a long baseline at long range.
The protective cover 104 is supported on the substrate 103 on the side of the functional element away from the substrate 103, that is, on the side of the dot matrix projector 101 and the light receiving camera 102 away from the substrate 103. It should be understood that the protective cover 104 can be supported on the substrate 103 by a supporting housing surrounding the functional element, the protective cover 104 being located between the functional element and the object under test.
In this embodiment, the protective cover 104 mainly provides dust-proofing, water-proofing and the like, to keep the measurement environment of the depth imaging device stable. Further, the surface of the protective cover 104 may be coated with an AR antireflection film to ensure that most of the light passes through with little loss, so that high-resolution, high-precision image depth information can be generated later.
For example, the protection cover 104 of the present embodiment may be a glass structure, but is not limited thereto, as the case may be.
In this embodiment, the optical center of the dot matrix projector 101 and the optical center of the light receiving camera 102 may be located on the same horizontal line, which may be parallel to the substrate 103. Because of the protective cover 104, light at a large field angle from the dot matrix projector 101 is easily totally reflected inside the protective cover 104 in actual use, generating stray light that reaches the light receiving camera 102 and degrades the quality of the images it collects: the collected images often exhibit stray-light problems such as light leakage and glare, which seriously affect the quality of three-dimensional depth reconstruction. These factors severely limit the application of depth imaging devices in robotic obstacle avoidance. Modeling the stray-light problem of the wide-angle depth imaging device is therefore important for finding the most suitable design parameters during product design.
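Why large field angles are troublesome can be seen from the critical angle for total internal reflection at the cover/air interface, sin θc = n1/n2: rays inside the cover that strike its surface beyond θc cannot exit and bounce back toward the light receiving camera 102. The glass index n2 = 1.5 below is an illustrative assumption, not a value from the patent.

```python
import math

def critical_angle_deg(n1: float = 1.0, n2: float = 1.5) -> float:
    """Critical angle (degrees) for total internal reflection at an
    interface from the denser cover (index n2) to the surrounding
    medium (index n1), from sin(theta_c) = n1/n2."""
    return math.degrees(math.asin(n1 / n2))
```

For typical glass this evaluates to roughly 41.8°, so the steeper the incidence from a wide-angle projector, the more of the light is internally reflected rather than transmitted.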
In view of the above problems, this embodiment provides a performance analysis method that can be used to analyze the performance of the depth imaging device shown in fig. 1. For the structural features of the depth imaging device, reference may be made to the foregoing; the description is not repeated here.
Fig. 2 is a schematic diagram of the geometric relationships in a simplified model of the depth imaging device shown in fig. 1. In fig. 2, O1 is the optical center of the dot matrix projector 101, O2 is the optical center of the light receiving camera 102, n2 is the refractive index of the protective cover 104, d2 is the thickness of the protective cover 104, d1 is the vertical distance between the optical centers O1, O2 and the protective cover 104, and n1 is the refractive index of the medium (e.g., air) on both sides of the protective cover 104, where n1 < n2. The horizontal distance between O1 and O2 is the baseline. Fig. 2 traces a bundle of stray light from the optical center O1 of the dot matrix projector 101 entering the n2 medium through the n1 medium, with incidence angle θi and refraction angle θt; points A, B, C, D, E, F and G are the positions where the stray light strikes the protective cover 104, and points H and K are the projections of the optical centers O1 and O2 onto the protective cover 104.
Specifically, with reference to the contents of fig. 1 and fig. 2, the performance analysis method of the present embodiment may be specifically as shown in fig. 3, which includes step S200, step S202, step S204, and step S206, wherein:
in step S200, obtaining an optical signal power S of a reflected light received by the light receiving camera 102, where the reflected light is a light emitted to the object to be measured by the dot matrix projector 101 and reflected to the light receiving camera 102 via the object to be measured;
in step S202, calculating the stray light power N at the light receiving camera 102 based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed at the light receiving camera 102 by the stray light from the G point, the stray light at the G point being light emitted toward the protective cover 104 by the dot matrix projector 101 and totally reflected out of the protective cover 104;
in step S204, calculating a signal-to-noise ratio SNR of the depth imaging apparatus according to the optical signal power S and the parasitic light power N;
in step S206, the performance of the depth imaging device is analyzed based on the signal-to-noise ratio (SNR).
In this embodiment, the performance of the depth imaging device is evaluated using the signal-to-noise ratio SNR. The SNR calculation involves the optical signal power S received by the light receiving camera 102 and the stray light power N: S is the normal signal received by the camera, while N is the power of the light totally reflected by the protective cover 104 into the camera, i.e. the noise received by the light receiving camera 102. When designing the depth imaging device, its structural model can be analyzed with this performance analysis method and the most suitable design parameters found from the analysis results, so as to avoid or reduce the influence on the light receiving camera 102 of the stray light totally reflected by the protective cover 104. This mitigates the stray-light problems, such as light leakage and glare, that often appear in the images collected by the light receiving camera 102, and in turn improves the yield of the depth imaging device.
In step S206, analyzing the performance of the depth imaging apparatus based on the SNR may specifically include:
step S2061, comparing the signal-to-noise ratio SNR with a preset ratio;
step S2062, when the signal-to-noise ratio SNR is smaller than the preset ratio, determining that the performance of the depth imaging device is in a bad state;
step S2063, when the performance of the depth imaging apparatus is in a bad state, calling out at least part of parameter information used when calculating the optical signal power S and/or the parasitic light power N, so as to instruct to adjust the at least part of parameter information.
That is to say, when the signal-to-noise ratio SNR is smaller than the preset ratio, the stray light power N accounts for a relatively large share of the image information collected by the light receiving camera 102 and the optical signal power S for a relatively small share, so the collected images show obvious stray-light problems such as light leakage and glare; in other words, the performance of the depth imaging device is determined to be in a bad state, and the structural model of the device needs to be adjusted. In this case, the performance analysis method calls out at least part of the parameter information used to calculate the optical signal power S and/or the stray light power N, so as to indicate which parameters to adjust, allowing the most suitable structural model of the depth imaging device to be constructed quickly and improving design efficiency.
The parameter information used to calculate the optical signal power S and/or the stray light power N may be called out in either of two ways: the parameters are displayed on a terminal, and an operator manually modifies the parameter values before re-running the performance analysis; or the parameter values are adjusted automatically by a set program and the performance analysis is repeated until the result meets the requirement, that is, until the signal-to-noise ratio SNR is greater than or equal to the preset ratio.
For example, the preset ratio mentioned in this embodiment may be 10 dB, but is not limited thereto; it may be adjusted according to actual requirements.
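Steps S2061 to S2063 amount to a gate-and-report loop. In the sketch below, `params` is a hypothetical dictionary of the design parameters (f, α, I_s, R_s, D, d1, d2, ...) used in the S and N calculations; the patent does not prescribe its exact shape.

```python
import math

def check_design(signal_power: float, stray_power: float,
                 params: dict, threshold_db: float = 10.0) -> dict:
    """Compare the SNR with the preset ratio (10 dB by default); on a
    'bad' verdict, surface the parameter set so it can be adjusted,
    manually or by an automatic search, and the analysis re-run."""
    snr = 10.0 * math.log10(signal_power / stray_power)
    if snr < threshold_db:
        return {"state": "bad", "snr_db": snr, "adjust": params}
    return {"state": "good", "snr_db": snr, "adjust": None}
```

A caller would loop: adjust the flagged parameters, recompute S and N from them, and call `check_design` again until the state is "good".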
In step S206, analyzing the performance of the depth imaging apparatus based on the SNR, specifically, the method may further include:
step S2063, when the signal-to-noise ratio SNR is greater than or equal to the preset ratio, determining that the performance of the depth imaging device is in a good state; after this determination, an operator may take the parameter information corresponding to the depth imaging device as the final design parameters.
In order to facilitate the operator to obtain these final design parameters, in step S206, the performance of the depth imaging apparatus is analyzed based on the SNR, which may specifically include:
and S2064, when the performance of the depth imaging device is determined to be in a good state, displaying all parameter information used for calculating the optical signal power S and/or the parasitic light power N, and designing the depth imaging device meeting the requirement by an operator according to the parameter information.
The following describes the parameter information used in calculating the optical signal power S and the parasitic light power N in detail with reference to a formula.
In this embodiment, the relationship among the optical signal power S, the parasitic light power N, and the SNR of the depth imaging device satisfies the following formula (1), where:
and the light intensity I_G of the stray light at the G point position, the diameter δ of the image formed at the light receiving camera 102 by the stray light from the G point, and the stray light power N at the light receiving camera 102 satisfy the relation of the following formula (2), wherein:
in step S200, obtaining the optical signal power S of the reflected light received by the light receiving camera 102 may specifically include:
acquiring a focal length f of the light receiving camera 102, a light divergence angle α and a single-point light power Is of the dot matrix projector 101, a reflectivity Rs of the object to be measured, and a path D of the reflected light, and calculating the light signal power S based on the following formula (3), wherein:
in this embodiment, the optical signal power S obtained by the light receiving camera 102 Is calculated by using the parameter information of the focal length f of the light receiving camera 102, the light divergence angle α and the single-point optical power Is of the dot-matrix projector 101, the reflectivity Rs of the object to be measured, and the path D of the reflected light, so that compared with a scheme of directly collecting the optical signal of the depth imaging device in a use state, the value of the optical signal power S Is more accurate, and the subsequent analysis result Is more accurate, and in addition, when the SNR does not meet the requirement, the focal length f of the light receiving camera 102, the light divergence angle α and the single-point optical power Is of the dot-matrix projector 101, the reflectivity Rs of the object to be measured, and the path D of the reflected light can be conveniently retrieved, so that the subsequent adjustment of the parameter information Is facilitated, and the finally designed depth imaging device can avoid or improve the influence of stray light totally reflected by the protective cover 104 on the light receiving camera 102, thereby improving the problems of glare, light leakage, and the yield of the depth imaging device.
In step S202, before calculating the stray light power N at the light receiving camera 102 based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed at the light receiving camera 102 by the stray light at the G point position, the performance analysis method further includes:
Step S2011, acquiring a vertical distance d1 between the protective cover 104 and the optical center of the light receiving camera 102;
Step S2012, upon determining that the vertical distance d1 between the protective cover 104 and the optical center of the light receiving camera 102 is smaller than a preset distance value, calculating the diameter δ of the image formed at the light receiving camera 102 by the stray light at the G point position, based on the lens aperture F, the focus distance FL, and the focal length f of the light receiving camera 102 and the vertical distance d1, wherein:
In this embodiment, when it is determined that the vertical distance d1 between the protective cover 104 and the optical center of the light receiving camera 102 is smaller than the preset distance value, the G point position lies at macro range for the light receiving camera 102, and the image formed at the light receiving camera 102 can be approximated as a circle of confusion whose diameter δ is calculated by the above formula (4). That is, when the vertical distance d1 is smaller than the preset distance value, the diameter δ of the circle of confusion is determined by the vertical distance d1, the lens aperture F, the focus distance FL, and the focal length f. Therefore, in order to adjust the diameter δ of the circle of confusion so as to reduce the stray light power N, and thus avoid or mitigate the influence of the stray light totally reflected by the protective cover 104 on the light receiving camera 102, these parameters can be adjusted.
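Formula (4) likewise appears only as an image in the source. As a hedged stand-in, the dependence described above (δ determined by d1, F, FL, and f) matches the standard thin-lens circle-of-confusion formula, which is assumed here in place of the patent's exact expression:

```python
def blur_circle_diameter(f: float, F: float, FL: float, d1: float) -> float:
    """Standard thin-lens defocus blur: diameter (same units as f, FL, d1) of
    the circle of confusion for a point at distance d1 when a lens of focal
    length f and f-number F is focused at distance FL. This is an assumed
    stand-in for the patent's formula (4), which uses the same parameters."""
    aperture = f / F                               # physical aperture diameter
    return aperture * (f / (FL - f)) * abs(FL - d1) / d1

# With the example values given later in the text (f = 1.65 mm, F = 2,
# FL = 600 mm, d1 = 7 mm), the G point is far inside the focus distance,
# so the stray-light spot is strongly defocused:
delta = blur_circle_diameter(1.65, 2.0, 600.0, 7.0)  # ≈ 0.19 mm
```

A point exactly at the focus distance gives δ = 0, and δ grows as d1 shrinks below FL, which is why a cover very close to the camera produces a large blur spot.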
For example, the aforementioned preset distance value may be 10 mm, but is not limited thereto and may be adjusted according to the actual situation.
In step S202, before calculating the stray light power N at the light receiving camera 102 based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed at the light receiving camera 102 by the stray light at the G point position, the performance analysis method may further include:
Step S2013, acquiring the incident light intensity I_i of the stray light in the light emitted by the dot matrix projector 101 and the attenuation coefficient u of the protective cover 104;
Step S2014, determining the reflectivity R of the protective cover 104, the number M of internal reflections undergone when the stray light in the protective cover 104 is reflected to the G point position, and the single internal reflection path L, and calculating the light intensity I_G of the stray light at the G point position based on the following formula (5), wherein:
I_G = I_i * R^M * exp(-u*M*L)    formula (5).
Based on the above, the light intensity I_G of the stray light at the G point position is determined by the incident light intensity I_i, the attenuation coefficient u, the reflectivity R, the number M of internal reflections, and the single internal reflection path L. Therefore, in order to adjust the light intensity I_G of the stray light at the G point position so as to reduce the stray light power N, and thus avoid or mitigate the influence of the stray light totally reflected by the protective cover 104 on the light receiving camera 102, these parameters can be adjusted.
The change of the light intensity is described by taking the single internal reflection path L as an example. For example, when the reflected light ray in the protective cover 104 travels from the B point to the C point in fig. 2, the path traversed is the single internal reflection path L, and the light intensity I_C at the C point position can be expressed in terms of the light intensity I_B at the B point position as: I_C = I_B * exp(-u*L).
Therefore, from the A point position (i.e., the stray light incident position) to the G point position (i.e., the stray light emergent position), the light is internally reflected within the protective cover 104 M times, and the relationship between the light intensity at the A point position (i.e., the incident light intensity I_i) and the light intensity I_G at the G point position is given by the aforementioned formula (5).
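The step above can be sketched directly in code: chaining M single-bounce attenuation steps (each also multiplied by the reflectivity R) reproduces the closed form of formula (5). The numeric values used below are illustrative only.

```python
import math

def one_bounce(I_prev: float, u: float, L: float) -> float:
    """The B-to-C step from the text: intensity after traversing one single
    internal reflection path L with absorption coefficient u."""
    return I_prev * math.exp(-u * L)

def stray_intensity_at_G(I_i: float, R: float, M: int, u: float, L: float) -> float:
    """Formula (5): I_G = I_i * R^M * exp(-u*M*L), i.e. M internal
    reflections, each attenuated by the reflectivity R and by absorption
    over one path L."""
    return I_i * (R ** M) * math.exp(-u * M * L)

# The closed form agrees with chaining M bounces step by step
# (illustrative values: I_i = 50, R = 0.05, M = 4, u = 0.07, L = 1.7):
I = 50.0
for _ in range(4):
    I = one_bounce(I, u=0.07, L=1.7) * 0.05
closed = stray_intensity_at_G(50.0, 0.05, 4, 0.07, 1.7)
```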
In step S2014, determining the reflectivity R of the protective cover 104, the number M of internal reflections when the stray light in the protective cover 104 is reflected to the G point position, and the single internal reflection path L may specifically include:
Step S20141, determining the refraction angle θt of the stray light entering the protective cover 104 and the reflectivity R of the protective cover 104, based on the incident angle θi of the stray light in the light emitted by the dot matrix projector 101, the refractive index n1 of the medium between the functional element and the protective cover 104, and the refractive index n2 of the protective cover 104;
Step S20142, determining the single internal reflection path L based on the thickness d2 of the protective cover 104 and the refraction angle θt of the protective cover 104;
step S20143, determining the number M of internal reflections when the stray light in the protective cover 104 is reflected to the G point position based on the incident angle θ i of the stray light, the refraction angle θ t and the thickness d2 of the protective cover 104, the horizontal distance HG between the optical center of the dot matrix projector 101 and the G point position, and the vertical distance d1 between the protective cover 104 and the optical center of the light receiving camera 102.
In this embodiment, by adjusting the incident angle θi of the stray light, the refractive index n1, the refractive index n2 of the protective cover 104, the thickness d2 of the protective cover 104, the horizontal distance HG, and the vertical distance d1, the reflectivity R of the protective cover 104, the single internal reflection path L, and the number M of internal reflections can be adjusted, so that the light intensity I_G of the stray light at the G point position can be adjusted to reduce the stray light power N, thereby avoiding or mitigating the influence of the stray light totally reflected by the protective cover 104 on the light receiving camera 102.
Specifically, the incident angle θi of the stray light, the refractive index n1, the refractive index n2, the refraction angle θt of the protective cover 104, the reflectivity R, the thickness d2, the horizontal distance HG, the vertical distance d1, and the number M of internal reflections satisfy the following formulas (6) to (10), wherein:
n1 * sin θi = n2 * sin θt    formula (6);
R(R0, θt) = R0 + (1 - R0) * (1 - cos θt)^5    formula (7);
HG = d1 * tan θi + M * d2 * tan θt    formula (10).
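Formulas (6), (7), and (10) can be sketched together. Formulas (8) and (9) are not reproduced in the text, so the single internal reflection path is assumed here to be the geometric path L = d2/cos θt through the cover thickness, and formula (10) is simply solved for the number of internal reflections M:

```python
import math

def refraction_angle(theta_i_deg: float, n1: float, n2: float) -> float:
    """Formula (6), Snell's law: n1*sin(theta_i) = n2*sin(theta_t)."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta_i_deg)) / n2))

def reflectivity(R0: float, theta_t_deg: float) -> float:
    """Formula (7), a Schlick-style approximation:
    R = R0 + (1 - R0) * (1 - cos(theta_t))^5."""
    return R0 + (1.0 - R0) * (1.0 - math.cos(math.radians(theta_t_deg))) ** 5

def single_bounce_path(d2: float, theta_t_deg: float) -> float:
    """Assumed geometric form of the single internal reflection path
    (formulas (8)-(9) are not reproduced in the text): L = d2 / cos(theta_t)."""
    return d2 / math.cos(math.radians(theta_t_deg))

def reflections_to_G(HG: float, d1: float, d2: float,
                     theta_i_deg: float, theta_t_deg: float) -> float:
    """Formula (10) solved for M: HG = d1*tan(theta_i) + M*d2*tan(theta_t)."""
    ti = math.tan(math.radians(theta_i_deg))
    tt = math.tan(math.radians(theta_t_deg))
    return (HG - d1 * ti) / (d2 * tt)

# With the worked-example values given later (theta_i = 50 deg, n1 = 1, n2 = 1.5):
theta_t = refraction_angle(50.0, 1.0, 1.5)   # ≈ 30.7 deg
```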
For example, the medium between the functional element and the protective cover 104 may be air, and the refractive index n1 of air is equal to 1.
For example, in step S2013, the obtaining the attenuation coefficient u of the protective cover 104 may specifically include:
obtaining the thickness d2 of the protective cover 104 and the transmittance T of the protective cover 104, and calculating the attenuation coefficient u of the protective cover 104 based on the following formula (11), wherein:
That is, the attenuation coefficient u of the protective cover 104 can be adjusted by adjusting the transmittance T and the thickness d2 of the protective cover 104, so that the light intensity I_G of the stray light at the G point position can be adjusted to reduce the stray light power N, thereby avoiding or mitigating the influence of the stray light totally reflected by the protective cover 104 on the light receiving camera 102.
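Formula (11) also appears only as an image in the source. Under the Beer-Lambert relation T = exp(-u*d2) assumed here, the attenuation coefficient follows as u = -ln(T)/d2:

```python
import math

def attenuation_coefficient(T: float, d2: float) -> float:
    """Assumed form of formula (11): with transmittance T over the cover
    thickness d2 modeled as T = exp(-u*d2), the attenuation coefficient is
    u = -ln(T) / d2."""
    return -math.log(T) / d2

# With the worked-example values T = 0.9, d2 = 1.5 mm:
u = attenuation_coefficient(0.9, 1.5)   # ≈ 0.070 per mm
```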
It should be understood that, based on formula (10), the law of refraction corresponding to formula (6), and the like, the horizontal distance HK between the optical center O1 of the dot matrix projector 101 and the optical center O2 of the light receiving camera 102 can be derived, wherein:
HK = 2 * d1 * tan θi + M * d2 * tan θt    formula (12).
Based on the above, the parameters involved in the SNR calculation may include the lens aperture F, the focus distance FL, the focal length f, the refractive index n1, the refractive index n2, the transmittance T, the vertical distance d1, the thickness d2, the reflectivity Rs, the light divergence angle α, the single-point optical power Is, the path D of the reflected light, the incident light intensity I_i, the incident angle θi, and the horizontal distance HK. That is, in order to make the SNR satisfy the design requirement, the influence of the stray light totally reflected by the protective cover 104 on the light receiving camera 102 can be avoided or mitigated by adjusting these parameters.
The above describes one way of completing a depth imaging device design using the performance analysis method. An example of how stray light can be eliminated or reduced in a design follows.
In fact, in the product design of a depth imaging device, the lens aperture F, the focus distance FL, the focal length f, the refractive index n1, the refractive index n2, the transmittance T, the vertical distance d1, the thickness d2, the light divergence angle α, and the single-point optical power Is are known values, and once the application environment of the depth imaging device product is determined, the reflectivity Rs of the object to be measured is also a known value. According to the foregoing formulas (1) to (12), given parameters such as f = 1.65 mm, FL = 600 mm, F = 2, n1 = 1, n2 = 1.5, T = 0.9, d1 = 7 mm, d2 = 1.5 mm, Rs = 0.6, α = 1°, Is = 100 uW, the stray light incident light intensity I_i = 50 mW, and the incident angle θi = 50°, the SNR can be obtained as follows:
As shown in fig. 4, which characterizes the quality of the image collected by the light receiving camera 102, the performance of the three-dimensional reconstruction is better guaranteed when the SNR is greater than or equal to 10 dB. When the number of total reflections M = 4, however, the SNR within the usage range does not reach the 10 dB specification, especially at long distance, that is, when the path D of the reflected light is particularly large. It is clear that a horizontal distance HK = 20 mm between the optical center O1 of the dot matrix projector 101 and the optical center O2 of the light receiving camera 102 is not sufficient.
As shown in fig. 5, when the number of total reflections M = 6, the SNR is still well above 10 dB even at a reflected-light path D of 4 m. At this time, the horizontal distance between the optical center O1 of the dot matrix projector 101 and the optical center O2 of the light receiving camera 102 is HK = 22 mm; that is, when HK = 22 mm, the flare problem essentially does not occur.
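The HK values quoted for fig. 4 and fig. 5 can be checked against formula (12) with the worked-example parameters (d1 = 7 mm, d2 = 1.5 mm, θi = 50°, n1 = 1, n2 = 1.5):

```python
import math

def baseline_HK(d1: float, d2: float, M: int,
                theta_i_deg: float, n1: float, n2: float) -> float:
    """Formula (12): HK = 2*d1*tan(theta_i) + M*d2*tan(theta_t),
    with theta_t obtained from Snell's law (formula (6))."""
    theta_i = math.radians(theta_i_deg)
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)
    return 2.0 * d1 * math.tan(theta_i) + M * d2 * math.tan(theta_t)

hk_m4 = baseline_HK(7.0, 1.5, 4, 50.0, 1.0, 1.5)   # ≈ 20 mm, as in fig. 4
hk_m6 = baseline_HK(7.0, 1.5, 6, 50.0, 1.0, 1.5)   # ≈ 22 mm, as in fig. 5
```

The computed values round to the 20 mm and 22 mm quoted above, confirming that widening the projector-camera baseline HK (and hence increasing M) is what pushes the SNR above the 10 dB specification.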
This embodiment provides a system modeling and stray light elimination method for a large wide-angle depth imaging device. First, a stray light model is established, that is, the performance analysis method is used to establish, by formula, the relationship between the requirements of the application scene and the design parameters of each module of the depth imaging device. The designer can then adjust the design parameters of the different components to eliminate the stray light problem caused by the protective cover 104 of the large wide-angle depth imaging device. The method can be widely applied in fields such as robot navigation and obstacle avoidance.
When adjusting the parameters, the horizontal distance HK between the optical center O1 of the dot matrix projector 101 and the optical center O2 of the light receiving camera 102 may be adjusted first to change the number M of internal reflections. If the SNR still does not satisfy the design requirement after the horizontal distance HK has been adjusted, the stray light incident angle θi, the thickness d2 of the protective cover 104, the transmittance T, and so on may be adjusted next.
The embodiment of the present disclosure further provides a performance analysis apparatus for analyzing the performance of the aforementioned depth imaging apparatus, and details of a specific structure of the depth imaging apparatus are not described herein again.
As shown in fig. 6, the performance analysis device includes: the system comprises an acquisition module 400, a first calculation module 410, a second calculation module 420 and a performance analysis module 430.
With reference to the contents of fig. 1, fig. 2 and fig. 6, the obtaining module 400 is configured to obtain the optical signal power S of the reflected light received by the light receiving camera 102, where the reflected light is the light emitted to the object to be measured by the dot matrix projector 101 and reflected to the light receiving camera 102 by the object to be measured;
the first calculating module 410 is configured to calculate the stray light power N at the light receiving camera 102 based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed at the light receiving camera 102 by the stray light at the G point position, where the stray light at the G point position is the light which is emitted to the protective cover 104 by the dot matrix projector 101 and totally reflected out by the protective cover 104;
the second calculating module 420 is configured to calculate the signal-to-noise ratio SNR of the depth imaging device according to the optical signal power S and the stray light power N;
the performance analysis module 430 is configured to analyze the performance of the depth imaging device based on the SNR. Specifically, the obtaining module 400 is configured to obtain the focal length f of the light receiving camera 102, the light divergence angle α and the single-point light power Is of the dot matrix projector 101, the reflectivity Rs of the object to be measured, and the path D of the reflected light, and to calculate the optical signal power S based on the aforementioned formula (3).
Illustratively, before the first calculation module 410 calculates the stray light power N at the light receiving camera 102 based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed at the light receiving camera 102 by the stray light at the G point position, the obtaining module 400 is further configured to: acquire the vertical distance d1 between the protective cover 104 and the optical center of the light receiving camera 102; and, upon determining that the vertical distance d1 between the protective cover 104 and the optical center of the light receiving camera 102 is smaller than the preset distance value, calculate the diameter δ of the image formed at the light receiving camera 102 by the stray light at the G point position based on the lens aperture F, the focus distance FL, the focal length f of the light receiving camera 102, and the vertical distance d1, with specific reference to the aforementioned formula (4).
In addition, the obtaining module 400 is further configured to obtain the incident light intensity I_i of the stray light in the light emitted by the dot matrix projector 101 and the attenuation coefficient u of the protective cover 104; to determine the reflectivity R of the protective cover 104, the number M of internal reflections when the stray light in the protective cover 104 is reflected to the G point position, and the single internal reflection path L; and to calculate the light intensity I_G of the stray light at the G point position based on formula (5).
It should be understood that the performance analysis apparatus corresponds to the performance analysis method mentioned above, and detailed description thereof is omitted here.
An electronic device 50 according to an embodiment of the present disclosure is described below with reference to fig. 7. The electronic device 50 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 50 is in the form of a general purpose computing device. The components of the electronic device 50 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 that couples various system components including the memory unit 520 and the processing unit 510.
Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention as described in the description part of the above exemplary methods of the present specification. For example, the processing unit 510 may perform various steps in the performance analysis methods described previously.
The memory unit 520 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 5201 and/or a cache memory unit 5202, and may further include a read-only memory unit (ROM) 5203.
The electronic device 50 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 50, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. An input/output (I/O) interface 550 is connected to the display unit 540. Also, the electronic device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 50 over the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (14)
1. A performance analysis method for a depth imaging device, the depth imaging device comprising a substrate, a functional element and a protective cover, wherein the functional element is arranged on the substrate, the protective cover is supported on the substrate and is located on a side of the functional element facing away from the substrate, and the functional element comprises a dot matrix projector and a light receiving camera arranged at an interval from each other; the performance analysis method being characterized by comprising the following steps:
acquiring the optical signal power S of reflected light rays received by a light receiving camera, wherein the reflected light rays are light rays which are emitted to an object to be detected by a dot matrix projector and reflected to the light receiving camera by the object to be detected;
calculating, based on the light intensity I_G of the stray light at the G point position, the stray light power N at the position of the light receiving camera, wherein the stray light at the G point position is the light which is emitted to the protective cover by the dot matrix projector and totally reflected out by the protective cover;
calculating the signal-to-noise ratio (SNR) of the depth imaging device according to the optical signal power S and the stray light power N;
analyzing performance of the depth imaging device based on the signal-to-noise ratio (SNR).
2. The performance analysis method according to claim 1,
the optical signal power S, the stray light power N, and the signal-to-noise ratio SNR of the depth imaging device satisfy the relation of the following formula (1), wherein:
the light intensity I_G of the stray light at the G point position, the diameter δ of the image formed by the stray light at the G point position at the light receiving camera, and the stray light power N at the light receiving camera satisfy the relation of the following formula (2), wherein:
3. the performance analysis method according to claim 1, wherein the obtaining of the optical signal power S of the reflected light received by the light receiving camera specifically comprises:
acquiring a focal length f of the light receiving camera, a light divergence angle alpha and a single-point light power Is of the dot matrix projector, a reflectivity Rs of the object to be detected and a path D of the reflected light, and calculating the light signal power S based on the following formula (3), wherein:
4. The performance analysis method according to claim 1, wherein, before calculating the stray light power N at the light receiving camera based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed by the stray light at the G point position at the light receiving camera, the performance analysis method further comprises:
acquiring a vertical distance d1 between the protective cover and an optical center of the light receiving camera;
upon determining that the vertical distance d1 between the protective cover and the optical center of the light receiving camera is smaller than a preset distance value, calculating the diameter δ of the image formed by the stray light at the G point position at the light receiving camera based on the lens aperture F, the focus distance FL, and the focal length f of the light receiving camera and the vertical distance d1, wherein:
5. the performance analysis method of claim 4, wherein the predetermined distance value is 10mm.
6. The performance analysis method according to claim 1, wherein, before calculating the stray light power N at the light receiving camera based on the light intensity I_G of the stray light at the G point position and the diameter δ of the image formed by the stray light at the G point position at the light receiving camera, the performance analysis method further comprises:
obtaining the incident light intensity I_i of the stray light in the light emitted by the dot matrix projector and the attenuation coefficient u of the protective cover;
determining the reflectivity R of the protective cover, the number M of internal reflections when the stray light in the protective cover is reflected to the G point position, and the single internal reflection path L, and calculating the light intensity I_G of the stray light at the G point position based on the following formula (5), wherein:
I_G = I_i * R^M * exp(-u*M*L)    formula (5).
7. The method of claim 6, wherein the determining the reflectivity R of the protection cover, the number M of internal reflections when stray light in the protection cover is reflected to the G point position, and the single internal reflection path L specifically comprises:
determining the refraction angle θ_t of the stray light entering the protective cover and the reflectivity R of the protective cover based on the incidence angle θ_i of the stray light in the light emitted by the dot matrix projector, the refractive index n1 of the medium between the functional element and the protective cover, and the refractive index n2 of the protective cover;
determining the single internal reflection path L based on the thickness d2 of the protective cover and the refraction angle θ_t;
and determining the number M of internal reflections when the stray light in the protective cover is reflected to the G point position based on the incidence angle θ_i of the stray light, the refraction angle θ_t, the thickness d2 of the protective cover, the horizontal distance HG between the optical center of the dot matrix projector and the G point position, and the vertical distance d1 between the protective cover and the optical center of the light receiving camera, wherein the optical center of the dot matrix projector and the optical center of the light receiving camera are located on the same horizontal line.
8. The performance analysis method according to claim 7, wherein the incidence angle θ_i of the stray light, the refractive index n1, the refractive index n2, the refraction angle θ_t, the reflectivity R of the protective cover, the thickness d2, the horizontal distance HG, the vertical distance d1, and the number M of internal reflections satisfy the following formulas (6) to (10), wherein:
n1 · sin θ_i = n2 · sin θ_t   formula (6);
R(R_0, θ_t) = R_0 + (1 − R_0)(1 − cos θ_t)^5   formula (7);
HG = d1 · tan θ_i + M · d2 · tan θ_t   formula (10).
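Formulas (6), (7), and (10) can be checked numerically. The sketch below implements Snell's law, the Schlick-style reflectivity of formula (7), and formula (10) solved for the reflection count M; the function names are illustrative, and formulas (8) and (9), which appear only as images in the source, are omitted:

```python
import math

def refraction_angle(theta_i, n1, n2):
    # Formula (6), Snell's law: n1 * sin(theta_i) = n2 * sin(theta_t).
    return math.asin(n1 * math.sin(theta_i) / n2)

def reflectivity(R0, theta_t):
    # Formula (7): Schlick-style approximation with base reflectance R0.
    return R0 + (1 - R0) * (1 - math.cos(theta_t)) ** 5

def internal_reflections(theta_i, theta_t, d1, d2, HG):
    # Formula (10) rearranged for M:
    # HG = d1 * tan(theta_i) + M * d2 * tan(theta_t).
    return (HG - d1 * math.tan(theta_i)) / (d2 * math.tan(theta_t))
```

Constructing HG from a known M and recovering the same M is a quick sanity check of formula (10).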
9. The performance analysis method according to claim 7, wherein obtaining the attenuation coefficient u of the protective cover specifically comprises:
obtaining the thickness d2 and the transmittance T of the protective cover, and calculating the attenuation coefficient u of the protective cover based on the following formula (11), wherein:
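Formula (11) itself appears only as an image in the source text. Assuming the usual Beer-Lambert relation T = exp(−u · d2), with surface-reflection losses ignored, the attenuation coefficient could be recovered as follows (the relation and function name are assumptions, not the patent's own formula):

```python
import math

def attenuation_coefficient(T, d2):
    # Assumed Beer-Lambert relation: T = exp(-u * d2), hence u = -ln(T) / d2.
    return -math.log(T) / d2
```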
10. The performance analysis method according to any one of claims 1 to 9, wherein analyzing the performance of the depth imaging device based on the signal-to-noise ratio SNR specifically comprises:
comparing the signal-to-noise ratio SNR with a preset ratio;
when the signal-to-noise ratio SNR is smaller than the preset ratio, determining that the performance of the depth imaging device is in a bad state;
and when the performance of the depth imaging device is in a bad state, retrieving at least part of the parameter information used for calculating the optical signal power S and/or the stray light power N, so as to indicate that the at least part of the parameter information is to be adjusted.
11. The performance analysis method according to claim 10, wherein the preset ratio is 10 dB.
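Claims 10 and 11 together can be sketched as follows; the decibel form of the SNR is an assumption inferred from the 10 dB preset ratio of claim 11, and the names are illustrative:

```python
import math

PRESET_RATIO_DB = 10.0  # the preset ratio of claim 11

def snr_db(S, N):
    # Signal-to-noise ratio in decibels (dB form assumed from claim 11).
    return 10.0 * math.log10(S / N)

def is_performance_bad(S, N, threshold_db=PRESET_RATIO_DB):
    # Claim 10: performance is in a bad state when SNR < the preset ratio.
    return snr_db(S, N) < threshold_db
```

For example, a signal 100 times the stray-light power gives 20 dB and passes, while a signal only twice the stray-light power gives about 3 dB and is flagged as bad.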
12. A performance analysis device for analyzing the performance of a depth imaging device, the depth imaging device comprising a substrate, a functional element and a protective cover, the functional element being arranged on the substrate, the protective cover being supported on the substrate and positioned on a side of the functional element facing away from the substrate, and the functional element comprising a dot matrix projector and a light receiving camera arranged at an interval; wherein the performance analysis device comprises:
an acquisition module, configured to acquire the optical signal power S of the reflected light received by the light receiving camera, wherein the reflected light is light emitted by the dot matrix projector toward an object to be detected and reflected by the object to be detected to the light receiving camera;
a first calculation module, configured to calculate the stray light power N at the light receiving camera based on the light intensity I_G of the stray light at the G point position, wherein the stray light at the G point position is light emitted by the dot matrix projector toward the protective cover and totally reflected by the protective cover;
a second calculation module, configured to calculate the signal-to-noise ratio SNR of the depth imaging device according to the optical signal power S and the stray light power N;
and a performance analysis module, configured to analyze the performance of the depth imaging device based on the signal-to-noise ratio SNR.
13. An electronic device, comprising:
a memory storing computer readable instructions; and
a processor reading the computer readable instructions stored in the memory to perform the performance analysis method according to any one of claims 1 to 11.
14. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the performance analysis method of any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210929890.9A CN115442590A (en) | 2022-08-02 | 2022-08-02 | Performance analysis method and device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115442590A true CN115442590A (en) | 2022-12-06 |
Family
ID=84242018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210929890.9A Pending CN115442590A (en) | 2022-08-02 | 2022-08-02 | Performance analysis method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115442590A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116337225A (en) * | 2023-05-06 | 2023-06-27 | 武汉量子技术研究院 | Method and experimental device for improving photoelectric signal detection signal-to-noise ratio based on vortex rotation
CN116337225B (en) * | 2023-05-06 | 2023-08-15 | 武汉量子技术研究院 | Method and experimental device for improving photoelectric signal detection signal-to-noise ratio based on vortex rotation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10440498B1 (en) | Estimating room acoustic properties using microphone arrays | |
US10979635B2 (en) | Ultra-wide field-of-view flat optics | |
US8675292B2 (en) | Projection lens and projection apparatus | |
CN112640113A (en) | Pixel unit with multiple photodiodes | |
JP2018084574A (en) | Nte display systems and methods with optical trackers | |
CN115442590A (en) | Performance analysis method and device, electronic equipment and computer readable storage medium | |
CN108603830A (en) | Single wavelength ellipsometric measurement method with improved spot size ability | |
US9658056B2 (en) | Projection apparatus for measurement system based on exit pupil positions | |
CN103091841A (en) | Two-tone infrared imaging guidance simulation optical system based on digital micromirror display (DMD) | |
CN203881441U (en) | Free-form surface-based imaging spectrometer optical splitting system | |
CN103900688A (en) | Imaging spectrometer beam splitting system based on free-form surface | |
CN205449356U (en) | Glass surface stress appearance | |
CN110851965B (en) | Light source optimization method and system based on physical model | |
US7119903B1 (en) | Method and system for measuring differential scattering of light off of sample surfaces | |
KR20110017344A (en) | Catadioptric projection objective | |
KR102023875B1 (en) | Off-axis optic device in which linear astigmatism is removed | |
JP5264847B2 (en) | Ranging device, lens system, and imaging device | |
JP2003226300A (en) | Star sensor | |
US20100328623A1 (en) | Projector apparatus | |
JP5578420B2 (en) | Optical system for image projection device and image projection device | |
CN113138066B (en) | External distortion detection method, system and platform thereof and electronic equipment | |
CN205383999U (en) | Object roughness optical detection system | |
Caldwell et al. | Comparison of acoustic retroreflection from corner cube arrays using FDTD simulation | |
Shi et al. | Stray light analysis and baffle design of remote sensing camera for microsatellite | |
US10732381B2 (en) | Optical device for observing an object, minimising the internal reflection phenomenon |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |