WO2023103559A1 - Three-dimensional recognition device, terminal, calibration method, and storage medium - Google Patents
Three-dimensional recognition device, terminal, calibration method, and storage medium
- Publication number
- WO2023103559A1 (PCT/CN2022/123544)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- infrared
- lens
- camera
- image set
- rgb
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/029—Location-based management or tracking services
Definitions
- the present application relates to but is not limited to the field of intelligent terminals, and in particular relates to a three-dimensional recognition device, a terminal, a calibration method, and a storage medium.
- Embodiments of the present application provide a three-dimensional recognition device, a terminal, a calibration method, and a storage medium.
- the embodiment of the present application provides a three-dimensional identification device, which is arranged inside the display screen of the terminal.
- the three-dimensional identification device includes: an RGB component, the RGB component including an RGB camera and a first lens, with a first optical channel connected between the first lens and the RGB camera; an infrared emission assembly, the infrared emission assembly including an infrared floodlight illuminator and a second lens, with a second optical channel connected between the infrared floodlight illuminator and the second lens, the second lens being adjacent to the first lens; and an infrared receiving assembly, the infrared receiving assembly including a first infrared camera and a third lens, with a third optical channel connected between the first infrared camera and the third lens, the third lens being adjacent to the first lens.
- an embodiment of the present application provides a terminal, including: the three-dimensional identification device as described in the first aspect; and a display screen, the three-dimensional identification device being arranged inside the display screen, wherein the area of the display screen corresponding to the lenses of the three-dimensional identification device is an area with enhanced light transmittance.
- the embodiment of the present application provides a calibration method applied to a three-dimensional recognition device, the three-dimensional recognition device including an RGB component, an infrared emitting component, and an infrared receiving component, wherein the RGB component includes an RGB camera and a first lens, a first optical channel being connected between the first lens and the RGB camera; the infrared emitting assembly includes an infrared floodlight illuminator and a second lens, a second optical channel being connected between the infrared floodlight illuminator and the second lens, the second lens being adjacent to the first lens; and the infrared receiving assembly includes a first infrared camera and a third lens, a third optical channel being connected between the first infrared camera and the third lens, the third lens being adjacent to the first lens.
- the calibration method includes: when the infrared flood illuminator is in a working state, capturing, according to a first preset time sequence, a first image set with the first infrared camera and a second image set with the RGB camera, wherein the target objects captured by the first infrared camera at different moments are different, the target objects captured by the RGB camera at different moments are different, and the target objects captured by the first infrared camera and the RGB camera at the same moment are the same; performing calibration between the first infrared camera and the infrared floodlight illuminator according to the first image set; and fusing the images of the first image set and the second image set captured at the same moment to obtain a first fused image set, and performing calibration between the RGB camera and the first infrared camera according to the first fused image set.
- an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the calibration method described in the third aspect is implemented.
- the embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the calibration method described in the third aspect.
- FIG. 1 is a layout diagram of a full-screen three-dimensional recognition device and an RGB camera in the prior art;
- FIG. 2 is a schematic diagram of a three-dimensional recognition device set in a terminal according to an embodiment of the present application;
- FIG. 3 is a schematic cross-sectional view of the three-dimensional recognition device provided by the present application;
- FIG. 4 is a front view of the three-dimensional recognition device provided by the present application;
- FIG. 5 is a schematic layout diagram of Embodiment 1 of the present application;
- FIG. 6 is a schematic layout diagram of Embodiment 2 of the present application;
- FIG. 7 is a schematic layout diagram of Embodiment 3 of the present application;
- FIG. 8 is a flow chart of the calibration method applied to the three-dimensional recognition device of Embodiment 1 provided by the present application;
- FIG. 9 is a flow chart of the calibration method applied to the three-dimensional recognition device of Embodiment 2 provided by the present application;
- FIG. 10 is a flow chart of the calibration method applied to the three-dimensional recognition device of Embodiment 3 provided by the present application;
- FIG. 11 is a flow chart of filtering laser speckle information provided by another embodiment of the present application;
- FIG. 12 is a flow chart of global calibration provided by another embodiment of the present application;
- FIG. 13 is an example diagram of a target object provided by the present application;
- FIG. 14 is an apparatus diagram of a terminal provided by another embodiment of the present application.
- the present application provides a three-dimensional recognition device, a terminal, a calibration method, and a storage medium.
- the three-dimensional recognition device includes: an RGB component, the RGB component including an RGB camera and a first lens, with a first optical channel connected between the first lens and the RGB camera; an infrared emitting assembly, the infrared emitting assembly including an infrared flood illuminator and a second lens, with a second optical channel connected between the infrared flood illuminator and the second lens, the second lens being adjacent to the first lens; and an infrared receiving assembly, the infrared receiving assembly including a first infrared camera and a third lens, with a third optical channel connected between the first infrared camera and the third lens, the third lens being adjacent to the first lens.
- the RGB camera, the infrared floodlight illuminator, and the first infrared camera can be arranged separately from their lenses, with light transmitted through the optical channels, so that the layout of the multiple lenses under the screen can be made more compact. This effectively reduces the area that must be specially treated to improve light transmittance, improves the display effect of the full screen, and improves the user experience.
- the embodiment of the present application provides a three-dimensional identification device, which is arranged inside the display screen of the terminal, and the three-dimensional identification device includes:
- the RGB component includes an RGB camera 210 and a first lens 212, and the first lens 212 is connected to the RGB camera 210 through a first optical channel 211;
- the infrared emitting assembly includes an infrared floodlight illuminator 310 and a second lens 312, a second optical channel 311 is connected between the infrared floodlight illuminator 310 and the second lens 312, and the second lens 312 is adjacent to the first lens 212;
- the infrared receiving assembly includes a first infrared camera 410 and a third lens 412, a third optical channel 411 is connected between the first infrared camera 410 and the third lens 412, and the third lens 412 is adjacent to the first lens 212 .
- the layout of a common 3D recognition device is shown in FIG. 1.
- the 3D recognition devices are arranged side by side on the display screen 110 of the terminal 10. Since the bodies of the 3D recognition devices are relatively large, a certain distance is required between the lenses, which results in a larger light transmittance enhanced region 120 and affects the display experience of the full screen.
- the three-dimensional identification device of this embodiment separates the main body and the lenses of the device. In the manner shown in FIG. 2, only the size of the lenses needs to be considered for a compact layout, and transmitting light through the optical channels can effectively reduce the area of the light transmittance enhanced region 120 and improve the user experience.
- the first lens 212 of the RGB camera 210 is relatively large, while the lenses of the infrared emitting component and the infrared receiving component are smaller, usually small-aperture lenses. Therefore, as shown in FIG. 2, the first lens 212 is surrounded by several small-aperture lenses, further improving the compactness of the layout.
- the small-aperture lenses are the second lens 312 and the third lens 412. Since the RGB camera 210 requires a large amount of light, the apertures of the second lens 312 and the third lens 412 are relatively small compared with the aperture of the first lens 212; therefore, arranging the second lens 312 and the third lens 412 adjacent to the first lens 212 will not interfere with the body of the RGB camera 210.
- the apertures of the second lens 312 and the third lens 412 can be adjusted according to the actual light input requirements, and the specific sizes are not limited here.
- the first optical channel 211 , the second optical channel 311 and the third optical channel 411 can be in any shape, as long as they can ensure that light can enter or exit, and there is no limitation here.
- FIG. 3 is a cross-sectional view taken along the vertical direction when the terminal is placed horizontally.
- the RGB camera 210 is connected to the first lens 212 through the first optical channel 211, so there is a certain distance between the RGB camera 210 and the first lens 212.
- the second lens 312 and the third lens 412 can be arranged adjacent to the first lens 212 in the manner shown in FIG. 5, that is, surrounding the first lens 212. The specific positions can be adjusted according to actual needs, as long as there is no interference between them, and this embodiment does not limit this.
- the bodies of the RGB camera 210, the infrared floodlight illuminator 310, and the first infrared camera 410 are larger than their corresponding lenses. If the optical channels were all straight, realizing the lens layout shown in FIG. 5 would require a longer first optical channel 211 to keep the infrared flood illuminator 310 and the first infrared camera 410 from interfering with the RGB camera 210, which would increase the thickness of the terminal. Therefore, the second optical channel 311 and the third optical channel 411 in this embodiment both adopt a right-angle structure: a reflector changes the propagation direction of the infrared light, so that the infrared flood illuminator 310 and the first infrared camera 410 can be laterally staggered from the RGB camera 210. The resulting internal layout can be seen in the front view shown in FIG. 4; it effectively reduces the longitudinal space required by the three-dimensional identification device and therefore its thickness.
- three-dimensional recognition can be realized by using Time of Flight (TOF).
- TOF: Time of Flight
- VCSEL: Vertical-Cavity Surface-Emitting Laser
- the inside of the first infrared camera 410 can also be adjusted according to actual needs, and it only needs to be able to realize accurate collection of TOF, and this embodiment does not limit the internal structure of the device.
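The TOF principle used here can be illustrated with a minimal depth calculation (a generic sketch of the physics, not the device's actual signal processing): light travels to the target and back, so depth is half the round-trip time multiplied by the speed of light.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Depth from a time-of-flight measurement: the emitted pulse
    covers the camera-to-target distance twice, so the one-way
    depth is c * t / 2."""
    return C * round_trip_seconds / 2.0

# A round trip of about 6.67 ns corresponds to roughly 1 m of depth.
depth_m = tof_depth(2.0 / C)
```

The nanosecond-scale timing this implies is why TOF sensors integrate many modulated pulses rather than timing a single one.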
- the infrared emitting assembly of this embodiment also includes an infrared dot matrix projector 320 and a fourth lens 322, and a fourth optical channel 321 is connected between the infrared dot matrix projector 320 and the fourth lens 322,
- the fourth lens 322 is adjacent to the first lens 212;
- a third reflector (not shown) is also arranged in the fourth optical channel 321, and the third reflector is used to reflect the light emitted by the infrared dot matrix projector 320 to the fourth lens 322.
- the arrangement of the infrared dot matrix projector 320, the fourth lens 322, the fourth optical channel 321, and the third reflector can refer to that of the infrared flood illuminator 310, the second lens 312, the second optical channel 311, and the first reflector 313, and will not be repeated here for brevity.
- the fourth lens 322, the second lens 312, and the third lens 412 are arranged around the first lens 212, and the specific positions can be adjusted according to actual needs; this embodiment does not limit this.
- in Embodiment 1, the infrared flood illuminator 310 uses a high-power VCSEL, while this embodiment adopts a monocular structured light scheme, so the infrared flood illuminator 310 can use a relatively low-power VCSEL and the infrared dot matrix projector 320 uses a high-power VCSEL to emit laser speckle. The structure of the first infrared camera 410 can also be simplified compared with Embodiment 1; the internal structure of the device is not described further here.
- the infrared receiving assembly of this embodiment includes a second infrared camera 420 and a fifth lens 422; a fifth optical channel 421 is connected between the second infrared camera 420 and the fifth lens 422, and the fifth lens 422 is adjacent to the first lens 212. In addition, a fourth reflector (not shown in the figure) is arranged in the fifth optical channel 421 and is used to reflect the light incident through the fifth lens 422 to the second infrared camera 420.
- the arrangement of the second infrared camera 420, the fifth lens 422, the fifth optical channel 421, and the fourth reflector can refer to that of the first infrared camera 410, the third lens 412, the third optical channel 411, and the second reflector, and will not be repeated here for brevity.
- the fifth lens 422, the fourth lens 322, the second lens 312, and the third lens 412 are arranged around the first lens 212, and the specific positions can be adjusted according to actual needs; this embodiment does not limit this.
- Embodiment 2 can realize three-dimensional recognition with monocular structured light; this embodiment adds a second infrared camera 420 and can therefore realize three-dimensional recognition with binocular structured light. The specific device parameters can be selected according to actual needs and are not limited here.
- the main body of the identification device can be set in the internal space of the terminal, effectively reducing the area of the light transmittance enhanced region 120, thereby reducing the display area affected by it, improving the overall display effect of the terminal, and improving the user experience.
- an embodiment of the present application also provides a terminal, and the terminal 10 includes:
- the display screen 110, the three-dimensional identification device being arranged inside the display screen 110, wherein the area of the display screen 110 corresponding to the lenses of the three-dimensional identification device is a light transmittance enhanced area 120.
- the number of lenses of the 3D recognition device depends on its scheme; FIG. 4 is a schematic layout diagram of the 3D recognition device corresponding to Embodiment 3.
- the infrared dot matrix projector 320 and the second infrared camera 420 can be omitted on the basis of FIG. 4.
- the area of the light transmittance enhanced region 120 can then be further reduced on the basis of FIG. 4, and its specific area can be adjusted according to the number of lenses required by the structure used for three-dimensional recognition.
- the present application also provides a calibration method applied to the three-dimensional recognition device described in Embodiment 1.
- the calibration method includes but is not limited to the following steps:
- Step S810: when the infrared floodlight illuminator is in the working state, according to the first preset time sequence, the first image set is captured by the first infrared camera and the second image set is captured by the RGB camera, wherein the target objects photographed by the first infrared camera at different moments are different, the target objects photographed by the RGB camera at different moments are different, and the target objects photographed by the first infrared camera and the RGB camera at the same moment are the same;
- Step S820 performing calibration between the first infrared camera and the infrared flood illuminator according to the first image set;
- Step S830 fusing images captured at the same time in the first image set and the second image set respectively to obtain a first fused image set, and performing calibration between the RGB camera and the first infrared camera according to the first fused image set.
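The capture-and-fuse flow of steps S810 to S830 can be sketched as follows. This is a minimal illustration with hypothetical timestamp-keyed image sets; the patent does not specify data structures or the fusion operator, so a pixel-wise average of same-size images is assumed as a placeholder.

```python
import numpy as np

def fuse_image_sets(ir_set, rgb_set):
    """Pair images captured at the same instants of the preset time
    sequence and fuse each pair.  The averaging below is only an
    illustrative stand-in for the fusion step of S830."""
    fused = {}
    for t in sorted(ir_set.keys() & rgb_set.keys()):  # same-moment pairs only
        ir, rgb = ir_set[t].astype(float), rgb_set[t].astype(float)
        fused[t] = (ir + rgb) / 2.0  # placeholder fusion operator
    return fused

# Two shots at t = 0 s and t = 2 s of a hypothetical preset time sequence.
ir = {0: np.zeros((4, 4)), 2: np.ones((4, 4))}
rgb = {0: np.ones((4, 4)), 2: np.ones((4, 4))}
first_fused = fuse_image_sets(ir, rgb)
```

Keying both sets by the shared time sequence is what guarantees that every fused pair shows the same target object, as required by S810.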
- in the layout of FIG. 1, the baselines 130 between the multiple lenses coincide, so the multiple photosensitive devices can be calibrated directly.
- the first baseline 510 between the second lens 312 and the first lens 212 and the second baseline 520 between the third lens 412 and the first lens 212 cannot be guaranteed to lie on the same straight line. Therefore, with the structure of the three-dimensional recognition device of the embodiment of the present application, two calibrations are required between the optical sensing devices: the first infrared camera 410 is calibrated with the RGB camera 210, and the infrared floodlight illuminator 310 is calibrated with the RGB camera 210, ensuring that multiple optical sensor devices that are not distributed in parallel can be calibrated normally and used in conjunction with each other.
- a target object with obvious contrast features, such as a checkerboard or a dot pattern, is usually used for calibration, and the same pattern can be photographed from different angles by the photosensitive devices.
- this embodiment can use the calibration object shown in FIG. 13 as the target object. The target object has four sides, each side carries the same checkerboard figure, and the pattern on each side is at a different angle, so that photographing different sides yields different image content.
- the target object shown in FIG. 13 can also be equipped with a mechanical device so that each side can tilt forward or backward under command control, allowing more image content to be captured from different angles.
- background infrared light sources and background visible light sources can also be added according to actual needs to ensure sufficient light, and details will not be repeated here.
- the 3D recognition device can be fixed in place with the RGB camera 210 facing the target object. Since the second lens 312 and the third lens 412 are adjacent to the first lens 212 and have small apertures, they can also be regarded as facing the calibration object.
- the first preset time sequence is the interval between two adjacent shots.
- the first preset time sequence can be set to match the rotation period of the target object. For example, if one image is taken of each side and it takes 2 seconds for the target object to turn from one side to the next, the first preset time sequence can be 0 seconds, 2 seconds, 4 seconds, and so on.
- the specific number of images can be determined from the number of homography matrices corresponding to the number of intrinsic parameters in the calibration parameters. For example, if the number of intrinsic coefficients to be solved is n, then n (when n is even) or n+1 (when n is odd) equations solvable by the least squares method are needed; since each homography matrix provides two equations, the number of homography matrices is n/2 (n even) or (n+1)/2 (n odd), and the number of photos taken in the second round of the target object at different inclinations can accordingly be set to n/2 (n even) or (n+1)/2 (n odd).
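The counting rule above can be written out directly; each homography contributes two constraint equations, so n intrinsic coefficients require ⌈n/2⌉ homographies, and hence that many photos at different inclinations.

```python
def required_views(n_coeffs: int) -> int:
    """Number of homography matrices (one per photo of the target at
    a different inclination) needed so that the least-squares system
    has at least n_coeffs equations, two per homography.
    Equals n/2 for even n and (n+1)/2 for odd n."""
    return (n_coeffs + 1) // 2  # ceiling of n/2
```

For instance, 4 intrinsic coefficients need 2 views, while 5 coefficients need 3, matching the even/odd cases in the text.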
- the number of shots can also be increased according to actual needs, as long as calibration can be achieved, and there are no limitations here.
- the image formed by the light of the infrared flood illuminator 310 reflected from the target is a two-dimensional image, from which the plane equation of the target object's plane in camera coordinates can be estimated. The laser point cloud associated with that plane is then transformed into the camera coordinate system through the mapping between the laser coordinate system and the camera coordinate system, the distance from each point of the laser point cloud to the plane is constructed, and these distances are minimized by the least squares method to complete the calibration.
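The plane-based step can be sketched in NumPy as a generic least-squares formulation (not the patent's exact solver): fit the target plane from points observed in camera coordinates, map the laser point cloud into the camera frame with an assumed rotation R and translation t, and evaluate the point-to-plane distances that the calibration would minimize over (R, t).

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x + d = 0 through a set of 3-D points:
    the normal is the right singular vector of the smallest singular
    value of the centered point matrix."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def point_plane_distances(cloud, R, t, normal, d):
    """Distances from a laser point cloud, mapped into the camera
    frame by (R, t), to the plane n.x + d = 0."""
    cam = cloud @ R.T + t
    return np.abs(cam @ normal + d)

# Points on the plane z = 1 as seen by the camera (toy data).
plane_pts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
n, d = fit_plane(plane_pts)
# With identity extrinsics, on-plane laser points have zero distance.
dists = point_plane_distances(plane_pts, np.eye(3), np.zeros(3), n, d)
```

In the actual calibration, (R, t) would be the unknowns and these residual distances the objective of the least-squares solve.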
- the above calibration method is only an example.
- the calibration of the first infrared camera 410 and the RGB camera 210 can also be implemented in other ways, which will not be limited here.
- the images in the first image set and the second image set are captured by different devices at the same time, so the images captured at the same moment can be fused.
- those skilled in the art are familiar with how to complete the calibration of the RGB camera 210, for example by using Zhang Zhengyou's method, OpenCV, or Matlab to obtain the respective intrinsic parameters and distortion coefficients, which will not be repeated here.
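A core step of Zhang Zhengyou's method is estimating the homography between the planar target and its image; a minimal direct-linear-transform (DLT) sketch of that step follows (illustrative, not the patent's implementation; production code would use a library routine with normalization and many correspondences).

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of H with dst ~ H @ src in homogeneous
    coordinates, from >= 4 point correspondences on a planar target.
    Each correspondence contributes two rows; the null vector of the
    stacked system (smallest singular value) gives H up to scale."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale/sign ambiguity

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# A pure translation by (1, 2) is recovered exactly from 4 corners.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(1, 2), (2, 2), (1, 3), (2, 3)]
H = estimate_homography(src, dst)
```

In Zhang's method, one such homography per view of the checkerboard feeds the linear system that yields the intrinsic matrix.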
- the calibration method of the present application can also be applied to the three-dimensional recognition device described in the second embodiment above, which also includes but is not limited to the following steps:
- Step S910: when the infrared dot matrix projector is in the working state, according to the second preset time sequence, the third image set is captured by the first infrared camera and the fourth image set is captured by the RGB camera, wherein the target objects photographed by the first infrared camera at different moments are different, the target objects photographed by the RGB camera at different moments are different, and the target objects photographed by the first infrared camera and the RGB camera at the same moment are the same;
- Step S920 performing calibration between the first infrared camera and the infrared dot matrix projector according to the third image set;
- Step S930 fusing images taken at the same time in the third image set and the fourth image set respectively to obtain a second fused image set, and performing calibration between the RGB camera and the first infrared camera according to the second fused image set.
- the image capture of the third image set can be performed after the image capture of the first image set is completed.
- alternatively, the infrared flood illuminator 310 and the infrared dot matrix projector 320 can be started alternately, that is, the images of the first image set and the third image set are captured alternately; the specific working mode can be selected according to the timing requirements and is not limited here.
- operating the infrared dot matrix projector 320 and the infrared flood illuminator 310 alternately prevents the laser speckle of the infrared dot matrix projector 320 from interfering with the infrared light emitted by the infrared flood illuminator 310.
- the infrared light emitted by the infrared dot matrix projector 320 is laser speckle; the first infrared camera 410 receives the light reflected from the target object irradiated by the infrared dot matrix projector 320 and obtains laser speckle images. The three-dimensional laser speckle points are then put in correspondence with the two-dimensional image of the first infrared camera 410 and converted: homogeneous coordinates are introduced to realize the transformation from three-dimensional space points to the two-dimensional image, so that each point is translated and rotated; the plane equations of the multiple groups of target objects in camera coordinates are then obtained, and the distance error between points and planes is optimized to achieve the calibration.
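The homogeneous-coordinate transformation described above, from a 3-D point in the emitter's frame to a 2-D image point, can be sketched as follows (R, t, and the intrinsic matrix K are illustrative values, not device parameters):

```python
import numpy as np

def project(point_laser, R, t, K):
    """Rotate/translate a 3-D point from the laser frame into the
    camera frame, then project it through the intrinsic matrix K;
    the final division by the third coordinate is the step that
    homogeneous coordinates make linear."""
    p_cam = R @ point_laser + t   # rigid transform (rotation + translation)
    uvw = K @ p_cam               # perspective projection, homogeneous pixels
    return uvw[:2] / uvw[2]       # back to inhomogeneous pixel coordinates

K = np.array([[800.0, 0.0, 320.0],   # fx, skew, cx (hypothetical)
              [0.0, 800.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
# A point on the optical axis lands at the principal point (320, 240).
uv = project(np.array([0.0, 0.0, 2.0]), R, t, K)
```

The calibration adjusts R and t so that projected speckle points agree with where they are actually observed in the image.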
- the calibration method of the present application can also be applied to the three-dimensional recognition device described in the third embodiment above, which also includes but is not limited to the following steps:
- Step S1010: when the infrared dot matrix projector is in the working state, according to the second preset time sequence, the third image set is captured by the second infrared camera and the fourth image set is captured by the RGB camera, wherein the target objects photographed by the second infrared camera at different moments are different, the target objects photographed by the RGB camera at different moments are different, and the target objects photographed by the second infrared camera and the RGB camera at the same moment are the same;
- Step S1020 performing calibration between the second infrared camera and the infrared dot matrix projector according to the third image set;
- Step S1030 fusing images taken at the same time in the third image set and the fourth image set respectively to obtain a second fused image set, and performing calibration between the RGB camera and the second infrared camera according to the second fused image set.
- the technical principle of this embodiment can refer to that of the embodiment described above, with the first infrared camera 410 working in cooperation with the infrared flood illuminator 310 and the second infrared camera 420 working in cooperation with the infrared dot matrix projector 320; of course, the pairing can also be exchanged, and this is not limited here.
- the first image set can also be captured by the first infrared camera 410 and the second infrared camera 420 separately while the infrared flood illuminator 310 is working, obtaining two usable images that can be cross-referenced, which effectively improves the efficiency of shooting and calibration. A similar operation can be applied to the third image set, which will not be repeated here.
- assume the target object rotates according to the time sequence, that is, the image capture of the first image set and the second image set is completed for the first side, and the image capture of the third image set and the fourth image set is completed for the second side. Even though the inclination angle of the side being photographed can be adjusted under the control of the mechanical device, it is difficult to ensure that the images captured after the target object completes a full turn are sufficient.
- after the target object completes one round of rotation, the second round can start offset by 90 degrees from the first, that is, the image capture of the first image set and the second image set is performed for the second side, ensuring that each image set captures images different from those of the first round.
- in a third round, the target object can also be rotated by a non-integral or small angle: such a rotation of the calibration object causes the cameras to capture images spanning two sides or covering only part of one side. This round only supplements the first two rounds of rotation; on the one hand, a small number of images are added for the correction of laser speckle interference, and on the other hand, it serves as an effective supplement when the images from the first two rounds of the target object are insufficient.
- the above-mentioned adjustment method of the target object is only an example of this embodiment, and may also be adjusted according to the timing requirements, or a plurality of different target objects may be configured, which is not limited here.
- after executing step S910 shown in FIG. 9 or step S1010 shown in FIG. 10, the method also includes but is not limited to the following steps:
- Step S1110 filtering out the laser speckle information in the images of the third image set.
- The images captured by the second infrared camera 420 contain laser speckle information, which needs to be filtered out by denoising. For example, to overcome the multiplicative distortion caused by speckle, the noise can be converted into an additive model using the natural logarithm during iteration. The image is then converted from RGB space to Hue-Lightness-Saturation (HLS) space and the red hue range is extracted, after which the image is converted back to RGB space and further to gray space. Histogram equalization and filtering are performed, and the corner points are calculated to generate a checkerboard image, which is then used for calibration.
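Two of the steps mentioned above, the log-domain treatment of multiplicative speckle and histogram equalization, can be sketched as follows. This is a minimal numpy-only illustration, not the patent's implementation; the 5x5 box filter and the function names are assumptions:

```python
import numpy as np

def despeckle_log_domain(gray, kernel=5):
    """Suppress multiplicative speckle: log -> box filter -> exp.

    In the log domain, multiplicative noise I = S * n becomes additive
    (log I = log S + log n), so a simple linear filter can average it out.
    """
    log_img = np.log1p(gray.astype(np.float64))
    pad = kernel // 2
    padded = np.pad(log_img, pad, mode="edge")
    acc = np.zeros_like(log_img)
    h, w = log_img.shape
    for dy in range(kernel):
        for dx in range(kernel):
            acc += padded[dy:dy + h, dx:dx + w]
    return np.clip(np.expm1(acc / kernel**2), 0, 255).astype(np.uint8)

def equalize_hist(gray):
    """Plain histogram equalization for a non-constant 8-bit gray image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value of the darkest gray level present
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]
```

After these steps, corner detection on the equalized image (for example, a standard checkerboard-corner detector) would produce the calibration pattern the passage refers to.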
- After step S1030 shown in FIG. 10 is executed, the method further includes, but is not limited to, the following steps:
- Step S1210: obtain calibration parameters, where the calibration parameters include a first calibration parameter of the first infrared camera, a second calibration parameter of the second infrared camera, and a third calibration parameter of the RGB camera;
- Step S1220: perform global calibration among the first infrared camera, the second infrared camera, and the RGB camera according to the calibration parameters.
- The three cameras need to be globally calibrated in order to work cooperatively. Based on this, their respective calibration parameters need to be calculated; the calibration parameters usually include intrinsic parameters, distortion coefficients, extrinsic parameters, and an image scale factor.
- The intrinsic parameters and distortion coefficients can be obtained through fused calculations over one or more rounds of rotation of the target object captured by the three cameras. How to calculate these parameters is familiar to those skilled in the art, so no further details are given here.
- Calculating the extrinsic parameters requires the three cameras to photograph simultaneously; that is, the three cameras need to photograph the same target object in a static state at the same moment, and then perform feature calculations on their respective images. Extrinsic parameter calculation is a technique well known to those skilled in the art, and details are not repeated here.
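For illustration, once the simultaneous shot yields each camera's world-to-camera extrinsics (R, t), the relative pose between any pair of the three cameras follows from a standard composition. The function name is ours; the math is the usual rigid-transform identity:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Relative pose of camera 1 expressed in camera 2's frame.

    Given world-to-camera extrinsics x_c = R @ x_w + t for each camera,
    points transform between cameras via x_2 = R_rel @ x_1 + t_rel.
    """
    R_rel = R2 @ R1.T          # rotation from camera-1 frame to camera-2 frame
    t_rel = t2 - R_rel @ t1    # translation between the two optical centers
    return R_rel, t_rel
```

Applying this pairwise (IR1 to RGB, IR2 to RGB) gives a consistent set of relative poses for the global calibration of step S1220.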
- The image scale factor represents the difference in how a spatial object is imaged in the two images, caused by the offset between the optical centers of the infrared camera and the RGB camera and by the difference between the infrared and visible-light focal lengths. The image scale factor can therefore be obtained by comparing the infrared and RGB pixel differences in the two-dimensional calibration images captured of a side of the target object, so that the size of a spatial object can be unified between the infrared and RGB images. In this way, the infrared and RGB images can be aligned.
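One simple way to realize the pixel-difference comparison described above is to measure matched calibration-pattern corners in both images and take the ratio of their pixel extents. The function name and the centroid-distance averaging scheme are illustrative assumptions, not the patent's exact formula:

```python
import numpy as np

def image_scale_factor(corners_ir, corners_rgb):
    """Estimate the IR-to-RGB image scale factor from matched corners.

    Both arrays hold (N, 2) pixel coordinates of the SAME physical
    checkerboard corners, one set per camera. The ratio of mean
    distances from the centroid approximates the scale difference
    caused by the focal-length and optical-center mismatch.
    """
    def mean_span(pts):
        pts = np.asarray(pts, dtype=float)
        return np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()

    return mean_span(corners_rgb) / mean_span(corners_ir)
```

Scaling infrared pixel coordinates by this factor brings object sizes in the two images into agreement, after which the infrared and RGB images can be aligned.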
- the terminal 1400 includes: a memory 1410 , a processor 1420 , and a computer program stored in the memory 1410 and operable on the processor 1420 .
- the processor 1420 and the memory 1410 may be connected through a bus or in other ways.
- The non-transitory software programs and instructions required to realize the calibration method of the above embodiment are stored in the memory 1410; when executed by the processor 1420, the calibration method in the above embodiment is executed, for example, method steps S810 to S830 in FIG. 8, method steps S910 to S930 in FIG. 9, method steps S1010 to S1030 in FIG. 10, method step S1110 in FIG. 11, and method steps S1210 to S1220 in FIG. 12 described above.
- The device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- An embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions. When the computer-executable instructions are executed by a processor or a controller, for example, by a processor in the terminal embodiment described above, the processor can be caused to execute the calibration method in the above embodiment, for example, method steps S810 to S830 in FIG. 8, method steps S910 to S930 in FIG. 9, method steps S1010 to S1030 in FIG. 10, method step S1110 in FIG. 11, and method steps S1210 to S1220 in FIG. 12.
- Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
- Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
- The embodiment of the present application includes: an RGB component, the RGB component including an RGB camera and a first lens, with a first optical channel connected between the first lens and the RGB camera; an infrared emitting component, the infrared emitting component including an infrared floodlight irradiator and a second lens, with a second optical channel connected between the infrared floodlight irradiator and the second lens, the second lens being adjacent to the first lens; and an infrared receiving component, the infrared receiving component including a first infrared camera and a third lens, with a third optical channel connected between the first infrared camera and the third lens, the third lens being adjacent to the first lens.
- The RGB camera, the infrared floodlight irradiator, and the first infrared camera can be arranged separately from their lenses, with light transmitted through the optical channels, so that the layout of the multiple lenses under the screen can be more compact. This effectively reduces the area that must be specially treated to improve light transmittance, effectively improves the display effect of the full screen, and improves the user experience.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Claims (12)
- A three-dimensional recognition device, disposed on the inner side of a display screen of a terminal, the three-dimensional recognition device comprising: a red-green-blue (RGB) component, the RGB component comprising an RGB camera and a first lens, a first optical channel being connected between the first lens and the RGB camera; an infrared emitting component, the infrared emitting component comprising an infrared floodlight irradiator and a second lens, a second optical channel being connected between the infrared floodlight irradiator and the second lens, the second lens being adjacent to the first lens; and an infrared receiving component, the infrared receiving component comprising a first infrared camera and a third lens, a third optical channel being connected between the first infrared camera and the third lens, the third lens being adjacent to the first lens.
- The three-dimensional recognition device according to claim 1, wherein: a first reflector is further disposed in the second optical channel, the first reflector being configured to reflect infrared light emitted by the infrared floodlight irradiator to the second lens; and a second reflector is further disposed in the third optical channel, the second reflector being configured to reflect light incident through the third lens to the first infrared camera.
- The three-dimensional recognition device according to claim 2, wherein: the infrared emitting component further comprises an infrared dot-matrix projector and a fourth lens, a fourth optical channel being connected between the infrared dot-matrix projector and the fourth lens, the fourth lens being adjacent to the first lens; and a third reflector is further disposed in the fourth optical channel, the third reflector being configured to reflect infrared light emitted by the infrared dot-matrix projector to the fourth lens.
- The three-dimensional recognition device according to claim 3, wherein: the infrared receiving component further comprises a second infrared camera and a fifth lens, a fifth optical channel being connected between the second infrared camera and the fifth lens, the fifth lens being adjacent to the first lens; and a fourth reflector is further disposed in the fifth optical channel, the fourth reflector being configured to reflect light incident through the fifth lens to the second infrared camera.
- A terminal, comprising: the three-dimensional recognition device according to any one of claims 1 to 4; and a display screen, the three-dimensional recognition device being disposed on the inner side of the display screen, wherein the region of the display screen corresponding to the lenses of the three-dimensional recognition device is a light-transmittance-enhanced region.
- A calibration method, applied to a three-dimensional recognition device, the three-dimensional recognition device comprising an RGB component, an infrared emitting component and an infrared receiving component, wherein the RGB component comprises an RGB camera and a first lens, a first optical channel is connected between the first lens and the RGB camera, the infrared emitting component comprises an infrared floodlight irradiator and a second lens, a second optical channel is connected between the infrared floodlight irradiator and the second lens, the second lens is adjacent to the first lens, the infrared receiving component comprises a first infrared camera and a third lens, a third optical channel is connected between the first infrared camera and the third lens, and the third lens is adjacent to the first lens; the calibration method comprising: in a case where the infrared floodlight irradiator is in a working state, capturing a first image set through the first infrared camera and capturing a second image set through the RGB camera according to a first preset time sequence, wherein the target objects captured by the first infrared camera at different moments are different, the target objects captured by the RGB camera at different moments are different, and the target objects captured by the first infrared camera and the RGB camera at the same moment are the same; performing calibration between the first infrared camera and the infrared floodlight irradiator according to the first image set; and fusing images captured at the same moment in the first image set and the second image set to obtain a first fused image set, and performing calibration between the RGB camera and the first infrared camera according to the first fused image set.
- The method according to claim 6, wherein the infrared emitting component further comprises an infrared dot-matrix projector and a fourth lens, a fourth optical channel is connected between the infrared dot-matrix projector and the fourth lens, and the fourth lens is adjacent to the first lens; the method further comprising: in a case where the infrared dot-matrix projector is in a working state, capturing a third image set through the first infrared camera and capturing a fourth image set through the RGB camera according to a second preset time sequence, wherein the target objects captured by the first infrared camera at different moments are different, the target objects captured by the RGB camera at different moments are different, and the target objects captured by the first infrared camera and the RGB camera at the same moment are the same; performing calibration between the first infrared camera and the infrared dot-matrix projector according to the third image set; and fusing images captured at the same moment in the third image set and the fourth image set to obtain a second fused image set, and performing calibration between the RGB camera and the first infrared camera according to the second fused image set.
- The method according to claim 6, wherein the infrared receiving component further comprises a second infrared camera and a fifth lens, a fifth optical channel is connected between the second infrared camera and the fifth lens, and the fifth lens is adjacent to the first lens; the method further comprising: in a case where the infrared dot-matrix projector is in a working state, capturing a third image set through the second infrared camera and capturing a fourth image set through the RGB camera according to a second preset time sequence, wherein the target objects captured by the second infrared camera at different moments are different, the target objects captured by the RGB camera at different moments are different, and the target objects captured by the first infrared camera and the RGB camera at the same moment are the same; performing calibration between the second infrared camera and the infrared dot-matrix projector according to the third image set; and fusing images captured at the same moment in the third image set and the fourth image set to obtain a second fused image set, and performing calibration between the RGB camera and the second infrared camera according to the second fused image set.
- The method according to claim 7 or 8, wherein after the capturing of the fourth image set through the RGB camera, the method further comprises: filtering out laser speckle information in the images of the third image set.
- The method according to claim 8, wherein after the performing of the calibration between the RGB camera and the second infrared camera according to the second fused image set, the method further comprises: obtaining calibration parameters, the calibration parameters comprising a first calibration parameter of the first infrared camera, a second calibration parameter of the second infrared camera, and a third calibration parameter of the RGB camera; and performing global calibration among the first infrared camera, the second infrared camera and the RGB camera according to the calibration parameters.
- A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the calibration method according to any one of claims 6 to 10.
- A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to execute the calibration method according to any one of claims 6 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22902984.8A EP4436219A1 (en) | 2021-12-07 | 2022-09-30 | Three-dimensional recognition device, terminal, calibration method, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111516186.2 | 2021-12-07 | ||
CN202111516186.2A CN116249069A (zh) | 2021-12-07 | 2021-12-07 | 三维识别装置、终端、标定方法、存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023103559A1 true WO2023103559A1 (zh) | 2023-06-15 |
Family
ID=86628285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/123544 WO2023103559A1 (zh) | 2021-12-07 | 2022-09-30 | 三维识别装置、终端、标定方法、存储介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4436219A1 (zh) |
CN (1) | CN116249069A (zh) |
WO (1) | WO2023103559A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106200249A (zh) * | 2016-08-30 | 2016-12-07 | 辽宁中蓝电子科技有限公司 | 结构光和rgb传感器模组整体式集成系统3d相机 |
US20170186166A1 (en) * | 2015-12-26 | 2017-06-29 | Intel Corporation | Stereo depth camera using vcsel with spatially and temporally interleaved patterns |
CN108564613A (zh) * | 2018-04-12 | 2018-09-21 | 维沃移动通信有限公司 | 一种深度数据获取方法及移动终端 |
CN111083453A (zh) * | 2018-10-18 | 2020-04-28 | 中兴通讯股份有限公司 | 一种投影装置、方法及计算机可读存储介质 |
CN112927307A (zh) * | 2021-03-05 | 2021-06-08 | 深圳市商汤科技有限公司 | 一种标定方法、装置、电子设备及存储介质 |
US20210266387A1 (en) * | 2020-02-21 | 2021-08-26 | Lg Electronics Inc. | Mobile terminal |
- 2021
- 2021-12-07 CN CN202111516186.2A patent/CN116249069A/zh active Pending
- 2022
- 2022-09-30 WO PCT/CN2022/123544 patent/WO2023103559A1/zh active Application Filing
- 2022-09-30 EP EP22902984.8A patent/EP4436219A1/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170186166A1 (en) * | 2015-12-26 | 2017-06-29 | Intel Corporation | Stereo depth camera using vcsel with spatially and temporally interleaved patterns |
CN106200249A (zh) * | 2016-08-30 | 2016-12-07 | 辽宁中蓝电子科技有限公司 | 结构光和rgb传感器模组整体式集成系统3d相机 |
CN108564613A (zh) * | 2018-04-12 | 2018-09-21 | 维沃移动通信有限公司 | 一种深度数据获取方法及移动终端 |
CN111083453A (zh) * | 2018-10-18 | 2020-04-28 | 中兴通讯股份有限公司 | 一种投影装置、方法及计算机可读存储介质 |
US20210266387A1 (en) * | 2020-02-21 | 2021-08-26 | Lg Electronics Inc. | Mobile terminal |
CN112927307A (zh) * | 2021-03-05 | 2021-06-08 | 深圳市商汤科技有限公司 | 一种标定方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP4436219A1 (en) | 2024-09-25 |
CN116249069A (zh) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10897609B2 (en) | Systems and methods for multiscopic noise reduction and high-dynamic range | |
WO2018161466A1 (zh) | 深度图像获取系统和方法 | |
CN112634374B (zh) | 双目相机的立体标定方法、装置、系统及双目相机 | |
CN110572630B (zh) | 三维图像拍摄系统、方法、装置、设备以及存储介质 | |
WO2019100933A1 (zh) | 用于三维测量的方法、装置以及系统 | |
CN108765542B (zh) | 图像渲染方法、电子设备和计算机可读存储介质 | |
CN110111262A (zh) | 一种投影仪畸变校正方法、装置和投影仪 | |
US20200007736A1 (en) | Exposure Control Method, Exposure Control Device and Electronic Device | |
US8334893B2 (en) | Method and apparatus for combining range information with an optical image | |
US20170134713A1 (en) | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
JP4115801B2 (ja) | 3次元撮影装置 | |
CN107241592B (zh) | 一种成像设备及成像方法 | |
CN107370951B (zh) | 图像处理系统及方法 | |
JP2014522591A (ja) | 角スライス実像3dディスプレイのためのアライメント、キャリブレーション、およびレンダリングのシステムおよび方法 | |
JP2017220780A (ja) | 撮影装置および車両 | |
WO2022142139A1 (zh) | 投影面选取和投影图像校正方法、装置、投影仪及介质 | |
JP7489253B2 (ja) | デプスマップ生成装置及びそのプログラム、並びに、デプスマップ生成システム | |
CN111757086A (zh) | 有源双目相机、rgb-d图像确定方法及装置 | |
CN108322726A (zh) | 一种基于双摄像头的自动对焦方法 | |
JP2020191624A (ja) | 電子機器およびその制御方法 | |
EP4443379A1 (en) | Three-dimensional recognition apparatus, terminal, image enhancement method and storage medium | |
WO2023103559A1 (zh) | 三维识别装置、终端、标定方法、存储介质 | |
US12069227B2 (en) | Multi-modal and multi-spectral stereo camera arrays | |
CN113301321A (zh) | 成像方法、系统、装置、电子设备及可读存储介质 | |
JP4543821B2 (ja) | 3次元形状測定装置および方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22902984 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2024529399 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022902984 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2022902984 Country of ref document: EP Effective date: 20240619 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |