WO2023124223A1 - Multi-camera system calibration method, apparatus, system, electronic device, storage medium and computer program product - Google Patents

Multi-camera system calibration method, apparatus, system, electronic device, storage medium and computer program product

Info

Publication number
WO2023124223A1
WO2023124223A1 PCT/CN2022/117908
Authority
WO
WIPO (PCT)
Prior art keywords: camera, information, sub, target, difference
Application number: PCT/CN2022/117908
Other languages: English (en), French (fr)
Inventor: 黄雷
Original Assignee: 上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 上海商汤智能科技有限公司
Publication of WO2023124223A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics

Definitions

  • The embodiments of the present disclosure are based on the Chinese patent application with application number 202111624552.6, filed on December 28, 2021 and entitled "Multi-camera system calibration method, device, system, electronic equipment and storage medium", and claim the priority of that Chinese patent application, the entire content of which is hereby incorporated by reference into the present disclosure.
  • The present disclosure relates to, but is not limited to, the technical field of image processing, and in particular to a multi-camera system calibration method, device, system, electronic equipment, storage medium and computer program product.
  • Multi-camera systems are often used in computer vision. For applications such as 3D reconstruction, motion capture, and multi-viewpoint video, a multi-camera system composed of various cameras, light sources, and storage devices is often required.
  • In the related art, however, the reconstruction accuracy of the 3D model obtained with a multi-camera system is not high, and the post-repair workload is heavy.
  • Embodiments of the present disclosure provide a multi-camera system calibration method, device, system, electronic device, storage medium, and computer program product.
  • An embodiment of the present disclosure provides a method for calibrating a multi-camera system, including:
  • In the method, camera parameters are adjusted by having multiple cameras shoot the target object at the same time, and multiple pictures taken from multiple angles can be used to adjust the parameters of multiple cameras simultaneously. This not only improves the efficiency of camera parameter adjustment, but also makes it possible to accurately adjust the parameters of the cameras in the multi-camera system, so that the accuracy of parameters such as the position and angle of each camera after adjustment is improved. This in turn is conducive to improving the accuracy of 3D modeling from the images captured by each camera in the multi-camera system and reduces the workload of post-repair of the 3D model, thereby improving the user experience.
  • An embodiment of the present disclosure also provides a multi-camera system calibration device, including:
  • the image acquisition part is configured to acquire multiple images obtained by simultaneously shooting the target object with at least some of the cameras in the multi-camera system;
  • the modeling part is configured to construct a point cloud model of the target object based on the plurality of images;
  • the adjusting part is configured to adjust camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.
  • An embodiment of the present disclosure also provides a multi-camera system calibration system, including multiple cameras, camera parameter adjustment components, and a target object; the target object is set within the target shooting field of view formed by the multiple cameras;
  • At least some of the cameras in the plurality of cameras are used to capture images of the target object at the same time, and send the captured images to the camera parameter adjustment component;
  • the camera parameter adjustment component is used to adjust the parameters of at least one camera in the plurality of cameras according to the above multi-camera system calibration method.
  • An embodiment of the present disclosure also provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps in the above multi-camera system calibration method.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps in the above multi-camera system calibration method are executed.
  • An embodiment of the present disclosure provides a computer program product, which includes a computer program or instructions; when the computer program or instructions are run on an electronic device, the electronic device executes the steps in the above multi-camera system calibration method.
  • FIG. 1 is a schematic diagram of a multi-camera system provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of the implementation flow of a multi-camera system calibration method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of multiple cameras shooting a target object according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an implementation flow of another multi-camera system calibration method provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of the composition and structure of a multi-camera system calibration device provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of the composition and structure of a multi-camera system calibration system provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram of a multi-camera system provided by an embodiment of the present disclosure.
  • Multiple cameras in the multi-camera system 101 shoot an object from multiple angles, and the multiple pictures taken are then used to construct a color 3D model and generate a point cloud model 102 corresponding to the object.
  • This technique is becoming more and more common.
  • Multiple cameras shoot objects from multiple angles, the field of view can be superimposed, and the data is seamlessly fused.
  • A synchronized multi-camera rig can even shoot deformable objects (such as human bodies or animals), so as to record the object at every moment. However, there will be slight differences in camera parameters between the multiple cameras.
  • Tolerances in the design and manufacture of multi-camera systems may cause the cameras not to be positioned at the correct position or at the correct angle; for example, the cameras may be laterally or vertically misaligned relative to each other. These defects may cause serious problems in the images generated by the multi-camera system, such as excessive color differences between different parts, severe distortion, or double vision, which require extensive manual repair at a later stage or may even be unrepairable, seriously degrading the user experience.
  • The multiple cameras of a multi-camera system may thus not be positioned or oriented as intended by the design, in which case the actual positions of the cameras and their rotations relative to the design are unknown. This can lead to visual artifacts when combining images captured by multiple cameras; for example, the overlapping region of images generated by two cameras may show color mottling.
  • In view of this, the present disclosure provides a method for calibrating a multi-camera system, which adjusts camera parameters by having multiple cameras photograph a target object at the same time, and can adjust the parameters of multiple cameras simultaneously using multiple pictures taken from multiple angles. This not only improves the efficiency of camera parameter adjustment, but also allows the parameters of the cameras in the multi-camera system to be adjusted accurately, so that the accuracy of parameters such as the position and angle of each camera after adjustment is improved, which is conducive to improving the accuracy of 3D modeling from the images captured by each camera in the multi-camera system and reduces the workload of post-repair 3D modeling, thereby improving the user experience.
  • The method for calibrating a multi-camera system can be performed by an electronic device, where the electronic device can be a notebook computer, tablet computer, desktop computer, set-top box, mobile device (for example, a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device) or another type of terminal, and can also be implemented as a server.
  • The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services such as mail service, domain name service and security service, content delivery network (Content Delivery Network, CDN) services, and big data and artificial intelligence platforms.
  • FIG. 2 is a schematic diagram of the implementation flow of a multi-camera system calibration method provided by an embodiment of the present disclosure.
  • the multi-camera system calibration method provided by the present disclosure includes the following steps S210 to S230, wherein:
  • Step S210 acquiring multiple images obtained by capturing the target object simultaneously with at least some cameras in the multi-camera system.
  • FIG. 3 is a schematic diagram of multiple cameras shooting a target object provided by an embodiment of the present disclosure. As shown in FIG. 3, the system may include multiple cameras with different shooting angles, for example 4 cameras: camera 1, camera 2, camera 3 and camera 4. The 4 cameras shoot the target object from different angles, so that images covering different angles of the target object can be obtained, and fusing the four images yields an omnidirectional image of the target object. That is, different cameras in the multi-camera system have different shooting angles on the target object, and the target shooting angle obtained by superimposing the shooting angles of the respective cameras serves as the omnidirectional shooting angle corresponding to the target object. This ensures that the multi-camera system can capture the complete target object, which is beneficial to improving the accuracy of camera calibration.
  • The above-mentioned target object may be an object with multiple surfaces, where different surfaces have different information such as position and color; adjusting camera parameters using the multiple kinds of information of such a multi-surface target object can improve the accuracy of the adjustment.
  • the above-mentioned target object can include at least one sub-block.
  • Camera parameter adjustment can then be made using multiple kinds of information, such as the position and color of each sub-block, so that more information is available for the adjustment, thereby effectively improving its accuracy.
  • the above multiple images may be taken by multiple cameras at the same time, and the multiple images can reflect the parameter information of the corresponding multiple cameras at the shooting time, such as shooting position, shooting angle, shooting aperture, etc.
  • Using multiple images captured at the same time can simultaneously adjust the parameters of multiple cameras that capture the images.
  • the distance between each camera and the target object satisfies a preset condition; wherein the preset condition may include but is not limited to that each camera forms a sphere or a ring for the shooting field of view of the target object, and The target object is located at a preset position within the sphere or ring, so that the similarity of the light reflected by the target object to each camera is higher than the preset similarity.
  • In some embodiments, the target object is located at the center of the sphere or ring, so that the light used by each camera to capture the image of the target object is relatively consistent and the target object appears relatively consistent in color, structure, texture, and shape, which is conducive to improving the accuracy of calibration.
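  • As a rough illustration of this preset condition, the following minimal sketch (with hypothetical camera positions and a hypothetical tolerance, not taken from the disclosure) checks whether a set of cameras lies approximately on a sphere or ring of constant radius around the target object:

```python
import math

def satisfies_sphere_condition(camera_positions, target, tolerance=0.05):
    """Check that every camera is approximately equidistant from the target,
    i.e. the cameras form a sphere or ring around it (illustrative check)."""
    distances = [math.dist(pos, target) for pos in camera_positions]
    mean_d = sum(distances) / len(distances)
    # Every distance must stay within `tolerance` (relative) of the mean radius.
    return all(abs(d - mean_d) / mean_d <= tolerance for d in distances)

# Four cameras placed on a ring of radius 2 around the origin.
ring = [(2, 0, 0), (0, 2, 0), (-2, 0, 0), (0, -2, 0)]
print(satisfies_sphere_condition(ring, (0, 0, 0)))                     # True
# One camera pushed off the ring violates the condition.
print(satisfies_sphere_condition(ring[:3] + [(0, -3, 0)], (0, 0, 0)))  # False
```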
  • Step S220 building a point cloud model of the target object based on the multiple images.
  • multiple images can be processed by photogrammetry tool software to obtain a colored point cloud three-dimensional model, that is, the above-mentioned point cloud model.
  • Step S230 based on the difference information between the point cloud model and the target object, adjust the camera parameters of at least one camera in the multi-camera system.
  • The above-mentioned difference information may include differences in size and/or color between the point cloud model and the real target object. This difference information can reflect the deviation of the camera that captured the images in shooting position, shooting angle, and shooting aperture; therefore, adjusting the camera parameters of the cameras in the multi-camera system using the above difference information can effectively improve the adjustment accuracy.
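  • The overall flow of steps S210 to S230 can be sketched with a toy model in which each camera's parameter error is reduced to a single number and the photogrammetry step simply passes that error through; the `Camera` class and helper functions below are illustrative stand-ins, not components of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Toy stand-in for a camera: `offset` models its parameter error."""
    offset: float
    def capture(self):
        # A real camera would return an image; here the "image" is simply
        # the error the camera would imprint on the reconstruction.
        return self.offset
    def adjust(self, correction):
        self.offset -= correction

def build_point_cloud(images):
    # Placeholder for the photogrammetry step (S220): in this toy model the
    # per-camera reconstruction deviation equals the captured offset.
    return list(images)

def calibrate_round(cameras, threshold=0.01):
    """One round of S210-S230: capture, model, adjust over-threshold cameras."""
    images = [cam.capture() for cam in cameras]    # S210: shoot simultaneously
    model = build_point_cloud(images)              # S220: build the point cloud
    for cam, deviation in zip(cameras, model):     # S230: adjust cameras whose
        if abs(deviation) > threshold:             # deviation exceeds threshold
            cam.adjust(deviation)

cams = [Camera(0.0), Camera(0.3), Camera(-0.2)]
calibrate_round(cams)
print([round(c.offset, 6) for c in cams])  # [0.0, 0.0, 0.0]
```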
  • the above difference information may include but not limited to size difference sub-information between the point cloud model and the target object. Before step S230, it is also necessary to determine the above size difference sub-information:
  • step S230 may include step S231, wherein:
  • Step S231 based on the size difference sub-information between the point cloud model and the target object, perform camera parameter adjustment on at least one camera in the multi-camera system.
  • the size difference sub-information can reflect the deviation of at least some cameras in the multi-camera system in shooting positions and shooting angles. Therefore, the subsequent use of the size difference sub-information to adjust the camera parameters of the cameras in the multi-camera system can effectively improve the adjustment accuracy. .
  • step S231 includes steps S241 to S243, wherein:
  • Step S241 acquiring a size difference threshold
  • Step S242 based on the size difference sub-information between the point cloud model and the target object, and the size difference threshold, determine the first target camera in the multi-camera system that needs parameter adjustment;
  • Step S243 adjusting a first preset parameter of the first target camera based on the size difference sub-information.
  • the aforementioned size difference threshold may be set according to specific requirements and scenarios, for example, set to 0.01. This disclosure does not limit it.
  • Using the preset size difference threshold, it can be accurately determined whether the constructed point cloud model is obviously inaccurate in size, that is, whether the cameras in the multi-camera system need their parameters adjusted, and the cameras whose parameters need to be adjusted, that is, the first target cameras, can be accurately screened out. Subsequently adjusting the parameters of the first target cameras according to the size difference sub-information can not only improve the adjustment efficiency, but also accurately adjust the parameters of precisely those cameras that need adjustment, which is beneficial to improving the accuracy of parameter adjustment.
  • the target object includes at least one sub-block;
  • The standard size information includes standard size sub-information corresponding to each sub-block; for example, the standard size information includes the length standard value, width standard value and height standard value of each sub-block.
  • the model size information includes model size sub-information corresponding to each sub-block, for example, the model size information includes a model length value, a model width value, and a model height value of each sub-block.
  • The size difference sub-information includes first difference sub-information corresponding to each sub-block, where the first difference sub-information is used to represent the difference information between the standard size sub-information and the model size sub-information of the corresponding sub-block.
  • The following formula (1-1) can be used to determine the first difference sub-information corresponding to a certain sub-block, where D1 represents the above-mentioned first difference sub-information; x1, y1 and z1 represent the length, width and height standard values respectively; and x2, y2 and z2 represent the model length, width and height values respectively.
  • a target object including at least one sub-block has multiple surfaces, each surface or each block has size information, and camera parameters are adjusted using multiple size information, which is beneficial to improving the accuracy of parameter adjustment.
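  • Since the body of formula (1-1) is not reproduced in this text, the sketch below assumes the natural reading of the variable definitions, namely the Euclidean distance between the standard and model dimensions of a sub-block; the assumed formula and the example values are illustrative only:

```python
import math

def first_difference(standard, model):
    """Size difference sub-information D1 for one sub-block.

    `standard` = (x1, y1, z1): standard length/width/height values;
    `model`    = (x2, y2, z2): model length/width/height values.
    Euclidean distance is an assumed, illustrative form of formula (1-1).
    """
    (x1, y1, z1), (x2, y2, z2) = standard, model
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)

# A 19 mm cube sub-block reconstructed 0.3 mm short in one dimension.
print(round(first_difference((19.0, 19.0, 19.0), (18.7, 19.0, 19.0)), 3))  # 0.3
```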
  • step S242 includes steps S2421 to S2422, wherein:
  • Step S2421 taking the sub-block whose difference value corresponding to the first difference sub-information is larger than the size difference threshold as the first target sub-block;
  • Step S2422 Determine the camera corresponding to the first target sub-block, and use the determined camera as the first target camera corresponding to the first target sub-block.
  • step S243 includes step S2431, wherein:
  • Step S2431 based on the first difference sub-information corresponding to the first target sub-block, adjust the first preset parameter of the first target camera corresponding to the first target sub-block.
  • In this way, the sub-blocks with larger size deviations, that is, the above-mentioned first target sub-blocks, can be accurately screened out. Then, using the first difference sub-information corresponding to the screened first target sub-blocks to adjust the parameters of the cameras corresponding to those sub-blocks can not only improve the adjustment efficiency, but also accurately adjust the parameters of the relevant cameras, thereby helping to improve the accuracy of the parameter adjustment.
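  • Steps S2421 and S2422 amount to a threshold filter over the per-sub-block differences followed by a lookup from sub-block to observing cameras. A minimal sketch, in which the sub-block names, difference values and sub-block-to-camera mapping are all hypothetical:

```python
def select_first_targets(diff_by_block, block_to_cameras, size_threshold):
    """Steps S2421/S2422: pick sub-blocks whose first difference
    sub-information exceeds the size difference threshold, then collect
    the cameras that observe them (the mapping is supplied by the caller)."""
    target_blocks = [b for b, d in diff_by_block.items() if d > size_threshold]
    target_cameras = sorted({cam for b in target_blocks
                             for cam in block_to_cameras[b]})
    return target_blocks, target_cameras

diffs = {"block_a": 0.002, "block_b": 0.04, "block_c": 0.015}
mapping = {"block_a": [1], "block_b": [2, 3], "block_c": [3, 4]}
blocks, cams = select_first_targets(diffs, mapping, size_threshold=0.01)
print(blocks, cams)  # ['block_b', 'block_c'] [2, 3, 4]
```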
  • the first preset parameter may include but not limited to the shooting position and shooting angle of the corresponding camera.
  • the first preset parameter of the corresponding first target camera may be manually adjusted according to the first difference sub-information.
  • the step S2431 includes steps S251 to S253, wherein:
  • Step S251 based on the first difference sub-information corresponding to the first target sub-block, determine the first target parameter information such as the target shooting position and target shooting angle of the corresponding first target camera;
  • Step S252 according to the parameter information such as the current shooting position and shooting angle of the first target camera and the above-mentioned first target parameter information, determine the first parameter adjustment information such as position adjustment information, angle adjustment information, etc.;
  • Step S253 adjusting first preset parameters such as shooting position and shooting angle of the first target camera according to the above-mentioned first parameter adjustment information.
  • step S230 includes step S261 to step S263, wherein:
  • Step S261 after adjusting the first preset parameter of the first target camera based on the size difference sub-information, reacquire multiple new images obtained by at least some of the cameras in the multi-camera system shooting the target object simultaneously;
  • Step S262 rebuilding a new point cloud model of the target object based on the multiple new images, and determining new difference information between the new point cloud model and the target object;
  • Step S263 when it is determined based on the size difference threshold that the new size difference sub-information in the new difference information does not meet the first preset condition, perform camera parameter adjustment on at least one camera in the multi-camera system based on the new size difference sub-information.
  • That the new size difference sub-information in the above-mentioned new difference information does not meet the first preset condition may include, but is not limited to, that there is at least one sub-block whose new first difference sub-information has a difference value greater than the size difference threshold.
  • the fact that the new size difference sub-information in the new difference information satisfies the first preset condition may include, but is not limited to, that the difference values of the new first difference sub-information corresponding to all sub-blocks are smaller than the size difference threshold.
  • The implementation of adjusting the camera parameters of at least one camera in the multi-camera system based on the new size difference sub-information is the same as that of adjusting the camera parameters of at least one camera in the multi-camera system based on the size difference sub-information in the above embodiment.
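  • Steps S261 to S263 form an iterative loop: re-shoot, re-model, and re-adjust until every deviation falls below the threshold (the first preset condition). The sketch below reduces each camera's error to a single number and models an imperfect (partial) correction per round; all names and values are illustrative:

```python
def calibrate_until_converged(offsets, threshold=0.01, gain=0.8, max_rounds=50):
    """Iterate steps S261-S263: while any per-camera deviation exceeds the
    size difference threshold, adjust and re-measure. `offsets` is a toy
    stand-in for per-camera parameter errors; `gain` models an imperfect
    correction applied each round."""
    rounds = 0
    while any(abs(o) > threshold for o in offsets) and rounds < max_rounds:
        # Re-shooting and rebuilding the point cloud (S261/S262) would happen
        # here; in the toy model the measured deviation is the offset itself.
        offsets = [o - gain * o for o in offsets]   # S263: partial correction
        rounds += 1
    return offsets, rounds

final, n = calibrate_until_converged([0.3, -0.2, 0.005])
print(n, all(abs(o) <= 0.01 for o in final))  # converges in a few rounds: True
```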
  • After that, second preset parameters, such as the shooting aperture of the camera, can be further adjusted.
  • The step of adjusting the first preset parameter and the step of adjusting the second preset parameter are two independent and unrelated steps. The adjustment of the second preset parameter is described below.
  • the above difference information may include but not limited to color difference sub-information between the point cloud model and the target object.
  • The target object includes at least one sub-block, and the color difference sub-information includes second difference sub-information corresponding to each sub-block, where the second difference sub-information is used to represent the difference between the color information of the corresponding sub-block in the point cloud model and its real color information.
  • A target object including at least one sub-block has multiple surfaces; each surface or block has its own color information, and adjusting camera parameters using multiple pieces of color information is beneficial to improving the accuracy of parameter adjustment.
  • After the color difference sub-information between the point cloud model and the target object is determined, step S230 includes steps S271 to S273, wherein:
  • Step S271 acquiring a color difference threshold
  • Step S272 based on the color difference sub-information between the point cloud model and the target object, and the color difference threshold, determine a second target camera in the multi-camera system that needs parameter adjustment;
  • Step S273 adjusting a second preset parameter of the second target camera based on the color difference sub-information.
  • the above color difference threshold may be set according to specific requirements and scenarios, for example, set to 0.01. This disclosure does not limit it.
  • Using the preset color difference threshold, it can be accurately determined whether the constructed point cloud model is obviously inaccurate in color, that is, whether the cameras in the multi-camera system need their parameters adjusted, and the cameras whose parameters need to be adjusted, that is, the second target cameras, can be accurately screened out.
  • Subsequently adjusting the parameters of the second target cameras according to the color difference sub-information can not only improve the adjustment efficiency, but also accurately adjust the parameters of precisely those cameras that need adjustment, thereby improving the accuracy of parameter adjustment.
  • the target object includes at least one sub-block; the color difference sub-information includes second difference sub-information corresponding to each sub-block; wherein, the second difference sub-information is used to represent the corresponding The difference information between the color information of the sub-block in the point cloud model and the real color information.
  • The color information of a certain sub-block in the point cloud model includes the model RGB value of the sub-block, and the real color information of the sub-block includes its standard RGB value. The second difference sub-information corresponding to a certain sub-block can be determined accordingly, where D2 represents the above-mentioned second difference sub-information; R1, G1 and B1 represent the standard R, G and B values in the standard RGB value respectively; and R2, G2 and B2 represent the model R, G and B values in the model RGB value respectively.
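  • As with formula (1-1), the body of this formula is not reproduced in the text; the sketch below assumes the natural reading of the variable definitions, namely the Euclidean distance between the standard and model RGB values, and the example colors are illustrative only:

```python
import math

def second_difference(standard_rgb, model_rgb):
    """Color difference sub-information D2 for one sub-block.

    `standard_rgb` = (R1, G1, B1), `model_rgb` = (R2, G2, B2).
    Euclidean distance in RGB space is an assumed, illustrative form.
    """
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(standard_rgb, model_rgb)))

# A red cube face reconstructed with a slight color shift.
print(round(second_difference((200, 30, 30), (196, 33, 30)), 3))  # 5.0
```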
  • step S272 includes steps S2721 to S2722, wherein:
  • Step S2721 taking the sub-block whose difference value corresponding to the second difference sub-information is greater than the color difference threshold as the second target sub-block;
  • Step S2722 Determine the camera corresponding to the second target sub-block, and use the determined camera as the second target camera corresponding to the second target sub-block.
  • step S273 includes step S2731, wherein:
  • Step S2731 based on the second difference sub-information corresponding to the second target sub-block, adjust the second preset parameter of the second target camera corresponding to the second target sub-block.
  • Using the color difference threshold, the sub-blocks with larger color deviations, that is, the above-mentioned second target sub-blocks, can be accurately screened out. Then, using the second difference sub-information corresponding to the screened second target sub-blocks to adjust the parameters of the cameras corresponding to those sub-blocks can not only improve the adjustment efficiency, but also accurately adjust the parameters of the relevant cameras, thereby helping to improve the accuracy of the parameter adjustment.
  • the above-mentioned second preset parameters may include but not limited to shooting aperture, shutter speed, sensitivity, etc. of the corresponding camera.
  • the second preset parameter of the corresponding second target camera may be manually adjusted according to the second difference sub-information.
  • step S2731 includes steps S281 to S283, wherein:
  • Step S281 based on the second difference sub-information corresponding to the second target sub-block, determine the corresponding second target parameter information such as target shooting aperture and target sensitivity of the second target camera;
  • Step S282 according to the current shooting aperture, sensitivity and other parameter information of the second target camera and the above-mentioned second target parameter information, determine second parameter adjustment information such as aperture adjustment information and sensitivity adjustment information;
  • Step S283 adjusting second preset parameters such as shooting aperture and sensitivity of the second target camera according to the above-mentioned second parameter adjustment information.
  • step S230 includes steps S291 to S293, wherein:
  • Step S291 after adjusting the second preset parameter of the second target camera based on the color difference sub-information, reacquire multiple new images obtained by at least some of the cameras in the multi-camera system shooting the target object simultaneously;
  • Step S292 rebuilding a new point cloud model of the target object based on the multiple new images, and determining new difference information between the new point cloud model and the target object;
  • Step S293 when it is determined based on the color difference threshold that the new color difference sub-information in the new difference information does not satisfy a second preset condition, based on the new color difference sub-information, The second preset parameters of the second target camera are adjusted.
  • That the new color difference sub-information in the above new difference information does not meet the second preset condition may include, but is not limited to, that there is at least one sub-block whose new second difference sub-information has a difference value greater than the color difference threshold.
  • the new color difference sub-information in the new difference information satisfying the second preset condition may specifically include, but is not limited to, that the difference values of the new second difference sub-information corresponding to all sub-blocks are all smaller than the color difference threshold.
  • The implementation of adjusting the second preset parameter of the second target camera based on the new color difference sub-information is the same as that of adjusting the second preset parameter of the second target camera based on the color difference sub-information in the above embodiment.
  • the above embodiments can perform one or more rounds of parameter adjustment.
  • In an example, the multiple cameras can capture a set of images, from different viewpoints, of a target object located at the center of the multi-camera system, such as a Rubik's Cube. Modeling is performed based on the captured images to obtain a point cloud model, which at the same time yields detailed information about the target object, such as the position, size, shape, color and texture of the Rubik's Cube. The camera parameters are then adjusted according to the detailed information of the target object obtained by modeling and the standard information of the target object. A Rubik's Cube can change shape, so multiple point cloud models can be generated for multiple rounds of verification.
  • The method in the above embodiments quickly obtains the error of each camera parameter and enables precise adjustment of camera parameters; calibration is generally completed correctly within 10 minutes, avoiding the 2 to 20 hours of calibration time typically required by the manual adjustment of the prior art.
  • The multi-camera system calibration method of the present disclosure is described below through a specific embodiment. FIG. 4 is a schematic flowchart of another multi-camera system calibration method provided by an embodiment of the present disclosure; as shown in FIG. 4:
  • Step S1: set up the multiple cameras and the Rubik's Cube.
  • Here, the system comprising the target object (a third-order Rubik's Cube), the multiple cameras, and the parameter-adjustment component is placed at the shooting-range test site. The layout of the test site is the same as that of a conventional multi-angle camera system.
  • The third-order Rubik's Cube is set at the center of the target shooting field of view formed by the multiple cameras, and the multi-camera system surrounds the cube in a spherical or cylindrical arrangement. All cameras are aimed at the Rubik's Cube for image capture.
  • The cameras are connected by network cables, through a router, to the parameter-adjustment component, for example a computer, on which group-control system software, photogrammetry tool software, and calibration system software are installed.
  • The calibration system software stores the standard length x1, standard width y1, and standard height z1 of each sub-block of the Rubik's Cube, as well as the standard R value R1, the standard G value G1, and the standard B value B1 among the standard RGB values of each sub-block.
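  • The built-in standard values can be pictured as one record per sub-block. The following Python sketch is illustrative only; the field names and sample values are assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SubBlockStandard:
    # standard dimensions of one Rubik's Cube sub-block
    x1: float  # standard length
    y1: float  # standard width
    z1: float  # standard height
    # standard face color of the sub-block
    R1: int
    G1: int
    B1: int

# e.g. one 19 mm sub-block with a red face (values are hypothetical)
red_block = SubBlockStandard(x1=19.0, y1=19.0, z1=19.0, R1=196, G1=30, B1=58)
```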
  • Step S2: control at least some of the cameras to photograph the Rubik's Cube simultaneously, obtaining multiple original images.
  • Here, at least some of the cameras in the multi-camera system are controlled through the group-control system software on the computer to shoot at the same time, obtaining multiple original images.
  • Step S3: send the multiple original images to the computer.
  • Here, the original images captured by the different cameras are transferred to the computer through the group-control system software.
  • Step S4: construct a point cloud model of the Rubik's Cube.
  • Here, the multiple original images are processed by the photogrammetry tool software to obtain a colored three-dimensional point cloud model of the Rubik's Cube.
  • Step S5: determine the first difference sub-information D1 of each sub-block.
  • Here, the calibration system software calculates the model length x2, model width y2, and model height z2 of each sub-block in the point cloud model, and combines them with its built-in x1, y1, and z1 to obtain the first difference sub-information D1.
  • Step S6: judge whether D1 is less than or equal to the size difference threshold.
  • Here, if any sub-block's first difference sub-information D1 has a difference value greater than the size difference threshold, the camera corresponding to that sub-block is determined to be a first target camera, and the flow jumps to step S7. If it is determined that there is no first target camera, the flow goes to step S8.
  • Step S7: adjust the first preset parameters, and return to step S2 after the adjustment is completed.
  • Here, first preset parameters of the first target camera, such as its shooting position and shooting direction, are adjusted.
  • Step S8: determine the second difference sub-information D2 of each sub-block.
  • Here, the calibration system software calculates the model RGB values of each sub-block in the point cloud model: the model R value R2, the model G value G2, and the model B value B2. These are combined with the calibration system's built-in R1, G1, and B1 to obtain the second difference sub-information D2.
  • Step S9: judge whether D2 is less than or equal to the color difference threshold.
  • Here, if any sub-block's second difference sub-information has a difference value greater than the color difference threshold, the camera corresponding to that sub-block is determined to be a second target camera, and the flow jumps to step S10. If it is determined that there is no second target camera, the flow goes to step S11.
  • Step S10: adjust the second preset parameters, and return to step S2 after the adjustment is completed.
  • Here, second preset parameters of the second target camera, such as its shooting aperture and sensitivity, are adjusted.
  • Step S11: parameter adjustment is complete.
  • Here, calibration continues until the difference values of the second difference sub-information of all sub-blocks are smaller than the color difference threshold.
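  • Steps S2 through S11 amount to an iterative capture, model, and compare loop. The following Python sketch shows only the control flow; every helper passed in (capture, build_model, and so on) is a placeholder for one of the stages described above, not part of the disclosure:

```python
def calibrate(capture, build_model, measure_d1, measure_d2,
              adjust_pose, adjust_exposure, size_thr, color_thr,
              max_rounds=50):
    """Iterate capture -> model -> compare until every sub-block's
    differences fall below both thresholds (steps S2 to S11)."""
    for _ in range(max_rounds):
        images = capture()                  # step S2/S3: shoot and collect
        model = build_model(images)         # step S4: point cloud model
        d1 = measure_d1(model)              # step S5: size differences
        if any(d > size_thr for d in d1.values()):
            adjust_pose(d1)                 # step S7: position / direction
            continue
        d2 = measure_d2(model)              # step S8: color differences
        if any(d > color_thr for d in d2.values()):
            adjust_exposure(d2)             # step S10: aperture / sensitivity
            continue
        return True                         # step S11: calibration complete
    return False
```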
  • Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible inner logic.
  • Based on the same inventive concept, an embodiment of the present disclosure further provides a multi-camera system calibration device corresponding to the multi-camera system calibration method. Since the principle by which the device in the embodiments of the present disclosure solves the problem is similar to that of the above multi-camera system calibration method, the implementation of the device may refer to the implementation of the method.
  • FIG. 5 is a schematic diagram of the composition and structure of a multi-camera system calibration device provided by an embodiment of the present disclosure. As shown in FIG. 5 , the device includes:
  • the image acquiring part 510 is configured to acquire multiple images of the target object captured by at least some of the cameras in the multi-camera system at the same time.
  • the modeling part 520 is configured to construct a point cloud model of the target object based on the multiple images.
  • the adjusting part 530 is configured to adjust camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.
  • In some embodiments, the difference information includes size difference sub-information between the point cloud model and the target object; the device further includes: a first acquisition part configured to acquire the standard size information of the target object and the model size information of the point cloud model; a first determining part configured to determine the difference information between the standard size information and the model size information to obtain the size difference sub-information; and the adjusting part is further configured to: adjust camera parameters of at least one camera in the multi-camera system based on the size difference sub-information between the point cloud model and the target object.
  • In some embodiments, the adjusting part further includes: a second acquisition part configured to acquire a size difference threshold; a second determining part configured to determine, based on the size difference sub-information between the point cloud model and the target object and the size difference threshold, a first target camera in the multi-camera system that needs parameter adjustment; and a first adjustment part configured to adjust a first preset parameter of the first target camera based on the size difference sub-information.
  • In some embodiments, the target object includes at least one sub-block; the standard size information includes standard size sub-information corresponding to each sub-block; the model size information includes model size sub-information corresponding to each sub-block; and the size difference sub-information includes first difference sub-information corresponding to each sub-block, where the first difference sub-information is used to represent the difference information between the standard size sub-information and the model size sub-information of the corresponding sub-block.
  • In some embodiments, the second determining part is further configured to: take a sub-block whose first difference sub-information has a difference value larger than the size difference threshold as a first target sub-block; and determine the camera corresponding to the first target sub-block, taking the determined camera as the first target camera corresponding to the first target sub-block. The first adjustment part is further configured to: adjust the first preset parameter of the first target camera corresponding to the first target sub-block based on the first difference sub-information corresponding to the first target sub-block.
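  • Selecting the target cameras from the per-sub-block difference values is a filter over a sub-block-to-camera mapping. A hypothetical sketch (the names and data layout are assumptions, not from the disclosure):

```python
def find_target_cameras(diff_by_block, block_to_camera, threshold):
    """Return {camera_id: [sub-block ids]} for every camera whose
    sub-blocks show a difference value above the threshold."""
    targets = {}
    for block, diff in diff_by_block.items():
        if diff > threshold:
            cam = block_to_camera[block]
            targets.setdefault(cam, []).append(block)
    return targets
```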
  • In some embodiments, the adjustment part is further configured to: after the first preset parameter of the first target camera is adjusted based on the size difference sub-information, re-acquire multiple new images of the target object captured simultaneously by at least some of the cameras in the multi-camera system; based on the multiple new images, reconstruct a new point cloud model of the target object and determine new difference information between the new point cloud model and the target object; and, when it is determined based on the size difference threshold that the new size difference sub-information in the new difference information does not meet the first preset condition, adjust camera parameters of at least one camera in the multi-camera system based on the new size difference sub-information.
  • In some embodiments, the difference information further includes color difference sub-information between the point cloud model and the target object; the adjustment part is further configured to: when it is determined based on the size difference threshold that the new size difference sub-information in the new difference information satisfies the first preset condition, acquire a color difference threshold; determine, based on the color difference sub-information between the point cloud model and the target object and the color difference threshold, a second target camera in the multi-camera system that requires parameter adjustment; and adjust a second preset parameter of the second target camera based on the color difference sub-information.
  • In some embodiments, the difference information includes color difference sub-information between the point cloud model and the target object; the adjustment part further includes: a third acquisition part configured to acquire a color difference threshold; a third determining part configured to determine, based on the color difference sub-information between the point cloud model and the target object and the color difference threshold, a second target camera in the multi-camera system that requires parameter adjustment; and a second adjustment part configured to adjust a second preset parameter of the second target camera based on the color difference sub-information.
  • In some embodiments, the target object includes at least one sub-block; the color difference sub-information includes second difference sub-information corresponding to each sub-block, where the second difference sub-information is used to represent the difference information between the color information of the corresponding sub-block in the point cloud model and its real color information.
  • In some embodiments, the third determining part is further configured to: take a sub-block whose second difference sub-information has a difference value greater than the color difference threshold as a second target sub-block; and determine the camera corresponding to the second target sub-block, taking the determined camera as the second target camera corresponding to the second target sub-block. The second adjustment part is further configured to: adjust the second preset parameter of the second target camera corresponding to the second target sub-block based on the second difference sub-information corresponding to the second target sub-block.
  • In some embodiments, the adjustment part is further configured to: after the second preset parameter of the second target camera is adjusted based on the color difference sub-information, re-acquire multiple new images of the target object captured simultaneously by at least some of the cameras in the multi-camera system; based on the multiple new images, reconstruct a new point cloud model of the target object and determine new difference information between the new point cloud model and the target object; and, when it is determined based on the color difference threshold that the new color difference sub-information in the new difference information does not satisfy the second preset condition, adjust the second preset parameters of the second target camera based on the new color difference sub-information.
  • In some embodiments, different cameras in the multi-camera system have different shooting angles with respect to the target object, and the target shooting angle obtained by superimposing the shooting angles of the cameras serves as the omnidirectional shooting angle corresponding to the target object.
  • In some embodiments, the distance between each camera of the multi-camera system and the target object satisfies a preset condition, where the preset condition includes that the shooting fields of view of the cameras with respect to the target object form a sphere or a ring, and the target object is located at a preset position within the sphere or ring, so that the similarity of the light reflected by the target object to each camera is higher than a preset similarity.
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course it may also be a unit, a module or a non-modular one.
  • Based on the same inventive concept, an embodiment of the present disclosure further provides a multi-camera system calibration system corresponding to the multi-camera system calibration method. Since the principle by which the system in the embodiments of the present disclosure solves the problem is similar to that of the above multi-camera system calibration method, the implementation of the system may refer to the implementation of the method.
  • FIG. 6 is a schematic structural diagram of a multi-camera system calibration system provided by an embodiment of the present disclosure. As shown in FIG. 6 , the system includes a plurality of cameras 610 , a camera parameter adjustment component 620 and a target object 630 .
  • Here, the target object 630 is set within the target shooting field of view formed by the plurality of cameras 610. At least some of the plurality of cameras 610 are used to capture images of the target object 630 at the same time and send the captured images to the camera parameter adjustment component 620; the camera parameter adjustment component 620 is used to perform parameter adjustment on at least one of the plurality of cameras according to the multi-camera system calibration method in the above embodiments.
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device 700 provided by an embodiment of the present disclosure. As shown in FIG. 7 , it includes a processor 71 , a memory 72 , and a bus 73 .
  • the memory 72 is used to store execution instructions, including a memory 721 and an external memory 722; the memory 721 here is also called an internal memory, and is used to temporarily store calculation data in the processor 71 and exchange data with an external memory 722 such as a hard disk.
  • the processor 71 exchanges data with the external memory 722 through the memory 721.
  • The processor 71 communicates with the memory 72 through the bus 73, so that the processor 71 executes the steps of the multi-camera system calibration method described in the above method embodiments.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the multi-camera system calibration method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • The computer program product of the multi-camera system calibration method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the multi-camera system calibration method described in the above method embodiments.
  • the computer program product can be specifically realized by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in other embodiments, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are illustrative.
  • the division of the units is a logical function division.
  • Multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
  • An embodiment of the present disclosure provides a multi-camera system calibration method, device, system, electronic device, storage medium, and computer program product. The method includes: acquiring multiple images obtained by at least some of the cameras in a multi-camera system simultaneously photographing a target object; constructing a point cloud model of the target object based on the multiple images; and adjusting camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.
  • The above solution can use multiple pictures taken from multiple angles to adjust the parameters of multiple cameras at the same time. This not only improves the efficiency of camera parameter adjustment but also enables accurate parameter adjustment of the cameras in the multi-camera system, so that the accuracy of parameters such as the position and angle of the adjusted cameras is improved. This is conducive to improving the accuracy of 3D modeling using the images captured by each camera in the multi-camera system, reduces the workload of post-hoc repair of the 3D model, and thus improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a multi-camera system calibration method, device, system, electronic device, storage medium, and computer program product. The method includes: acquiring multiple images obtained by at least some of the cameras in a multi-camera system simultaneously photographing a target object; constructing a point cloud model of the target object based on the multiple images; and adjusting camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.

Description

Multi-camera system calibration method, device, system, electronic device, storage medium, and computer program product

Cross-Reference to Related Applications

The embodiments of the present disclosure are based on, and claim priority to, Chinese patent application No. 202111624552.6, filed on December 28, 2021 and entitled "Multi-camera system calibration method, device, system, electronic device and storage medium"; the entire content of that Chinese patent application is hereby incorporated into the present disclosure by reference.

Technical Field

The present disclosure relates to, but is not limited to, the technical field of image processing, and in particular to a multi-camera system calibration method, device, system, electronic device, storage medium, and computer program product.

Background

When multiple cameras photograph an object from multiple angles, their fields of view can be superimposed, and through 3D model construction the detected data can be fused seamlessly. Multi-camera systems are commonly used in computer vision; applications such as 3D reconstruction, motion capture, and multi-viewpoint video often require a multi-camera system composed of various cameras, light sources, storage devices, and so on. In the related art, the accuracy of 3D model reconstruction by multi-camera systems is not high, and the repair workload is large.
Summary

Embodiments of the present disclosure provide a multi-camera system calibration method, device, system, electronic device, storage medium, and computer program product.

An embodiment of the present disclosure provides a multi-camera system calibration method, including:

acquiring multiple images obtained by at least some of the cameras in a multi-camera system simultaneously photographing a target object;

constructing a point cloud model of the target object based on the multiple images; and

adjusting camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.

In the embodiments of the present disclosure, camera parameters are adjusted by having multiple cameras photograph the target object simultaneously, so that multiple pictures taken from multiple angles can be used to adjust the parameters of multiple cameras at the same time. This not only improves the efficiency of camera parameter adjustment but also enables accurate parameter adjustment of the cameras in the multi-camera system, so that the accuracy of parameters such as the position and angle of the adjusted cameras is improved. This is conducive to improving the accuracy of 3D modeling using the images captured by each camera of the multi-camera system, reduces the workload of post-hoc repair of the 3D model, and thus improves the user experience.
An embodiment of the present disclosure further provides a multi-camera system calibration device, including:

an image acquisition part configured to acquire multiple images obtained by at least some of the cameras in a multi-camera system simultaneously photographing a target object;

a modeling part configured to construct a point cloud model of the target object based on the multiple images; and

an adjustment part configured to adjust camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.

An embodiment of the present disclosure further provides a multi-camera system calibration system, including multiple cameras, a camera parameter adjustment component, and a target object; the target object is set within the target shooting field of view formed by the multiple cameras;

at least some of the multiple cameras are used to capture images of the target object simultaneously and send the captured images to the camera parameter adjustment component;

the camera parameter adjustment component is used to perform parameter adjustment on at least one of the multiple cameras according to the above multi-camera system calibration method.

An embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the above multi-camera system calibration method are performed.

An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above multi-camera system calibration method are executed.

An embodiment of the present disclosure provides a computer program product, which includes a computer program or instructions; when the computer program or instructions run on an electronic device, the electronic device is caused to execute the steps of the above multi-camera system calibration method.

For a description of the effects of the above multi-camera system calibration device, system, electronic device, computer-readable storage medium, and computer program product, reference is made to the description of the above multi-camera system calibration method.

To make the above objects, features, and advantages of the present disclosure more comprehensible, embodiments are described in detail below with reference to the accompanying drawings. It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief Description of the Drawings

To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and form part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and should therefore not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of a multi-camera system provided by an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of a multi-camera system calibration method provided by an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of multiple cameras photographing a target object provided by an embodiment of the present disclosure;

FIG. 4 is a schematic flowchart of another multi-camera system calibration method provided by an embodiment of the present disclosure;

FIG. 5 is a schematic diagram of the composition and structure of a multi-camera system calibration device provided by an embodiment of the present disclosure;

FIG. 6 is a schematic diagram of the composition and structure of a multi-camera system calibration system provided by an embodiment of the present disclosure;

FIG. 7 is a schematic diagram of a hardware entity of an electronic device provided by an embodiment of the present disclosure.
Detailed Description

To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings here, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents some of its embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

FIG. 1 is a schematic diagram of a multi-camera system provided by an embodiment of the present disclosure. As shown in FIG. 1, an object is photographed from multiple angles by the multiple cameras of a multi-camera system 101, and the multiple captured pictures are then used to construct a colored 3D model, generating a point cloud model 102 corresponding to the object; the application of this technique is becoming increasingly common. Multiple cameras photograph the object from multiple angles, their fields of view can be superimposed, and the data are fused seamlessly. Combined cameras working synchronously can even photograph flexible objects (such as human bodies or animals), recording every instant of the object. There will be subtle differences in camera parameters between the cameras. Tolerances in the design and manufacture of a multi-camera system may prevent a camera from being set in the correct position or at the correct angle; for example, cameras may be misaligned horizontally or vertically relative to each other. These defects may lead to serious problems in the images generated by the multi-angle camera system, such as excessive color differences between different parts, severe distortion, or double vision, which require a large amount of later manual repair or cannot be repaired at all, severely degrading the user experience.

In addition, the multiple cameras of a multi-camera system may not be positioned or oriented according to the intended design, in which case the actual positions of the cameras and their rotation relative to the design are unknown. When images captured by the multiple cameras are combined, this problem may cause visual artifacts; for example, color patches may appear in the overlapping images generated between two cameras.

The present disclosure provides a multi-camera system calibration method in which camera parameters are adjusted by having multiple cameras photograph a target object simultaneously, so that multiple pictures taken from multiple angles can be used to adjust the parameters of multiple cameras at the same time. This not only improves the efficiency of camera parameter adjustment but also enables accurate parameter adjustment of the cameras in the multi-camera system, so that the accuracy of parameters such as the position and angle of the adjusted cameras is improved, which is conducive to improving the accuracy of 3D modeling using the images captured by each camera of the multi-camera system, reduces the workload of post-hoc repair of the 3D model, and thus improves the user experience. The multi-camera system calibration method provided by the embodiments of the present disclosure may be executed by an electronic device, where the electronic device may be any of various types of terminal such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable gaming device), and may also be implemented as a server. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms.

The technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings of the embodiments of the present disclosure.
FIG. 2 is a schematic flowchart of a multi-camera system calibration method provided by an embodiment of the present disclosure. As shown in FIG. 2, the multi-camera system calibration method provided by the present disclosure includes the following steps S210 to S230:

Step S210: acquire multiple images obtained by at least some of the cameras in a multi-camera system simultaneously photographing a target object.

Here, the multi-camera system includes multiple cameras with different shooting angles. FIG. 3 is a schematic diagram of multiple cameras photographing a target object provided by an embodiment of the present disclosure. As shown in FIG. 3, the system may include four cameras with different shooting angles: camera 1, camera 2, camera 3, and camera 4. The four cameras photograph the target object from different angles, so that images containing different angles of the target object can be obtained, and fusing the four images can yield an omnidirectional image of the target object. That is, different cameras in the multi-camera system have different shooting angles with respect to the target object, and the target shooting angle obtained by superimposing the shooting angles of the cameras serves as the omnidirectional shooting angle corresponding to the target object. This ensures that the multi-camera system can photograph the complete target object, which helps improve the accuracy of camera calibration.

The target object may be an object with multiple faces, where different faces carry different information such as position and color; using the multiple pieces of information of a multi-faced target object for camera parameter adjustment can improve the accuracy of the adjustment.

In some implementations, the target object may include at least one sub-block. As shown in FIG. 3, the target object may be a Rubik's Cube 301 having multiple movably connected sub-blocks 302; using information such as the positions and colors of the multiple sub-blocks of the Rubik's Cube for camera parameter adjustment makes it possible to adjust camera parameters using more information, which can effectively improve the accuracy of the adjustment.

The multiple images may be captured by the multiple cameras at the same moment; these images can reflect the parameter information of the corresponding cameras at the shooting moment, for example, shooting position, shooting angle, and shooting aperture. Using multiple images captured at the same moment makes it possible to adjust the parameters of the multiple cameras that captured them simultaneously.

In some implementations, the distance between each camera and the target object satisfies a preset condition, where the preset condition may include, but is not limited to, the shooting fields of view of the cameras with respect to the target object forming a sphere or a ring, with the target object located at a preset position within the sphere or ring, so that the similarity of the light reflected by the target object to each camera is higher than a preset similarity. For example, when the target object is located at the center of the sphere or ring, the light used by the cameras to photograph the target object is relatively consistent, and the target object appears consistent in color, structure, texture, and shape, which helps improve the accuracy of calibration.
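The ring arrangement described above can be sketched by placing cameras at equal angles on a circle around a target at the origin. This is a simplified illustration only; the disclosure does not prescribe this construction, and the function name is an assumption:

```python
import math

def ring_positions(n_cameras, radius):
    """Evenly spaced camera positions on a ring around the origin,
    so every camera is the same distance from a centred target."""
    positions = []
    for k in range(n_cameras):
        angle = 2 * math.pi * k / n_cameras
        positions.append((radius * math.cos(angle),
                          radius * math.sin(angle), 0.0))
    return positions
```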
Step S220: construct a point cloud model of the target object based on the multiple images.

Here, the multiple images may be processed by photogrammetry tool software to obtain a colored three-dimensional point cloud model, that is, the above point cloud model.

Step S230: adjust camera parameters of at least one camera in the multi-camera system based on difference information between the point cloud model and the target object.

Here, the difference information may include differences in size and/or color between the point cloud model and the real target object. Such difference information can reflect deviations in the shooting position, shooting angle, and shooting aperture of the cameras that captured the images; therefore, adjusting the camera parameters of the cameras in the multi-camera system based on the difference information can effectively improve the adjustment precision.
In some embodiments, the difference information may include, but is not limited to, size difference sub-information between the point cloud model and the target object. Before step S230, the size difference sub-information also needs to be determined:

First, acquire the standard size information of the target object and the model size information of the point cloud model; then determine the difference information between the standard size information and the model size information to obtain the size difference sub-information.

After the size difference sub-information is determined, step S230 may include step S231:

Step S231: adjust camera parameters of at least one camera in the multi-camera system based on the size difference sub-information between the point cloud model and the target object.

Through the standard size information of the target object and the model size information of the point cloud model, the difference in size between the point cloud model and the real target object, that is, the size difference sub-information, can be determined accurately. The size difference sub-information can reflect the deviations of at least some of the cameras in the multi-camera system in shooting position and shooting angle; therefore, subsequently using it to adjust the camera parameters of the cameras in the multi-camera system can effectively improve the adjustment precision.
In some implementations, after the size difference sub-information is determined, step S231 includes steps S241 to S243:

Step S241: acquire a size difference threshold.

Step S242: determine, based on the size difference sub-information between the point cloud model and the target object and the size difference threshold, a first target camera in the multi-camera system that needs parameter adjustment.

Step S243: adjust a first preset parameter of the first target camera based on the size difference sub-information.

Here, the size difference threshold may be set according to specific requirements and scenarios, for example, to 0.01; the present disclosure does not limit this.

Using the preset size threshold, it can be accurately determined whether the constructed point cloud model shows an obvious inaccuracy in size, that is, whether the cameras in the multi-camera system need corresponding parameter adjustment, and the cameras that need parameter adjustment, namely the first target cameras, can be accurately identified. Subsequently adjusting the parameters of the first target cameras according to the size difference sub-information not only improves the adjustment efficiency but also accurately adjusts the cameras that need adjustment, which helps improve the accuracy of parameter adjustment.

In some implementations, the target object includes at least one sub-block; the standard size information includes standard size sub-information corresponding to each sub-block, for example, the standard length, standard width, and standard height of each sub-block. The model size information includes model size sub-information corresponding to each sub-block, for example, the model length, model width, and model height of each sub-block. The size difference sub-information includes first difference sub-information corresponding to each sub-block, where the first difference sub-information is used to represent the difference information between the standard size sub-information and the model size sub-information of the corresponding sub-block.
In some implementations, the first difference sub-information corresponding to a sub-block may be determined by the following formula (1-1), for example as a Euclidean distance between the standard and model sizes:

D1 = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)        (1-1)

where D1 denotes the first difference sub-information, x1 the standard length, y1 the standard width, z1 the standard height, x2 the model length, y2 the model width, and z2 the model height.
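As an illustration of formula (1-1), the following sketch computes D1 as a Euclidean distance between the standard and model dimensions of one sub-block. The original formula is reproduced in the publication only as an image, so the exact form is an assumption suggested by the variables, and the function name is illustrative:

```python
import math

def first_difference(x1, y1, z1, x2, y2, z2):
    """D1: Euclidean distance between the standard dimensions
    (x1, y1, z1) and the model dimensions (x2, y2, z2) of a sub-block."""
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
```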
A target object that includes at least one sub-block has multiple faces, and each face or each block has size information; using the multiple pieces of size information for camera parameter adjustment helps improve the accuracy of the adjustment.

In some implementations, after the first difference sub-information of each sub-block is determined, step S242 includes steps S2421 to S2422:

Step S2421: take a sub-block whose first difference sub-information has a difference value greater than the size difference threshold as a first target sub-block.

Step S2422: determine the camera corresponding to the first target sub-block, and take the determined camera as the first target camera corresponding to the first target sub-block.

In some implementations, after the first target camera corresponding to a first target sub-block is determined, step S243 includes step S2431:

Step S2431: adjust the first preset parameter of the first target camera corresponding to the first target sub-block based on the first difference sub-information corresponding to that first target sub-block. Using the size difference threshold, the sub-blocks with large size deviations, namely the first target sub-blocks, can be accurately identified; then, using the first difference sub-information corresponding to the identified first target sub-blocks to adjust the parameters of the corresponding cameras not only improves the adjustment efficiency but also accurately adjusts the relevant cameras, which helps improve the accuracy of parameter adjustment.

Here, the first preset parameter may include, but is not limited to, the shooting position and shooting angle of the corresponding camera.

In some implementations, the first preset parameter of the corresponding first target camera may be adjusted manually according to the first difference sub-information.

In some implementations, step S2431 includes steps S251 to S253:

Step S251: determine, based on the first difference sub-information corresponding to the first target sub-block, first target parameter information of the corresponding first target camera, such as a target shooting position and a target shooting angle.

Step S252: determine first parameter adjustment information, such as position adjustment information and angle adjustment information, according to the current parameter information of the first target camera, such as its current shooting position and shooting angle, and the first target parameter information.

Step S253: adjust the first preset parameters of the first target camera, such as its shooting position and shooting angle, according to the first parameter adjustment information.
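Steps S251 to S253 amount to computing, per parameter, the difference between the target and current values and applying it as the adjustment. A minimal sketch with illustrative parameter names (the dictionary layout is an assumption, not part of the disclosure):

```python
def pose_adjustment(current, target):
    """First parameter adjustment information (steps S251 to S253):
    per parameter, the adjustment applied to the first target camera
    is the target value minus the current value."""
    return {k: target[k] - current[k] for k in target}
```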
In some implementations, a single round of parameter adjustment may well not yield the most accurate camera parameters. Therefore, after an adjustment, images can be re-acquired, the point cloud model rebuilt, and new difference information re-determined, in order to check whether the cameras in the multi-camera system have been adjusted to the most accurate camera parameters; if not, camera parameter adjustment is performed again to improve the adjustment precision. In some implementations, step S230 includes steps S261 to S263:

Step S261: after the first preset parameter of the first target camera is adjusted based on the size difference sub-information, re-acquire multiple new images obtained by at least some of the cameras in the multi-camera system simultaneously photographing the target object.

Step S262: based on the multiple new images, reconstruct a new point cloud model of the target object and determine new difference information between the new point cloud model and the target object.

Step S263: when it is determined, based on the size difference threshold, that the new size difference sub-information in the new difference information does not satisfy the first preset condition, adjust camera parameters of at least one camera in the multi-camera system based on the new size difference sub-information.

Here, the new size difference sub-information failing the first preset condition may include, but is not limited to, at least one sub-block whose new first difference sub-information has a difference value greater than the size difference threshold; the new size difference sub-information satisfying the first preset condition may include, but is not limited to, the difference values of the new first difference sub-information of all sub-blocks being smaller than the size difference threshold.

The step of adjusting the camera parameters of at least one camera in the multi-camera system based on the new size difference sub-information is carried out in the same way as adjusting them based on the size difference sub-information in the embodiment above.

When the new size difference sub-information in the new difference information satisfies the first preset condition, second preset parameters of the cameras, such as the shooting aperture, may be further adjusted. Of course, the second preset parameters may also be adjusted directly, in which case the step of adjusting the first preset parameters and the step of adjusting the second preset parameters are two independent, unrelated steps. The adjustment of the second preset parameters is described below.
In some implementations, the difference information may include, but is not limited to, color difference sub-information between the point cloud model and the target object. The target object includes at least one sub-block; the color difference sub-information includes second difference sub-information corresponding to each sub-block, where the second difference sub-information is used to represent the difference information between the color information of the corresponding sub-block in the point cloud model and its real color information.

A target object with at least one sub-block has multiple faces, and each face or each block has color information; using the multiple pieces of color information for camera parameter adjustment helps improve the accuracy of the adjustment.

After the color difference sub-information between the point cloud model and the target object is determined, step S230 includes steps S271 to S273:

Step S271: acquire a color difference threshold.

Step S272: determine, based on the color difference sub-information between the point cloud model and the target object and the color difference threshold, a second target camera in the multi-camera system that needs parameter adjustment.

Step S273: adjust a second preset parameter of the second target camera based on the color difference sub-information.

Here, the color difference threshold may be set according to specific requirements and scenarios, for example, to 0.01; the present disclosure does not limit this.

Using the preset color threshold, it can be accurately determined whether the constructed point cloud model shows an obvious inaccuracy in color, that is, whether the cameras in the multi-camera system need corresponding parameter adjustment, and the cameras that need parameter adjustment, namely the second target cameras, can be accurately identified. Subsequently adjusting the parameters of the second target cameras according to the color difference sub-information not only improves the adjustment efficiency but also accurately adjusts the cameras that need adjustment, which helps improve the accuracy of parameter adjustment.

In some implementations, the target object includes at least one sub-block; the color difference sub-information includes second difference sub-information corresponding to each sub-block, where the second difference sub-information represents the difference between the color information of the corresponding sub-block in the point cloud model and its real color information. In some implementations, the color information of a sub-block in the point cloud model includes the model RGB values of that sub-block, and the real color information of a sub-block includes its standard RGB values.
In some implementations, the second difference sub-information corresponding to a sub-block may be determined by the following formula (1-2), for example as a Euclidean distance in RGB space:

D2 = sqrt((R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2)        (1-2)

where D2 denotes the second difference sub-information; R1, G1, and B1 denote the standard R, G, and B values among the standard RGB values; and R2, G2, and B2 denote the model R, G, and B values among the model RGB values.
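Analogously to D1, formula (1-2) can be sketched as a distance in RGB space between the standard and model colors. The original formula appears in the publication only as an image, so the Euclidean form is an assumption, and the function name is illustrative:

```python
import math

def second_difference(R1, G1, B1, R2, G2, B2):
    """D2: Euclidean distance in RGB space between the standard
    color (R1, G1, B1) and the model color (R2, G2, B2) of a sub-block."""
    return math.sqrt((R1 - R2) ** 2 + (G1 - G2) ** 2 + (B1 - B2) ** 2)
```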
In some implementations, after the second difference sub-information of each sub-block is determined, step S272 includes steps S2721 to S2722:

Step S2721: take a sub-block whose second difference sub-information has a difference value greater than the color difference threshold as a second target sub-block.

Step S2722: determine the camera corresponding to the second target sub-block, and take the determined camera as the second target camera corresponding to the second target sub-block.

In some implementations, after the second target camera corresponding to a second target sub-block is determined, step S273 includes step S2731:

Step S2731: adjust the second preset parameter of the second target camera corresponding to the second target sub-block based on the second difference sub-information corresponding to that second target sub-block.

Here, using the color difference threshold, the sub-blocks with large color deviations, namely the second target sub-blocks, can be accurately identified; then, using the second difference sub-information corresponding to the identified second target sub-blocks to adjust the parameters of the corresponding cameras not only improves the adjustment efficiency but also accurately adjusts the relevant cameras, which helps improve the accuracy of parameter adjustment.

The second preset parameter may include, but is not limited to, the shooting aperture, shutter, and sensitivity of the corresponding camera.

In some implementations, the second preset parameter of the corresponding second target camera may be adjusted manually according to the second difference sub-information.

In some implementations, step S2731 includes steps S281 to S283:

Step S281: determine, based on the second difference sub-information corresponding to the second target sub-block, second target parameter information of the corresponding second target camera, such as a target shooting aperture and a target sensitivity.

Step S282: determine second parameter adjustment information, such as aperture adjustment information and sensitivity adjustment information, according to the current parameter information of the second target camera, such as its current shooting aperture and sensitivity, and the second target parameter information.

Step S283: adjust the second preset parameters of the second target camera, such as its shooting aperture and sensitivity, according to the second parameter adjustment information.
In some implementations, a single round of parameter adjustment may well not yield the most accurate camera parameters. Therefore, rebuilding the point cloud model after an adjustment and re-determining new difference information makes it possible to accurately determine whether the cameras in the multi-camera system have been adjusted to the most accurate parameters; if not, performing camera parameter adjustment again can effectively improve the adjustment precision. In some implementations, step S230 includes steps S291 to S293:

Step S291: after the second preset parameter of the second target camera is adjusted based on the color difference sub-information, re-acquire multiple new images obtained by at least some of the cameras in the multi-camera system simultaneously photographing the target object.

Step S292: based on the multiple new images, reconstruct a new point cloud model of the target object and determine new difference information between the new point cloud model and the target object.
步骤S293,在基于所述颜色差异阈值,确定所述新的差异信息中新的颜色差异子信息不满足第二预设条件的情况下,基于所述新的颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
这里,上述新的差异信息中新的颜色差异子信息不满足第二预设条件可以包括但不限于存在至少一个子区块对应的新的第二差异子信息的差异值大于所述颜色差异阈值。新的差异信息中新的颜色差异子信息满足第二预设条件具体可以包括但不限于所有的子区块对应的新的第二差异子信息的差异值均小于所述颜色差异阈值。
上述基于所述新的颜色差异子信息,对所述第二目标相机的第二预设参数进行调整的实现步骤与上述实施例中基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整相同。
上述实施例可以执行一轮或多轮参数调整,在一轮参数调整期间,多个相机提供的不同图像可以捕获位于多相机系统中心的目标物体,例如魔方的一组图像。基于捕获的图像进行建模,得到点云模型,同时生成目标物体的详细信息,例如魔方的位置、大小、形状、颜色和纹理等。之后根据建模得到的目标物体的详细信息和目标物体的标准信息进行相机参数调整。魔方可以变化形状,从而能够生成多个点云模型进行多次验证。
上述实施例中的方法能够快速得到各个相机参数的误差,实现相机参数精确调节,一般10分钟内就能校准正确,降低了现有技术中人工调节,需要2~20个小时的校准时间的情况的可能性。
下面通过一个具体的实施例对本公开中的多相机系统校准方法进行说明。图4为本公开实施例提供的另一种多相机系统校准方法的实现流程示意图,如图4所示:
步骤S1、设置多个相机和魔方。
这里,将包括目标物体三阶魔方、多个相机、目标调参部件的系统放在靶场测试现场。靶场测试现场布置和传统多角度拍摄相机系统相同。三阶魔方设置在多个相机形成的目标拍摄视野的中心,多相机系统的以球形或圆柱状围绕三阶魔方。所有相机对准魔方进行图像拍摄。多个相机通过网线经过路由器连接到目标调参部件,例如电脑上。电脑上安装群控系统软件、摄影测量学工具软件和校准系统软件等。其中,校准系统软件存储有魔方每个子区块的长度标准值x1、宽度标准值y1、高度标准值z1,以及,每个子区块的标准RGB值中的标准R值R1、标准G值G1、标准B值B1。
步骤S2、控制至少部分相机同时对魔方进行拍摄拍照,得到多张原始图像。
这里,在电脑上通过群控系统软件控制多相机系统中的至少部分相机同时拍照,得到多张原始图像。
步骤S3、将多张原始图像发送给电脑。
这里,通过群控系统软件将不同相机拍摄的原始图像传输到电脑中。
步骤S4、构建魔方的点云模型。
这里,通过摄影测量学工具软件对多张原始图像进行处理,得到魔方的带颜色的三维的点云模型。
步骤S5、确定每个子区块的第一差异子信息D1。
这里,校准系统软件计算出点云模型中每个子区块的模型长度值x2、模型宽度值y2值、模型高度值z2。与校准系统软件内置的x1,y1,z1结合得到第一差异子信息D1。
步骤S6、判断D1是否小于或等于尺寸差异阈值。
这里,确定第一差异子信息D1对应的差异值大于尺寸差异阈值的子区块对应相机即为第一目标相机,则跳转至步骤S7。如果确定没有第一目标相机,则跳转到步骤S8。
步骤S7、调整第一预设参数,调整完成后返回步骤S2。
这里,调整第一目标相机的拍摄位置和拍摄方向等第一预设参数。
步骤S8、确定每个子区块的第二差异子信息D2。
这里,校准系统软件计算出点云模型中每个子区块的模型RGB值:模型RGB值中的模型R值R2、模型RGB值中的模型G值G2、模型RGB值中的模型B值B2。与校准系统内置的R1,G1,B1结合得到第二差异子信息D2。
步骤S9、判断D2是否小于等于颜色差异阈值。
这里,确定第二差异子信息对应的差异值大于颜色差异阈值的子区块对应相即为第二目标相机,则跳转至步骤S10。如果确定没有第二目标相机,则跳转进入步骤S11。
步骤S10、调整第二预设参数,调整完成后返回步骤S2。
这里,调整第二目标相机的拍摄光圈和感光度等第二预设参数。
步骤S11、参数调整完成。
这里,继续校准,直到所有子区块的第二差异子信息对应的差异值均小于颜色差异阈值。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了多相机系统校准方法对应的多相机系统校准装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述多相机系统校准方法相似,因此装置的实施可以参见方法的实施。
图5为本公开实施例提供的一种多相机系统校准装置的组成结构示意图,如图5所示,所述装置包括:
图像获取部分510,被配置为获取多相机系统中,至少部分相机同时拍摄目标物体所得到的多张图像。
建模部分520,被配置为基于所述多张图像,构建所述目标物体的点云模型。
调整部分530,被配置为基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整。
在一些实施方式中,所述差异信息包括所述点云模型与所述目标物体之间的尺寸差异子信息;所述装置还包括:第一获取部分,被配置为获取所述目标物体的标准尺寸信息,以及,所述点云模型的模型尺寸信息;第一确定部分,被配置为确定所述标准尺寸信息与所述模型尺寸信息之间的差异信息,得到的所述尺寸差异子信息;所述调整部分,还被配置为:基于所述点云模型与所述目标物体之间的尺寸差异子信息,对所述多相机系统中的至少一个相机进行相机参数调整。
在一些实施方式中,所述调整部分,还包括:第二获取部分,被配置为获取尺寸差异阈值;第二确定部分,被配置为基于所述点云模型和所述目标物体之间的尺寸差异子信息,和所述尺寸差异阈值,确定所述多相机系统中的需要进行参数调整的第一目标相机;第一调整部分,被配置为基于所述尺寸差异子信息,对所述第一目标相机的第一预设参数进行调整。
在一些实施方式中,所述目标物体包括至少一个子区块;所述标准尺寸信息包括每个子区块分别对应的标准尺寸子信息;所述模型尺寸信息包括每个子区块分别对应的模型尺寸子信息;所述尺寸差异子信息包括每个子区块分别对应的第一差异子信息;其中,所述第一差异子信息用于表征对应的子区块的标准尺寸子信息与模型尺寸子信息之间的差异信息。
在一些实施方式中,所述第二确定部分,还被配置为:将第一差异子信息对应的差异值大于所述尺寸差异阈值的子区块,作为第一目标子区块;确定所述第一目标子区块对应的相机,并将确定的相机作为所述第一目标子区块对应的第一目标相机;所述第一调整部分,还被配置为:基于所述第一目标子区块对应的第一差异子信息,对所述第一目标子区块对应的第一目标相机的第一预设参数进行调整。
在一些实施方式中,所述调整部分,还被配置为:在所述基于所述尺寸差异子信息,对所述第一目标相机的第一预设参数进行调整之后,重新获取所述多相机系统中,至少部分相机同时拍摄目标物体所得到的多张新的图像;基于所述多张新的图像,重新构建所述目标物体的新的点云模型,以及,确定所述新的点云模型和所述目标物体之间的新的差异信息;在基于所述尺寸差异阈值,确定所述新的差异信息中新的尺寸差异子信息不满足第一预设条件的情况下,基于所述新的尺寸差异子信息,对所述多相机系统中的至少一个相机进行相机参数调整。
在一些实施方式中,所述差异信息还包括点云模型与所述目标物体之间的颜色差异子信息;所述调整部分,还被配置为:在基于所述尺寸差异阈值,确定所述新的差异信息中新的尺寸差异子信息满足第一预设条件的情况下,获取颜色差异阈值;基于所述点云模型和所述目标物体之间的颜色差异子信息,和所述颜色差异阈值,确定所述多相机系统中的需要进行参数调整的第二目标相机;基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
在一些实施方式中,所述差异信息包括点云模型与所述目标物体之间的颜色差异子信息;所述调整部分,还包括:第三获取部分,被配置为获取颜色差异阈值;第三确定部分,被配置为基于所述点云模型和所述目标物体之间的颜色差异子信息,和所述颜色差异阈值,确定所述多相机系统中的需要进行参数调整的第二目标相机;第二调整部分,被配置为基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
在一些实施方式中,所述目标物体包括至少一个子区块;所述颜色差异子信息包括每个子区块分别对应的第二差异子信息;其中,所述第二差异子信息用于表征对应的子区块在点云模型中的颜色信息与真实颜色信息之间的差异信息。
在一些实施方式中,所述第三确定部分,还被配置为:将第二差异子信息对应的差异值大于所述颜色差异阈值的子区块,作为第二目标子区块;确定所述第二目标子区块对应的相机,并将确定的相机作为所述第二目标子区块对应的第二目标相机;所述第二调整部分,还被配置为:基于所述第二目标子区块对应的第二差异子信息,对所述第二目标子区块对应的第二目标相机的第二预设参数进行调整。
在一些实施方式中,所述调整部分,还被配置为:在所述基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整之后,重新获取所述多相机系统中,至少部分相机同时拍摄目标物体所得到的多张新的图像;基于所述多张新的图像,重新构建所述目标物体的新的点云模型,以及,确定所述新的点云模型和所述目标物体之间的新的差异信息;在基于所述颜色差异阈值,确定所述新的差异信息中新的颜色差异子信息不满足第二预设条件的情况下,基于所述新的颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
在一些实施方式中,所述多相机系统中的不同相机针对所述目标物体具有不同的拍摄角度,并且将各个相机的拍摄角度叠加得到的目标拍摄角度作为所述目标物体对应的全方位拍摄角度。
在一些实施方式中,所述多相机系统的各个相机与所述目标物体之间的距离满足预设条件;其中所述预设条件包括各个相机针对所述目标物体的拍摄视野形成球体或环形体,并且所述目标物体位于所述球体或环形体内的预设位置,以使所述目标物体反射给各个相机的光线的相似度高于预设相似度。
在本公开实施例以及其他的实施例中,“部分”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是单元,还可以是模块也可以是非模块化的。
基于同一发明构思,本公开实施例中还提供了多相机系统校准方法对应的多相机系统校准系统,由于本公开实施例中的系统解决问题的原理与本公开实施例上述多相机系统校准方法相似,因此系统的实施可以参见方法的实施。
图6为本公开实施例提供的一种多相机系统校准系统的组成结构示意图,如图6所示,所述系统包括多个相机610、相机调参部件620以及目标物体630。
所述目标物体630设置在所述多个相机610形成的目标拍摄视野内。所述多个相机610中的至少部分相机用于同时拍摄所述目标物体630的图像,并将拍摄得到的图像发送给所述相机调参部件620;所述相机调参部件620用于按照上述实施例中的多相机系统校准方法,对所述多个相机中的至少一个相机进行参数调整。
基于同一技术构思,本公开实施例还提供了一种电子设备。图7为本公开实施例提供的电子设备700的一种硬件实体示意图,如图7所示,包括处理器71、存储器72和总线73。其中,存储器72用于存储执行指令,包括内存721和外部存储器722;这里的内存721也称内存储器,用于暂时存放处理器71中的运算数据,以及与硬盘等外部存储器722交换的数据,处理器71通过内存721与外部存储器722进行数据交换,当电子设备700运行时,处理器71与存储器72之间通过总线73通信,使得处理器71执行以下指令:
获取多相机系统中,至少部分相机同时拍摄目标物体所得到的多张图像;基于所述多张图像,构建所述目标物体的点云模型;基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的多相机系统校准方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例所提供的多相机系统校准方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的多相机系统校准方法的步骤,具体可参见上述方法实施例。该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一些实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一些实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例是示意性的,例如,所述单元的划分,为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以权利要求的保护范围为准。
工业实用性
本公开实施例提供了一种多相机系统校准方法、装置、系统、电子设备、存储介质及计算机程序产品,所述方法包括:获取多相机系统中,至少部分相机同时拍摄目标物体所得到的多张图像;基于所述多张图像,构建所述目标物体的点云模型;基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整。上述方案能够利用多个角度拍摄的多张图片同时对多个相机的参数进行调整,不仅能够提高相机参数调整效率,还能够实现精确地对多相机系统中的相机进行参数调整,使得参数调整后的相机的位置和角度等参数的准确度得到提升,从而有利于提高利用多相机系统中各个相机拍摄的图像进行3D建模的精确性,降低了3D建模后期修复的工作量,进而提高了用户体验。

Claims (18)

  1. 一种多相机系统校准方法,包括:
    获取多相机系统中,至少部分相机同时拍摄目标物体所得到的多张图像;
    基于所述多张图像,构建所述目标物体的点云模型;
    基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整。
  2. 根据权利要求1所述的方法,其中,所述差异信息包括所述点云模型与所述目标物体之间的尺寸差异子信息;
    在所述基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整之前,还包括:
    获取所述目标物体的标准尺寸信息,以及,所述点云模型的模型尺寸信息;
    确定所述标准尺寸信息与所述模型尺寸信息之间的差异信息,得到所述尺寸差异子信息;
    所述基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整,包括:
    基于所述点云模型与所述目标物体之间的尺寸差异子信息,对所述多相机系统中的至少一个相机进行相机参数调整。
  3. 根据权利要求2所述的方法,其中,所述基于所述点云模型与所述目标物体之间的尺寸差异子信息,对所述多相机系统中的至少一个相机进行相机参数调整,包括:
    获取尺寸差异阈值;
    基于所述点云模型和所述目标物体之间的尺寸差异子信息,和所述尺寸差异阈值,确定所述多相机系统中的需要进行参数调整的第一目标相机;
    基于所述尺寸差异子信息,对所述第一目标相机的第一预设参数进行调整。
  4. 根据权利要求3所述的方法,其中,所述目标物体包括至少一个子区块;所述标准尺寸信息包括每个子区块分别对应的标准尺寸子信息;所述模型尺寸信息包括每个子区块分别对应的模型尺寸子信息;所述尺寸差异子信息包括每个子区块分别对应的第一差异子信息;其中,所述第一差异子信息用于表征对应的子区块的标准尺寸子信息与模型尺寸子信息之间的差异信息。
  5. 根据权利要求4所述的方法,其中,所述基于所述点云模型和所述目标物体之间的尺寸差异子信息,和所述尺寸差异阈值,确定所述多相机系统中的需要进行参数调整的第一目标相机,包括:
    将第一差异子信息对应的差异值大于所述尺寸差异阈值的子区块,作为第一目标子区块;
    确定所述第一目标子区块对应的相机,并将确定的相机作为所述第一目标子区块对应的第一目标相机;
    所述基于所述尺寸差异子信息,对所述第一目标相机的第一预设参数进行调整,包括:
    基于所述第一目标子区块对应的第一差异子信息,对所述第一目标子区块对应的第一目标相机的第一预设参数进行调整。
  6. 根据权利要求3至5任一项所述的方法,其中,所述基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整,还包括:
    在所述基于所述尺寸差异子信息,对所述第一目标相机的第一预设参数进行调整之后,重新获取所述多相机系统中,至少部分相机同时拍摄目标物体所得到的多张新的图像;
    基于所述多张新的图像,重新构建所述目标物体的新的点云模型,以及,确定所述新的点云模型和所述目标物体之间的新的差异信息;
    在基于所述尺寸差异阈值,确定所述新的差异信息中新的尺寸差异子信息不满足第一预设条件的情况下,基于所述新的尺寸差异子信息,对所述多相机系统中的至少一个相机进行相机参数调整。
  7. 根据权利要求6所述的方法,其中,所述差异信息还包括点云模型与所述目标物体之间的颜色差异子信息;
    所述基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整,还包括:
    在基于所述尺寸差异阈值,确定所述新的差异信息中新的尺寸差异子信息满足第一预设条件的情况下,获取颜色差异阈值;
    基于所述点云模型和所述目标物体之间的颜色差异子信息,和所述颜色差异阈值,确定所述多相机系统中的需要进行参数调整的第二目标相机;
    基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
  8. 根据权利要求1至7中任一项所述的方法,其中,所述差异信息包括点云模型与所述目标物体之间的颜色差异子信息;
    所述基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整,包括:
    获取颜色差异阈值;
    基于所述点云模型和所述目标物体之间的颜色差异子信息,和所述颜色差异阈值,确定所述多相机系统中的需要进行参数调整的第二目标相机;
    基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
  9. 根据权利要求8所述的方法,其中,所述目标物体包括至少一个子区块;所述颜色差异子信息包括每个子区块分别对应的第二差异子信息;其中,所述第二差异子信息用于表征对应的子区块在点云模型中的颜色信息与真实颜色信息之间的差异信息。
  10. 根据权利要求9所述的方法,其中,所述基于所述点云模型和所述目标物体之间的颜色差异子信息,和所述颜色差异阈值,确定所述多相机系统中的需要进行参数调整的第二目标相机,包括:
    将第二差异子信息对应的差异值大于所述颜色差异阈值的子区块,作为第二目标子区块;
    确定所述第二目标子区块对应的相机,并将确定的相机作为所述第二目标子区块对应的第二目标相机;
    所述基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整,包括:
    基于所述第二目标子区块对应的第二差异子信息,对所述第二目标子区块对应的第二目标相机的第二预设参数进行调整。
  11. 根据权利要求9或10所述的方法,其中,所述基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整,还包括:
    在所述基于所述颜色差异子信息,对所述第二目标相机的第二预设参数进行调整之后,重新获取所述多相机系统中,至少部分相机同时拍摄目标物体所得到的多张新的图像;
    基于所述多张新的图像,重新构建所述目标物体的新的点云模型,以及,确定所述新的点云模型和所述目标物体之间的新的差异信息;
    在基于所述颜色差异阈值,确定所述新的差异信息中新的颜色差异子信息不满足第二预设条件的情况下,基于所述新的颜色差异子信息,对所述第二目标相机的第二预设参数进行调整。
  12. 根据权利要求1至11任一项所述的方法,其中,所述多相机系统中的不同相机针对所述目标物体具有不同的拍摄角度,并且将各个相机的拍摄角度叠加得到的目标拍摄角度作为所述目标物体对应的全方位拍摄角度。
  13. 根据权利要求1至12任一项所述的方法,其中,所述多相机系统的各个相机与所述目标物体之间的距离满足预设条件;其中所述预设条件包括各个相机针对所述目标物体的拍摄视野形成球体或环形体,并且所述目标物体位于所述球体或环形体内的预设位置,以使所述目标物体反射给各个相机的光线的相似度高于预设相似度。
  14. 一种多相机系统校准装置,包括:
    图像获取部分,被配置为获取多相机系统中,至少部分相机同时拍摄目标物体所得到的多张图像;
    建模部分,被配置为基于所述多张图像,构建所述目标物体的点云模型;
    调整部分,被配置为基于所述点云模型和所述目标物体之间的差异信息,对所述多相机系统中的至少一个相机进行相机参数调整。
  15. 一种多相机系统校准系统,包括多个相机、相机调参部件以及目标物体;所述目标物体设置在所述多个相机形成的目标拍摄视野内;
    所述多个相机中的至少部分相机用于同时拍摄所述目标物体的图像,并将拍摄得到的图像发送给所述相机调参部件;
    所述相机调参部件用于按照权利要求1至13任一项所述的多相机系统校准方法对所述多个相机中的至少一个相机进行参数调整。
  16. 一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行权利要求1至13任一项所述的多相机系统校准方法的步骤。
  17. 一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行权利要求1至13任一项所述的多相机系统校准方法的步骤。
  18. 一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在电子设备上运行的情况下,使得所述电子设备执行权利要求1至13中任一项所述的多相机系统校准方法的步骤。
PCT/CN2022/117908 2021-12-28 2022-09-08 多相机系统校准方法、装置、系统、电子设备、存储介质及计算机程序产品 WO2023124223A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111624552.6A CN114299158A (zh) 2021-12-28 2021-12-28 多相机系统校准方法、装置、系统、电子设备及存储介质
CN202111624552.6 2021-12-28

Publications (1)

Publication Number Publication Date
WO2023124223A1 true WO2023124223A1 (zh) 2023-07-06

Family

ID=80972584


Country Status (2)

Country Link
CN (1) CN114299158A (zh)
WO (1) WO2023124223A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299158A (zh) * 2021-12-28 2022-04-08 北京市商汤科技开发有限公司 多相机系统校准方法、装置、系统、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293532A1 (en) * 2012-05-04 2013-11-07 Qualcomm Incorporated Segmentation of 3d point clouds for dense 3d modeling
CN108171758A (zh) * 2018-01-16 2018-06-15 重庆邮电大学 基于最小光程原理和透明玻璃标定板的多相机标定方法
CN109116397A (zh) * 2018-07-25 2019-01-01 吉林大学 一种车载多相机视觉定位方法、装置、设备及存储介质
CN114299158A (zh) * 2021-12-28 2022-04-08 北京市商汤科技开发有限公司 多相机系统校准方法、装置、系统、电子设备及存储介质


Also Published As

Publication number Publication date
CN114299158A (zh) 2022-04-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913496

Country of ref document: EP

Kind code of ref document: A1