CN114022570B - Method for calibrating external parameters between cameras and electronic equipment - Google Patents

Method for calibrating external parameters between cameras and electronic equipment

Info

Publication number
CN114022570B
CN114022570B CN202210005580.8A
Authority
CN
China
Prior art keywords
camera
image
group
electronic device
image group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210005580.8A
Other languages
Chinese (zh)
Other versions
CN114022570A (en)
Inventor
刘小伟
陈兵
周俊伟
王国毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Glory Smart Technology Development Co ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210005580.8A priority Critical patent/CN114022570B/en
Publication of CN114022570A publication Critical patent/CN114022570A/en
Application granted granted Critical
Publication of CN114022570B publication Critical patent/CN114022570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application provides a method for calibrating external parameters between cameras and an electronic device. The method for calibrating the external reference between cameras provided by the embodiment of the application comprises the following steps: first determining a common-view image in the images acquired by two cameras, then determining the coordinates of the common-view image in a world coordinate system, then determining an initial value of the inter-camera external parameter of the two cameras, then establishing mapping relationships of the common-view image in the pixel coordinate systems or image coordinate systems of the different cameras, and finally determining a final value of the inter-camera external parameter based on the mapping relationships and the initial value. The calibration method for the external reference between cameras provided by the embodiment of the application can calibrate the inter-camera external parameters of two or more cameras without knowing the internal parameters of the cameras in advance and without the help of a specific calibration instrument.

Description

Method for calibrating external parameters between cameras and electronic equipment
Technical Field
The present disclosure relates to the field of machine vision, and in particular, to a method for calibrating an external parameter between cameras and an electronic device.
Background
In the field of machine vision and the like, before determining the real distance between any two pixels in an image, an electronic device needs to determine the mapping relationship between the three-dimensional geometric position of a point on the surface of a spatial object and the corresponding point of the spatial object in the image; this mapping relationship can be called a camera parameter. The camera parameters comprise camera internal parameters and camera external parameters.
In the case of an electronic device with two or more cameras, simultaneous localization and mapping (SLAM) can be performed by using images captured by the two or more cameras. Because the poses (positions and orientations) of different cameras in real space are different, the inter-camera external parameters of the two or more cameras need to be determined before the mapping relationship between images shot by different cameras can be determined; only then can the images shot by different cameras be processed jointly to achieve localization and mapping. Here, the inter-camera external parameter between the two or more cameras refers to the conversion relationship between the camera coordinate systems of any two cameras among the two or more cameras.
In order to obtain the camera-to-camera external parameters of different cameras, one possible method is: the method comprises the steps of directly determining internal references and external references of a camera 1 and a camera 2 through reference objects such as a calibration plate and a calibration rod, then determining a mapping relation between the camera 1 and the camera 2 by comparing an image of the reference object acquired by the camera 1 with an image of the reference object acquired by the camera 2, and finally directly solving to obtain the external references between the camera 1 and the camera 2 through the mapping relation, the internal references and the external references of the camera 1 and the internal references and the external references of the camera 2.
However, the method needs to depend on specific and precise auxiliary instruments, such as a calibration plate or a calibration rod. A consumer cannot calibrate the external parameters between the cameras in a simple and convenient way, which reduces the accuracy and capability of localization and mapping of the electronic equipment and in turn affects the user experience.
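As an illustration of the final solving step in the reference-object method above, once the world-to-camera transforms of the two cameras are known, the inter-camera external parameter can be obtained by composing them. The following is only a minimal sketch under assumed conventions; the 4x4 homogeneous world-to-camera matrices, the use of NumPy, and all names are illustrative and not part of the patent:

```python
import numpy as np

def inter_camera_extrinsic(T_w2c1: np.ndarray, T_w2c2: np.ndarray) -> np.ndarray:
    """Compose the camera-1-to-camera-2 extrinsic from two world-to-camera transforms.

    A world point X_w maps to X_c1 = T_w2c1 @ X_w and X_c2 = T_w2c2 @ X_w,
    so X_c2 = (T_w2c2 @ inv(T_w2c1)) @ X_c1.
    """
    return T_w2c2 @ np.linalg.inv(T_w2c1)

# Example with made-up poses: camera 2 sees the world shifted by -10 cm along x.
T1 = np.eye(4)
T2 = np.eye(4)
T2[0, 3] = -0.10
print(inter_camera_extrinsic(T1, T2))
```

In this formulation, the result directly expresses the relative pose of the two cameras, which is what the reference-object method recovers from the individually calibrated internal and external parameters.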
Disclosure of Invention
The embodiment of the application provides a method for calibrating external parameters between cameras and an electronic device. The method for calibrating the external reference between cameras provided by the embodiment of the application comprises the following steps: first determining a common-view image in the images acquired by two cameras, then determining the coordinates of the common-view image in a world coordinate system, then determining an initial value of the inter-camera external parameter of the two cameras, then establishing mapping relationships of the common-view image in the pixel coordinate systems or image coordinate systems of the different cameras, and finally determining a final value of the inter-camera external parameter based on the mapping relationships and the initial value. The calibration method for the external reference between cameras provided by the embodiment of the application can calibrate the inter-camera external parameters of two or more cameras without knowing the internal parameters of the cameras in advance and without relying on a specific calibration instrument.
In a first aspect, an embodiment of the present application provides a method for calibrating an external parameter between cameras, which is applied to an electronic device including a first camera and a second camera, and includes: executing a first motion, or displaying a first notification, wherein the first notification is used for reminding a user of executing the first motion on the electronic equipment; in the first movement process, acquiring a first image group and a second image group which comprise common-view images, wherein the first image group is an image group acquired by the first camera, the second image group is an image group acquired by the second camera, and the relative poses of the first camera and the second camera are unchanged; determining the coordinates of the common view image under a world coordinate system according to the first image group or the second image group; determining a first external parameter group according to the first image group, and determining a second external parameter group according to the second image group, wherein the first external parameter group comprises the camera external parameters of the first camera when acquiring the images in the first image group, and the second external parameter group comprises the camera external parameters of the second camera when acquiring the images in the second image group; determining a first parameter according to the coordinate of the common view image in a world coordinate system, the first external parameter group and the second external parameter group, wherein the first parameter is an initial value of the external parameter between the cameras of the first camera and the second camera; and establishing a mapping relation of the common image in the first image group and the second image group, and determining a second parameter according to the mapping relation and the first parameter, wherein the second parameter is a final value of the external reference between the cameras of the first camera and the second camera.
In the above embodiments, the inter-camera external references of two or more cameras can be calibrated without knowing the internal references of the cameras in advance and without depending on the help of a specific calibration instrument, thereby providing a basis for jointly processing images of multiple cameras.
In some embodiments, in combination with some embodiments of the first aspect, it is determined that the first camera and the second camera do not have a common view area, the first motion being related to a relative pose of the first camera and the second camera, the first motion being used to cause a plurality of image pairs having the common view image to be present in the first image group and the second image group; the determining the coordinates of the common view image in the world coordinate system according to the first image group or according to the second image group specifically includes: determining a plurality of image pairs with the common-view image in the first image group and the second image group, and determining the coordinate of the common-view image in a world coordinate system according to the plurality of image pairs; the establishing of the mapping relationship between the first image group and the common image in the second image group specifically includes: and establishing a mapping relation of the common-view image in the plurality of image pairs.
In the above-described embodiment, in the case where the first camera and the second camera do not have a common view region, it is necessary to make a plurality of image pairs having a common view image exist in the first image group and the second image group by the first motion. Furthermore, since the plurality of image pairs having the common view image are obtained only by performing the first motion, a part of images in the first image group and the second image group do not belong to the plurality of image pairs, and thus when the mapping relationship of the common view image in the plurality of image pairs is established, the mapping relationship may be established only for the common view image in the image pairs.
In combination with some embodiments of the first aspect, in some embodiments, it is determined that the first camera and the second camera have a common view area; the determining the coordinates of the common view image in the world coordinate system according to the first image group or according to the second image group specifically includes: determining a plurality of image pairs with the common view image in the first image group and the second image group, and determining the coordinate of the common view image in a world coordinate system according to the plurality of image pairs; the establishing of the mapping relationship between the first image group and the common image in the second image group specifically includes: and establishing a mapping relation of the common-view image in the plurality of image pairs.
In the above-described embodiment, in the case where the first camera and the second camera have a common-view area, theoretically, there is a common-view image in every image of the first image group and the second image group. However, considering the influence of the actual visible region, some of the images in the first image group and the second image group may not belong to the plurality of image pairs, so when the mapping relationship of the common-view images in the plurality of image pairs is established, the mapping relationship may be established only for the common-view images in those image pairs.
In combination with some embodiments of the first aspect, in some embodiments, before the performing the first motion or displaying the first notification, the method further comprises: and in the case that the image acquired by the first camera does not meet the first condition, turning on the second camera, and acquiring the image through the first camera and the second camera.
In the above embodiment, when the image acquired by the first camera does not satisfy the first condition, the second camera may be started to acquire the image, and the image is processed in a combined manner to obtain a more accurate positioning and mapping result.
In combination with some embodiments of the first aspect, in some embodiments, the first condition includes: the texture degree of the image is lower than the texture threshold, the number of the dynamic objects in the image is more than the dynamic object number threshold, and the area of the dynamic objects in the image is more than the dynamic object area threshold.
In the above embodiment, it is difficult to directly determine whether the positioning and mapping result is accurate, so it can be determined whether the image will cause the inaccurate positioning and mapping result according to the fact that the texture degree of the image is lower than the texture threshold, the number of dynamic objects in the image is greater than the threshold of the number of dynamic objects, the area of dynamic objects in the image is greater than the threshold of the area of dynamic objects, and the like.
In some embodiments, in combination with some embodiments of the first aspect, the plurality of image pairs includes a first image pair including a first image and a second image, the first image belonging to a first image group and the second image belonging to a second image group, a confidence of the first image pair being greater than a confidence threshold, the confidence describing rotational invariance of a common view image in the image pair.
In the above embodiment, when the rotation invariance of the common-view image of the image 1 and the image 2 is poor, letting the data of the image 1 and the image 2 participate in the calculation can make the finally determined inter-camera external parameters hard to converge to the vicinity of the optimal value. Therefore, the image pairs can be screened, which further improves the convergence speed.
In a second aspect, an embodiment of the present application provides a method for calibrating an external parameter between cameras, applied to a system including a first electronic device and a second electronic device, where the first electronic device is configured with a first camera and the second electronic device is configured with a second camera, the method including: the first electronic device and the second electronic device execute a first motion, or the first electronic device and the second electronic device display a first notification, wherein the first notification is used for reminding a user of executing the first motion on the first electronic device and the second electronic device; in the first motion process, the first electronic device acquires a first image group through the first camera, and the second electronic device acquires a second image group through the second camera; in the first motion process, the relative poses of the first camera and the second camera are unchanged; the first electronic device determines the coordinates of the common-view image in a world coordinate system according to the first image group or according to the second image group, determines a first external parameter group according to the first image group, and determines a second external parameter group according to the second image group, wherein the first external parameter group comprises the camera external parameters of the first camera when acquiring the images in the first image group, the second external parameter group comprises the camera external parameters of the second camera when acquiring the images in the second image group, and the first electronic device and the second electronic device have the same world coordinate system; the first electronic device determines a first parameter according to the coordinates of the common-view image in the world coordinate system, the first external parameter group and the second external parameter group, wherein the first parameter is an initial value of the inter-camera external parameter of the first camera and the second camera; the first electronic device establishes a mapping relationship of the common-view image in the first image group and the second image group, and determines a second parameter according to the mapping relationship and the first parameter, wherein the second parameter is a final value of the inter-camera external parameter of the first camera and the second camera.
In the above-described embodiments, the inter-camera external references of two or more cameras can be calibrated without knowing the internal references of the cameras in advance and without relying on the help of a specific calibration instrument, thereby providing a basis for jointly processing images of multiple cameras.
In combination with some embodiments of the second aspect, in some embodiments, before the first electronic device and the second electronic device perform the first motion or the first electronic device and the second electronic device display the first notification, the method further comprises: the first electronic device determines that an image acquired by the first camera does not meet a first condition; the first condition includes: the texture degree of the image is lower than a texture threshold, the number of dynamic objects in the image is greater than a dynamic object number threshold, and the area of dynamic objects in the image is greater than a dynamic object area threshold; the first electronic device sends a request to the second electronic device to acquire an image of the second camera.
In the above embodiment, when the first electronic device cannot obtain an accurate positioning and mapping result based on an image acquired by the camera of the first electronic device, the first electronic device may request the second electronic device to acquire an image acquired by a camera on the second electronic device, thereby providing a basis for jointly processing the image to determine the accurate positioning and mapping result.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors and memory; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform: executing a first motion or displaying a first notification, wherein the first notification is used for reminding a user of executing the first motion on the electronic equipment; in the first movement process, acquiring a first image group and a second image group which comprise common-view images, wherein the first image group is an image group acquired by the first camera, the second image group is an image group acquired by the second camera, and the relative poses of the first camera and the second camera are unchanged; determining the coordinates of the common view image under a world coordinate system according to the first image group or the second image group; determining a first external parameter group according to the first image group, and determining a second external parameter group according to the second image group, wherein the first external parameter group comprises the camera external parameters of the first camera when acquiring the images in the first image group, and the second external parameter group comprises the camera external parameters of the second camera when acquiring the images in the second image group; determining a first parameter according to the coordinate of the common-view image in a world coordinate system, the first external parameter group and the second external parameter group, wherein the first parameter is an initial value of the external parameter between the cameras of the first camera and the second camera; and establishing a mapping relation of the common image in the first image group and the second image group, and determining a second parameter according to the mapping relation and the first parameter, wherein the second parameter is a final value of the external reference between the cameras of the first camera and the second camera.
In the above-described embodiments, the inter-camera external references of two or more cameras can be calibrated without knowing the internal references of the cameras in advance and without relying on the help of a specific calibration instrument.
In some embodiments, in combination with some embodiments of the third aspect, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining that the first camera and the second camera do not have a common view area, the first motion being related to a relative pose of the first camera and the second camera, the first motion being used to cause a plurality of image pairs having the common view image to exist in the first image group and the second image group; determining a plurality of image pairs with the common-view image in the first image group and the second image group, and determining the coordinate of the common-view image in a world coordinate system according to the plurality of image pairs; and establishing a mapping relation of the common-view image in the plurality of image pairs.
With reference to some embodiments of the third aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining the coordinates of the common-view image in a world coordinate system according to the first image group or the second image group; determining a plurality of image pairs having the common-view image in the first image group and the second image group, and determining the coordinates of the common-view image in the world coordinate system according to the plurality of image pairs; and establishing a mapping relation of the common-view image in the plurality of image pairs.
In some embodiments, in combination with some embodiments of the third aspect, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: and in the case that the image acquired by the first camera does not meet the first condition, turning on the second camera, and acquiring the image through the first camera and the second camera.
In combination with some embodiments of the third aspect, in some embodiments, the first condition includes: the texture degree of the image is lower than the texture threshold, the number of the dynamic objects in the image is more than the dynamic object number threshold, and the area of the dynamic objects in the image is more than the dynamic object area threshold.
In some embodiments in combination with some embodiments of the third aspect, the plurality of image pairs includes a first image pair including a first image and a second image, the first image belonging to a first image group and the second image belonging to a second image group, the first image pair having a confidence level greater than a confidence threshold, the confidence level describing rotational invariance of co-view images in the image pair.
In a fourth aspect, an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device, and the chip system includes one or more processors, and the processor is configured to invoke computer instructions to cause the electronic device to perform a method as described in the first aspect and any possible implementation manner of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product including instructions, which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation manner of the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions, when the instructions are executed on an electronic device, the electronic device is caused to perform the method described in the first aspect and any possible implementation manner of the first aspect.
It is to be understood that the electronic device provided by the third aspect, the chip system provided by the fourth aspect, the computer program product provided by the fifth aspect, and the computer-readable storage medium provided by the sixth aspect are all configured to execute the methods provided by the embodiments of the present application. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding methods, which are not described herein again.
Drawings
Fig. 1A is an exemplary schematic diagram of an influence of a positioning result and a mapping result on an electronic device according to an embodiment of the present application.
Fig. 1B and fig. 1C are another exemplary schematic diagrams of the influence of the positioning result and the mapping result on the electronic device according to the embodiment of the present application.
Fig. 2A and fig. 2B are schematic diagrams of an exemplary method for improving simultaneous localization and mapping by using multiple images according to an embodiment of the present application.
Fig. 3 is an exemplary schematic diagram of a flow of a calibration method for an external parameter between cameras according to an embodiment of the present application.
Fig. 4A is an exemplary schematic diagram of a common view image provided by the present application.
Fig. 4B is another exemplary diagram of a common view image provided by the present application.
Fig. 5 is an exemplary diagram of a first motion provided by an embodiment of the present application.
Fig. 6 is an exemplary schematic view of an interface for an electronic device to prompt a user to make a first motion to the electronic device according to an embodiment of the present application.
Fig. 7 is an exemplary schematic diagram of a process of solving coordinates of an object corresponding to a co-view image in a world coordinate system by an algebraic method according to an embodiment of the present application.
Fig. 8 is an exemplary schematic diagram of rotation of an electronic device according to an embodiment of the present application.
Fig. 9 is an exemplary diagram of image pair confidence detection provided by the embodiments of the present application.
Fig. 10 is an exemplary schematic diagram of a relationship between an initial value of an external parameter and a mapping error between cameras according to an embodiment of the present application.
Fig. 11 is an exemplary schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present application.
Fig. 12A is an exemplary schematic diagram of an electronic device software architecture provided in an embodiment of the present application.
FIG. 12B is an exemplary diagram of data flow for implementing the calibration method for camera-to-camera external references under the software architecture shown in FIG. 12A.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, "a plurality" means two or more unless indicated otherwise.
The term "user interface (UI)" in the following embodiments of the present application is a media interface for interaction and information exchange between an application program or an operating system and a user, and it implements the conversion between an internal form of information and a form acceptable to the user. The user interface is source code written in specific computer languages such as Java and the extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content that can be recognized by the user. A commonly used presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface related to computer operations and displayed in a graphical manner. It may be a visual interface element such as text, an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, or a Widget displayed in the display of the electronic device.
When the electronic device provides an Augmented Reality (AR) function, a Virtual Reality (VR) function, and other functions that require synchronous positioning and mapping of an environment, the electronic device needs to determine camera internal parameters and camera external parameters of a camera or other devices capable of acquiring images.
When the synchronous positioning and mapping result is inaccurate, the inaccurate positioning result and mapping result may affect the work of the electronic device. For example, when the electronic device is a sweeping robot, if the electronic device does not accurately locate itself or the mapping of the home area is not accurate, the home area cannot be completely swept as shown in fig. 1A; for another example, when the electronic device is a mobile phone, a projector, VR glasses, or the like, and the map of the environment of the electronic device is not accurately created, the images in the AR and VR functions of the electronic device are distorted as shown in fig. 1B and 1C.
Fig. 1A is an exemplary schematic diagram of an influence of a positioning result and a mapping result on an electronic device according to an embodiment of the present application.
As shown in fig. 1A, when the electronic device is a sweeping robot, the electronic device needs to map the area to be cleaned and locate its own position in the area to be cleaned, so as to complete the cleaning task. When the positioning and mapping results of the electronic device are not accurate enough, for example, when the mapping result of the electronic device covers only a sub-area of the home area, the electronic device will only sweep that sub-area and leave part of the area unswept. Alternatively, the mapping result of the electronic device is accurate but the positioning result is not, which causes similar problems.
Fig. 1B and fig. 1C are another exemplary schematic diagrams of the influence of the positioning result and the mapping result on the electronic device according to the embodiment of the present application.
As shown in fig. 1B, the electronic apparatus is photographing a solid rectangle, and an image 1 positioned on the solid rectangle is displayed on the screen by the AR function. Wherein, the image 1 is generated by the electronic device through the AR function. Since the AR function of the electronic device depends on the positioning result and the mapping result of the electronic device, when the positioning result or the mapping result has an error, the image 1 generated by the electronic device may be shifted and deformed, as shown in fig. 1C.
In the above scenarios as shown in fig. 1A, 1B, and 1C, the accuracy of positioning and mapping greatly affects whether the electronic device can accurately perform the machine vision-based function.
Fig. 2A and fig. 2B are schematic diagrams of an exemplary method for improving simultaneous localization and mapping by using multiple images according to an embodiment of the present application.
As shown in fig. 2A, when the electronic device performs simultaneous localization and mapping based on the images obtained by the camera 1, such as the image 11 in fig. 2A, and those images have weak texture or many dynamic objects, the localization and mapping results are poor.
In this case, the electronic device may turn on the camera 2 or the camera 3, and perform joint simultaneous positioning and mapping according to the camera external parameters of different cameras and the images acquired by different cameras, so as to improve the precision of simultaneous positioning and mapping.
For example, in fig. 2A, camera 2 and camera 1 may be different rear cameras of a cell phone. After determining the camera external parameters of the camera 1 and the camera 2, the electronic device may perform joint processing on the image acquired by the camera 1 and the image acquired by the camera 2, for example, the image 12 and the image 21 in fig. 2A, and further perform joint simultaneous positioning and mapping, thereby improving the precision of simultaneous positioning and mapping.
Similarly, in fig. 2B, after determining the camera external parameters of the camera 1 and the camera 3, the image acquired by the camera 1 and the image acquired by the camera 3, for example, the image 12 and the image 31 in fig. 2B, may be processed jointly, so as to perform joint simultaneous positioning and mapping, thereby improving the precision of simultaneous positioning and mapping.
It is understood that when multiple cameras are provided on the electronic device, images acquired by the multiple cameras can be jointly processed to more accurately locate and map the environment; or, when the electronic device has a plurality of cameras, and the sight line of the working camera 1 in the plurality of cameras is shielded, the other cameras may be turned on to continue to acquire images, and then continue to execute SLAM; or, in the case that the image texture captured by the working camera 1 among the plurality of cameras is weak and there are many dynamic objects, the other cameras may be turned on to continue to acquire images, and then continue to perform positioning and mapping.
However, whether the images acquired by multiple cameras are processed jointly or the operating cameras are switched to continue to perform simultaneous positioning and mapping, the inter-camera parameters of the different cameras need to be determined. Wherein the camera-to-camera external parameters of different cameras are determined by the relative poses of the different cameras.
The embodiment of the application provides a method for calibrating inter-camera external parameters, which can quickly and accurately calibrate the inter-camera external parameters of different cameras.
The following describes an exemplary calibration method for external references between cameras and an electronic device according to an embodiment of the present application.
Firstly, the calibration method of the camera external reference provided by the embodiment of the application needs to determine a common-view image; secondly, coordinates of the common-view image under a world coordinate system are obtained through three-dimensional reconstruction, a mapping relation of the common-view image among camera coordinate systems of different cameras is established, and initial values of external parameters among the cameras are further obtained; and finally, establishing a mapping relation of the common-view image in pixel coordinate systems/image coordinate systems of different cameras, and optimizing to obtain accurate camera external parameters on the basis of initial values of the camera external parameters by minimizing mapping errors.
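To make the last step concrete, the sketch below shows one way the final optimization could look: starting from the initial value of the inter-camera external parameter, iteratively minimize the error of mapping the common-view points into the other camera's pixel coordinate system. This is only an illustrative least-squares formulation; the pinhole projection model, the use of SciPy's optimizer, and all names here are assumptions and not the patent's prescribed implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsic(rvec0, tvec0, X_c1, x_px2, K2):
    """Refine the camera-1-to-camera-2 extrinsic (rotation vector, translation).

    X_c1 : (N, 3) common-view points expressed in camera 1's coordinate system.
    x_px2: (N, 2) observed pixel coordinates of the same points in camera 2.
    K2   : (3, 3) intrinsic matrix of camera 2 (e.g., estimated during SFM).
    """
    def residual(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        X_c2 = X_c1 @ R.T + t              # map points into camera 2's frame
        proj = X_c2 @ K2.T                 # pinhole projection
        proj = proj[:, :2] / proj[:, 2:3]
        return (proj - x_px2).ravel()      # mapping (re-projection) error

    x0 = np.concatenate([rvec0, tvec0])
    res = least_squares(residual, x0)      # starts from the initial extrinsic
    return res.x[:3], res.x[3:]
```

With a reasonable initial value, such a local optimizer converges to the vicinity of the optimum, which is why the method first estimates an initial inter-camera external parameter before minimizing the mapping error.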
First, the calibration method for the inter-camera external reference provided by the embodiment of the application does not need special instruments, such as a calibration plate or a calibration rod; second, the calibration method for the external parameters between cameras provided by the embodiment of the application does not need to determine the relative poses of the plurality of cameras in advance, and does not need to know the internal parameters of the different cameras in advance.
The method flow of the calibration method of the external reference between cameras is described in the following with reference to the content shown in fig. 3.
Fig. 3 is an exemplary schematic diagram of a flow of a calibration method for an external reference between cameras according to an embodiment of the present application.
The method for calibrating the external reference between the cameras provided by the embodiment of the application comprises the following steps:
s301: optionally, it is determined that the image captured by the first camera does not satisfy the first condition.
Optionally, in some embodiments of the present application, in the process of performing simultaneous localization and mapping by the electronic device, and in the case that the electronic device performs simultaneous localization and mapping based on the images captured by the first camera, before the electronic device performs simultaneous localization and mapping processing on each frame of image, it is determined whether the image captured by the first camera satisfies the first condition.
When the image acquired by the first camera meets a first condition, the image acquired by the first camera can support the electronic equipment to perform simultaneous positioning and mapping; when the image acquired by the first camera does not meet the first condition, the image acquired by the first camera does not support the electronic equipment to perform simultaneous positioning and mapping, or the result obtained by the electronic equipment performing simultaneous positioning and mapping has lower precision.
Specifically, the first condition may include one or more of the following: (a) the number of dynamic objects in the image is greater than a threshold; (b) the range of the dynamic object in the image is larger than a threshold value; (c) the clarity of the texture of the image is below a threshold; (d) the brightness of the image is below a threshold, and so on.
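A rough sketch of how the criteria (a)-(d) might be combined is given below. The texture measure (variance of the Laplacian), the brightness measure and all thresholds are illustrative assumptions, and the dynamic-object statistics are assumed to come from a separate detector; none of this is prescribed by the patent.

```python
import cv2
import numpy as np

def image_quality_check(gray: np.ndarray, num_dynamic_objects: int,
                        dynamic_area_ratio: float) -> bool:
    """Return True if the image can support simultaneous localization and mapping.

    gray: single-channel image from the first camera.
    num_dynamic_objects / dynamic_area_ratio: output of some separate
    dynamic-object detector (not implemented here).
    All thresholds below are illustrative placeholders.
    """
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()  # proxy for texture clarity (c)
    brightness = float(gray.mean())                  # proxy for brightness (d)
    if num_dynamic_objects > 5:       # (a) too many dynamic objects
        return False
    if dynamic_area_ratio > 0.4:      # (b) dynamic objects cover too much area
        return False
    if texture < 50.0:                # (c) texture clarity below threshold
        return False
    if brightness < 30.0:             # (d) image too dark
        return False
    return True
```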
S302: and starting the second camera to acquire the image acquired by the second camera.
The electronic device starts the second camera and acquires the image acquired by the second camera.
Optionally, in some embodiments of the present application, the second camera may be located on another electronic device. For example, a communication connection, such as a Bluetooth connection or a Wi-Fi connection, is established between the electronic device 1 and the electronic device 2, the first camera is located on the electronic device 1, and the second camera is located on the electronic device 2.
It is worth noting that the relative poses of the first camera and the second camera do not change when step S302 and the subsequent steps are performed. Meanwhile, the relative poses of the first camera and the second camera may be changed before performing step S302.
It is worth mentioning that when the first camera and the second camera are on different electronic devices, the relative poses of the first camera and the second camera do not change when step S302 and the subsequent steps are performed. When the first camera and the second camera are fixed relative to the respective electronic devices, the relative poses of the first camera and the second camera can be kept unchanged by ensuring that the relative poses of the first electronic device and the second electronic device are not changed.
Optionally, in some embodiments of the present application, the internal reference of the first camera may be the same as or different from the internal reference of the second camera.
Optionally, in some embodiments of the present application, the first camera may be of the same or different camera type than the second camera. Wherein the camera types may be divided differently from different angles. For example, the camera type may be tele, macro, etc.; also for example, the camera type may be black and white, infrared, lidar.
In the illustration of the present application, the camera 1 is referred to as a first camera, and the camera 2 is referred to as a second camera.
S303: it is determined whether a common view image is retrieved.
Determine whether a common-view image is retrieved; if yes, perform step S305; if not, perform step S304.
If the first camera and the second camera have a common-view area, a common-view image can generally be retrieved; if the first camera and the second camera do not have a common-view area, a common-view image generally cannot be retrieved. For example, when the electronic device is a mobile phone, the multiple rear cameras of the mobile phone may be a first camera and a second camera having a common-view area; for another example, when the electronic device is a mobile phone, the front camera of the mobile phone and the rear camera of the mobile phone may be a first camera and a second camera that do not have a common-view area.
The common view image is a group of images corresponding to the same object in the images captured by different cameras, as shown in fig. 4A and 4B.
Fig. 4A is an exemplary schematic diagram of a common view image provided by the present application.
As shown in fig. 4A, the camera 1 and the camera 2 capture the object 1 from different poses. If there are some identical or similar image contents in the images obtained by the camera 1 and the camera 2, those identical or similar contents are a common-view image. Here, "similar" means that the image contents are the same after translation and/or rotation and/or brightness transformation.
Optionally, in some embodiments of the present application, Scale-invariant feature transform (SIFT) extraction may be performed on images acquired by different cameras, and then SIFT features of the images acquired by the different cameras are compared, so as to determine whether the two images have a common-view image, and determine content included in the common-view image.
It is understood that since the internal parameters of different cameras may be different and the poses of different cameras are different, the content of the same object in the pixel coordinate system or the image coordinate system of different cameras may not be completely the same, in which case many features such as SIFT features may be utilized to determine the co-view image and the content included in the co-view image.
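As an illustration of this retrieval, a common implementation is SIFT feature matching with a ratio test, for example with OpenCV as sketched below; the specific matcher, ratio threshold and minimum match count are assumptions and are not mandated by the patent.

```python
import cv2

def find_common_view(img1, img2, min_matches: int = 20):
    """Retrieve SIFT correspondences between two images; None if no common view."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        # Lowe's ratio test keeps only distinctive correspondences.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_matches:
        return None  # no common-view image retrieved
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```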
Optionally, when retrieving the common-view image, only images acquired by different cameras at the same time may be retrieved.
Fig. 4B is another exemplary diagram of a common view image provided by the present application.
As shown in fig. 4B, in the case where the poses of the camera 1 and the camera 2 are different, comparing the image 11 captured by the camera 1 and the image 21 captured by the camera 2 at the same time, it is possible to determine whether or not there is a common view image and the content included in the common view image.
S304: the electronic device executes the first motion, or the electronic device is reminded of the user to do the first motion to the electronic device.
In step S303, after determining that the common view image is not retrieved, the electronic device makes a first motion, or the electronic device prompts the user to make the first motion to the electronic device. In the embodiment of the application, during the first motion, the shooting ranges of the first camera and the second camera are overlapped, that is, during the first motion, the first camera can acquire a first group of images, the second camera can acquire a second group of images, and the first group of images and the second group of images have a common-view image.
It is worth noting that, unlike in step S303, the common-view images in the first set of images and the second set of images may appear at different times. For example, if the shutter speed of the first camera is the same as the shutter speed of the second camera, the first group of images has N images, and the second group of images has N images, the common-view image may appear in the I-th image of the first group and in the K-th image of the second group, where I is smaller than N, K is smaller than N, and I is not equal to K.
Alternatively, the shutter speed of the first camera and the shutter speed of the second camera may be different.
The first motion includes rotation, which may be in-situ rotation or rotation around a certain point as a circle center along a circumference, as shown in fig. 5.
Fig. 5 is an exemplary diagram of a first motion provided by an embodiment of the present application.
For example, as shown in fig. 5, the coordinates of the electronic device may be represented by the coordinates of the camera 1, the coordinates of the camera 1 being (x0, y0, z0). The first motion may be that the electronic device rotates around x = x0 and y = y0, and the first motion may also be that the electronic device rotates around x = 0 and y = 0.
Alternatively, the electronic device may prompt the user to make a first motion to the electronic device, where the prompt may be as shown in fig. 6.
Fig. 6 is an exemplary schematic view of an interface for an electronic device to prompt a user to make a first motion to the electronic device according to an embodiment of the present application.
As shown in fig. 6, the electronic device displays a prompt control 601 on the interface of the camera, where the prompt control 601 is used to prompt the user to make a first motion to the electronic device. For example, the prompt control displays "please rotate the mobile phone".
It should be noted that fig. 6 is only an exemplary schematic diagram of an interface for the electronic device to prompt the user to make a first motion to the electronic device, and does not make any limitation on the shape of the prompt control and the content of the prompt.
It is worth mentioning that the manner of the first motion is related to the relative pose of the first camera and the second camera. For example, when the first camera is a rear camera of a mobile phone and the second camera is a front camera of the mobile phone, the first motion may be any of the motion patterns shown in fig. 5.
In the embodiment of the application, in the first motion process, the relative poses of the first camera and the second camera are not changed. And if the relative poses of the first camera and the second camera change, re-executing the step S302 and the step S303.
S305: the coordinates of the co-view image in the world coordinate system are determined.
The electronic device can perform three-dimensional Reconstruction (3D Reconstruction) on the object corresponding to the common-view image according to a plurality of images including the common-view image, and acquire the coordinates of the object corresponding to the common-view image in a world coordinate system and the external parameters of the first camera and the second camera in different poses.
Optionally, in some embodiments of the present application, in the process of performing step S305, the internal reference of the camera may be determined, or may not be determined.
The world coordinate system is a coordinate system established by the electronic equipment and used for describing a relative position relation between different objects in a real space.
The following describes, by way of example, how the coordinates of the object corresponding to the common-view image in the world coordinate system are determined through three-dimensional reconstruction, with the cases classified according to whether the internal reference and/or the external reference of the camera are known:
Case one: the internal reference and the external reference of the first camera are known.
Let the internal reference of the first camera be $K$ and the external reference of the first camera in the first pose be $T_1$; the projection matrix then satisfies $P = KT$, so the projection matrix of the first camera in the first pose is $P_1 = K T_1$. Also, the coordinates $x_{ij}$ of the common-view image in the image coordinate system or the pixel coordinate system are known. Let the coordinate of the object corresponding to the common-view image in the world coordinate system be $X_j$. Then the relationship between $X_j$ and $x_{ij}$ is as in formula (1):

$x_{ij} = P_i X_j = K T_i X_j$  (1)

where $i$ indexes the cameras in different poses; $j$ indexes different points in the world coordinate system; $x_{ij}$ is the coordinate of the common-view image in the image acquired by the camera in pose $i$; $P_i$ is the projection matrix of the camera in pose $i$; and $T_i$ is the external reference of the camera in pose $i$.

Further, formula (2) is determined as follows:

$X_j = \arg\min_{X} \sum_i \left\| x_{ij} - P_i X \right\|^2$  (2)

According to formula (2), the electronic device may determine the coordinates of the object corresponding to the common-view image in the world coordinate system from the projection matrices and the coordinates of the common-view image in the image coordinate system or the pixel coordinate system.
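For illustration only, the triangulation implied by formulas (1) and (2) can be sketched as follows, assuming two known 3x4 projection matrices and the matched pixel coordinates of a common-view point; the use of OpenCV's linear (DLT) triangulation is an assumption, not the patent's required method.

```python
import numpy as np
import cv2

def triangulate_point(P1, P2, x1, x2):
    """Recover X_j from formula (1): x_ij ~ P_i X_j, given two views.

    P1, P2: 3x4 projection matrices of the camera in two poses.
    x1, x2: pixel coordinates (u, v) of the common-view point in each image.
    """
    X_h = cv2.triangulatePoints(P1, P2,
                                np.asarray(x1, dtype=float).reshape(2, 1),
                                np.asarray(x2, dtype=float).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()  # homogeneous -> Euclidean world coordinates
```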
Case two: the internal and external parameters of the first camera are unknown.
In the case that the internal reference and the external reference of the first camera are unknown, the coordinates of the object corresponding to the common-view image in the world coordinate system may be determined by various methods such as structure from motion (SFM) or deep learning, which is not limited herein. Alternatively, the internal reference and the external reference of the camera may be calibrated first, and the coordinates of the object corresponding to the common-view image in the world coordinate system may then be determined according to the first case described above.
In the following, mainly taking SFM as an example, a method for determining the coordinates of the object corresponding to the common-view image in the world coordinate system is described.
For example, the fundamental matrix F may be solved by an algebraic method, and then the internal reference and the external reference of the camera may be estimated, or the projection matrix of the camera may be directly estimated, and then the coordinates of the object corresponding to the common view image in the world coordinate system may be obtained by minimizing the projection error in the pixel coordinate system or the image coordinate system. The process of solving the coordinates of the object corresponding to the co-view image in the world coordinate system by the algebraic method is described below with reference to fig. 7.
Fig. 7 is an exemplary schematic diagram of a process of solving coordinates of an object corresponding to a co-view image in a world coordinate system by an algebraic method according to an embodiment of the present application.
As shown in fig. 7, the camera 1 acquires images of a photographed object from different poses, such as pose 1, pose 2 and pose 3 in fig. 7, and the images captured by the camera 1 from pose 1 and pose 2 both include the feature point 1. The coordinate of the feature point 1 in the world coordinate system is $X_j$; correspondingly, the coordinate of the feature point 1 in the image captured at pose 1 is $x_{1j}$, and the coordinate of the feature point 1 in the image captured at pose 2 is $x_{2j}$.

When there are at least eight feature points similar to the feature point 1 (not all shown in fig. 7), the fundamental matrix $F$ can be found. After the fundamental matrix $F$ is obtained, the projection matrices $P_1$ and $P_2$ of the camera in the different poses can be obtained. Then, the coordinates of the object corresponding to the common-view image in the world coordinate system are obtained by the method shown in formula (3):

$\min_{P_i, X_j} \sum_{i,j} d\left(x_{ij},\, P_i X_j\right)$  (3)

where $d(\cdot,\cdot)$ computes the distance between $x_{ij}$ and $P_i X_j$, such as the Euclidean distance.

It is worth mentioning that the $X_j$ obtained by the algebraic method may have an ambiguity, that is, $X_j$ may be inaccurate. In this case, the ambiguity of $X_j$ can be eliminated by means of other prior information, such as parallel lines and vertical lines in the world coordinate system.
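A possible implementation of this algebraic pipeline with OpenCV is sketched below. It assumes the two views come from the same camera with an estimated intrinsic matrix K and recovers the relative pose through the essential matrix, one of several equivalent routes; like the algebraic solution described above, the reconstructed points are only determined up to scale, an ambiguity that extra prior information can remove. None of the names or calls below are prescribed by the patent.

```python
import numpy as np
import cv2

def sfm_two_view(pts1, pts2, K):
    """Estimate relative pose and world coordinates of matched common-view points.

    pts1, pts2: (N, 2) arrays of matched pixel coordinates, N >= 8.
    K: assumed or estimated 3x3 intrinsic matrix.
    """
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)  # eight-point algorithm
    E = K.T @ F @ K                                           # essential from fundamental
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)            # relative pose, up to scale
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])         # first pose as world origin
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)       # linear (DLT) triangulation
    X = (X_h[:3] / X_h[3]).T                                  # points in the world frame
    return R, t, X
```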
Optionally, in some embodiments of the present application, after determining the coordinates of the co-view image in the world coordinate system based on the SFM, the internal parameters of the first camera and the second camera may also be determined.
For another example, a model for determining the coordinates of the object corresponding to the common-view image in the world coordinate system may be trained based on deep learning, neural networks or other methods. The coordinates of the object corresponding to the common-view image in the world coordinate system can be determined directly by using the model, and the camera external parameters of the first camera and the second camera at different poses can then be obtained.
It should be noted that, in the embodiment of the present application, determining the coordinates of the object corresponding to the co-view image in the world coordinate system may be implemented by various methods, and is not limited herein.
It should be noted that, if the electronic device executes step S304, the images acquired during the rotation of the electronic device may be the images processed by SFM. If the electronic device does not execute step S304, the electronic device may execute the second motion, or the user may be prompted to perform the second motion on the electronic device, so that the electronic device can acquire the images required by SFM. Alternatively, if the electronic device does not execute step S304, the coordinates of the object corresponding to the common-view image in the world coordinate system may be determined by another method.
If the electronic device does not execute step S304, the electronic device may execute the second motion, and in the process of executing the second motion, the image group acquired by the first camera is the first image group, and the image group acquired by the second camera is the second image group.
It should be noted that, the coordinate of the object corresponding to the common view image in the world coordinate system may also be determined by performing three-dimensional reconstruction based on the image acquired by the second camera.
It should be noted that, when the first camera and the second camera are respectively located on the electronic device 1 and the electronic device 2, the world coordinate systems of the electronic device 1 and the electronic device 2 are the same.
S306: and determining initial values of the camera external parameters of the first camera and the second camera based on the mapping relation of the common-view images in the camera coordinate systems of different cameras.
In step S303 or step S304, while the electronic device performs the first motion and/or the second motion, the relative pose of the first camera and the second camera does not change, and the camera external parameters of the first camera and the second camera at the different poses can be determined. Then, the mapping relationship of the common-view image in the camera coordinate systems of the different cameras can be established as follows.
For example, for a feature point P in the common-view image, its coordinates in the camera coordinate system of the camera 1 at pose i are P1,i, and its coordinates in the camera coordinate system of the camera 2 at pose i are P2,i. Then, the mapping relationship between the world coordinate system and the camera coordinate systems is as shown in formulas (4) and (5):

P1,i = T1,i · P (4),
P2,i = T2,i · P (5).
Wherein T1,i is the camera external parameter of the camera 1 at pose i, and T2,i is the camera external parameter of the camera 2 at pose i.

By combining a plurality of instances of formulas (4) and (5), formula (6) can be obtained:

Bjk · X = X · Ajk (6).

Wherein Ajk may be the relative translation and rotation matrix of the camera 1 between pose j and pose k, Bjk may be the relative translation and rotation matrix of the camera 2 between the corresponding poses, and X is the inter-camera external parameter of the first camera and the second camera.
The relative translation and rotation matrix between the different poses of the first camera and the second camera is described below in an exemplary manner with reference to fig. 8.
Fig. 8 is an exemplary schematic diagram of rotation of an electronic device according to an embodiment of the present application.
For example, as shown in fig. 8, the relative pose of the camera 1 and the camera 2 does not change while the electronic device executes the first motion and/or the second motion. The pose of the camera 1 changes through pose 1, pose 2 and pose 3, and the pose of the camera 2 correspondingly changes through pose 1, pose 2 and pose 3. When the camera 1 is at pose 1, the camera 2 is at pose 1; when the camera 1 is at pose 2, the camera 2 is at pose 2; when the camera 1 is at pose 3, the camera 2 is at pose 3.
Denote the camera external parameter of the camera 1 at pose i as T1,i, and the camera external parameter of the camera 2 at pose i as T2,i. Then, in the above formula (6), Ajk and Bjk can be as follows:

A12 = T1,2 · (T1,1)^-1, A13 = T1,3 · (T1,1)^-1, B12 = T2,2 · (T2,1)^-1, B13 = T2,3 · (T2,1)^-1.

Wherein T1,1, T1,2, T1,3, T2,1, T2,2 and T2,3 are all parameters determined in step S304.
Wherein A12 is the relative translation and rotation matrix of the camera 1 between pose 1 and pose 2; A13 is the relative translation and rotation matrix of the camera 1 between pose 1 and pose 3; B12 is the relative translation and rotation matrix of the camera 2 between pose 1 and pose 2; B13 is the relative translation and rotation matrix of the camera 2 between pose 1 and pose 3.
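The relation expressed by formula (6) can be checked numerically with a short NumPy sketch, using synthetic poses and the notation introduced above (T1,i and T2,i for the camera external parameters, X for the inter-camera external parameter with T2,i = X · T1,i); all values below are made-up test data, not from the patent.

```python
import numpy as np


def make_pose(rx, ry, rz, t):
    # Build a 4x4 pose matrix from Euler angles and a translation (test helper).
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T


def relative_motion(T_j, T_k):
    # Relative translation-and-rotation matrix between pose j and pose k.
    return T_k @ np.linalg.inv(T_j)

X = make_pose(0.02, -0.01, 0.03, [0.05, 0.0, 0.0])   # assumed inter-camera extrinsic
T1 = [make_pose(0.1 * i, 0.05 * i, 0.0, [0.2 * i, 0.0, 0.0]) for i in (1, 2, 3)]  # camera 1 at poses 1..3
T2 = [X @ T for T in T1]                              # camera 2 at the same poses

A12, A13 = relative_motion(T1[0], T1[1]), relative_motion(T1[0], T1[2])
B12, B13 = relative_motion(T2[0], T2[1]), relative_motion(T2[0], T2[2])

# Formula (6): Bjk @ X == X @ Ajk for every pose pair.
assert np.allclose(B12 @ X, X @ A12) and np.allclose(B13 @ X, X @ A13)
```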
After A12, A13, B12 and B13 are calculated, X can be calculated from formula (6). X is the initial value of the inter-camera external parameter of the first camera and the second camera in the embodiment of the present application.
It should be noted that the camera 1 and the camera 2 are not limited to the three pairs of poses shown in fig. 8; there may be more pairs of poses. After the initial values of the inter-camera external parameter are obtained for different pose pairs, averaging or weighted averaging may be performed to reduce the error of the initial value of the inter-camera external parameter.
Optionally, in some embodiments of the present application, when averaging or weighted averaging the initial values of the inter-camera external parameter obtained for different pose pairs, considering that the inter-camera external parameter consists of a translation matrix and a rotation matrix, the inter-camera external parameter may be decomposed into the translation matrix and the rotation matrix. The translation matrix is averaged or weighted-averaged directly; the rotation matrix is converted into Euler angles, the Euler angles are averaged or weighted-averaged, and the result is converted back into a rotation matrix. The initial value of the inter-camera external parameter after averaging or weighted averaging is thus obtained.
The weights in the weighted averaging may be the confidences of the different poses, which may be obtained in step S305; the confidence of a pose indicates the accuracy of the camera external parameter at that pose.
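A sketch of this averaging step could look as follows, assuming a list of candidate initial inter-camera external parameters (4x4 matrices) obtained from different pose pairs and, optionally, per-pose confidences used as weights; SciPy is used only to convert between rotation matrices and Euler angles, and all names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def average_extrinsic(X_list, weights=None):
    X_list = [np.asarray(X, dtype=float) for X in X_list]
    if weights is None:
        weights = np.ones(len(X_list))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()

    # Translation part: direct (weighted) average.
    t_avg = sum(wi * X[:3, 3] for wi, X in zip(w, X_list))

    # Rotation part: convert to Euler angles, average, convert back.
    # (This assumes the candidate rotations are close to each other,
    # which is the expected case for initial values of the same extrinsic.)
    eulers = np.array([Rotation.from_matrix(X[:3, :3]).as_euler("xyz") for X in X_list])
    euler_avg = (w[:, None] * eulers).sum(axis=0)

    X_avg = np.eye(4)
    X_avg[:3, :3] = Rotation.from_euler("xyz", euler_avg).as_matrix()
    X_avg[:3, 3] = t_avg
    return X_avg
```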
S307: and establishing a mapping relation from a pixel coordinate system or an image coordinate system of the first camera to a pixel coordinate system or an image coordinate system of the second camera by taking the initial value of the external reference between the cameras as an initial value, and determining a final value of the external reference between the cameras based on the mapping relation.
After determining the initial value of the inter-camera external reference, a mapping relationship from the pixel coordinate system or the image coordinate system of the first camera to the pixel coordinate system or the image coordinate system of the second camera may be established based on the initial value of the inter-camera external reference.
Optionally, in some embodiments of the present application, the electronic device may determine the internal reference of the first camera and the internal reference of the second camera in step S306. In this case, since the internal reference of the first camera, the internal reference of the second camera, and the inter-camera external reference of the first camera and the second camera are known, a mapping relationship W can be established. W represents the relationship between the image in the image coordinate system or pixel coordinate system of the camera 1 and the image in the image coordinate system or pixel coordinate system of the camera 2, as shown in the following formula (7):

u2 = W(u1) (7).

Wherein u1 is the common-view image coordinates in the image coordinate system or pixel coordinate system of the camera 1, and u2 is the common-view image coordinates in the image coordinate system or pixel coordinate system of the camera 2.
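One concrete realization of the mapping W in formula (7) is sketched below, assuming the internal parameter matrices K1 and K2, the inter-camera external parameter split into rotation R and translation t (from camera 1 to camera 2), and the depth d of the common-view point in camera 1 are known; a general pixel-to-pixel mapping needs this depth (or the 3D point reconstructed by SFM), and all names are illustrative.

```python
import numpy as np


def map_u1_to_u2(u1, d, K1, K2, R, t):
    # Back-project pixel u1 = (x, y) of camera 1 to a 3D point in camera 1
    # coordinates using its depth d.
    u1_h = np.array([u1[0], u1[1], 1.0])
    P_c1 = d * (np.linalg.inv(K1) @ u1_h)

    # Transform the point into camera 2 coordinates with the inter-camera extrinsic.
    P_c2 = R @ P_c1 + t

    # Project with camera 2's internal parameters to get u2 in its pixel coordinate system.
    u2_h = K2 @ P_c2
    return u2_h[:2] / u2_h[2]
```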
Optionally, in some embodiments of the present application, the electronic device may not determine the internal reference of the first camera and the internal reference of the second camera in step S306. In this case, since the internal reference of the first camera and the internal reference of the second camera are fixed, a mapping relationship W·Q can be established, where Q is a constant matrix determined by the internal parameters of the first camera and the internal parameters of the second camera. W·Q can also express the relationship between the image in the image coordinate system/pixel coordinate system of the camera 1 and the image in the image coordinate system/pixel coordinate system of the camera 2, as shown in the following formula (8):

u2 = (W·Q)(u1) (8).
Further, the final value of the inter-camera external parameter is obtained by the following formula (9):

T* = argmin_T Σ d(u2, W(u1; T)) (9).

Wherein T is the inter-camera external parameter to be optimized, the summation is over the common-view coordinate pairs (u1, u2), d(·,·) is the mapping error in the pixel coordinate system or image coordinate system, and the optimization starts from the initial value of the inter-camera external parameter determined in step S306.
Optionally, in some embodiments of the present application, similarity calculation may be performed on the images, so as to further reduce the number of coordinate pairs u1 and u2 participating in formula (9) and thus reduce the amount of calculation.

For example, for any two images in the first image group, such as image 1 and image 2: if the similarity between image 1 and image 2 is greater than the threshold, only one of image 1 and image 2 needs to be selected to participate in the calculation of formula (9) in step S307; if the similarity between image 1 and image 2 is equal to or less than the threshold, both image 1 and image 2 may be selected to participate in the calculation of formula (9) in step S307.

It is understood that similar images do not contribute new information and do not improve the calculation accuracy of formula (9) in step S307, so the amount of calculation can be reduced by this screening.
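A possible form of this similarity screening is sketched below, assuming grayscale images as NumPy arrays; histogram correlation is used here purely as an example of a similarity measure, and the threshold value is illustrative.

```python
import cv2


def filter_similar_images(images, threshold=0.98):
    # Normalized 64-bin grayscale histogram used as a cheap image signature.
    def hist(img):
        h = cv2.calcHist([img], [0], None, [64], [0, 256])
        return cv2.normalize(h, h).flatten()

    kept, kept_hists = [], []
    for img in images:
        h = hist(img)
        # Keep the image only if it is not too similar to any already-kept image.
        if all(cv2.compareHist(kh, h, cv2.HISTCMP_CORREL) <= threshold for kh in kept_hists):
            kept.append(img)
            kept_hists.append(h)
    return kept
```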
Optionally, in some embodiments of the present application, the confidence detection may be performed on an image pair including a common view image in the first set of images and the second set of images. In the case that the confidence of the image pair is greater than the threshold, the image pair will participate in the calculation of formula (9) in step S307.
Fig. 9 is an exemplary diagram of image pair confidence detection provided by the embodiments of the present application.
As shown in fig. 9, image 1 and image 2 including an object corresponding to the common view image are an image pair 1. Where image 1 is derived from a first set of images and image 2 is derived from a second set of images.
As shown in fig. 9, image pairs 2, …, other image pairs, may also be included between the first set of images and the second set of images.
Determining whether the confidence of the image pair 1 is greater than a threshold, if the confidence of the image pair 1 is greater than the threshold, the image pair 1 will participate in the calculation of the formula (9) in the step S307; if the confidence of the image pair 1 is less than the threshold, the image pair 1 does not participate in the calculation of formula (9) in step S307.
The confidence of the image pair 1 is used to measure the rotation invariance, scaling invariance and brightness invariance of the common-view image in the image pair 1. The higher the confidence of the image pair 1, the better the rotation invariance, scaling invariance and brightness invariance of the features of the common-view image, which is beneficial to improving the accuracy of the calculation result of formula (9) in step S307.
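One plausible way to score such a confidence is sketched below, assuming grayscale images img1 (from the first image group) and img2 (from the second image group); SIFT features are chosen because their descriptors are designed to be robust to rotation, scale and brightness changes, but the patent does not mandate this particular detector, and the ratio value is illustrative.

```python
import cv2


def image_pair_confidence(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test: the fraction of distinctive matches serves as the confidence.
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good) / max(len(matches), 1)
```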
It is worth noting that formula (9) is solved as an optimization problem, and the choice of the initial value directly affects whether the result can converge to the optimal value. In the embodiment of the present application, after the electronic device performs step S306, the initial value obtained by the solution is located near the optimal value, and after step S307 is performed, the solution can converge to the optimal value, as shown in fig. 10.
Fig. 10 is an exemplary schematic diagram of a relationship between an initial value of an external parameter and a mapping error between cameras according to an embodiment of the present application.
As shown in fig. 10, when the initial value of the inter-camera external parameter is smaller than initial value 1, the calculation result of formula (9) in step S307 finally converges to suboptimal value 1; when the initial value of the inter-camera external parameter is greater than initial value 1 and less than initial value 2, the calculation result finally converges to suboptimal value 2; when the initial value of the inter-camera external parameter is greater than initial value 2 and less than initial value 3, the calculation result finally converges to suboptimal value 3; when the initial value of the inter-camera external parameter is greater than initial value 3 and less than initial value 4, the calculation result finally converges to the optimal value.

After the electronic device executes step S306, the initial value of the inter-camera external parameter obtained by solving is greater than initial value 3 and less than initial value 4, or falls with a high probability within the range between initial value 3 and initial value 4, so that the optimization can converge to the optimal value and the final value of the inter-camera external parameter is obtained.
Optionally, in some embodiments of the present application, the camera internal parameters of the first camera and the camera internal parameters of the second camera may also be solved by formula (9). Optionally, in some embodiments of the present application, the quantities to be solved in formula (9) may be set to the inter-camera external parameter, the camera internal parameters of the first camera and the camera internal parameters of the second camera, so that these parameters are obtained directly.
Finally, a hardware architecture and a software architecture of the electronic device provided by the embodiment of the present application are introduced.
Fig. 11 is an exemplary schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present application.
The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, an Artificial Intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device, and the specific type of the electronic device is not particularly limited by the embodiments of the present application.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not limit the electronic device. In other embodiments of the present application, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the interface connection relationship between the modules according to the embodiment of the present invention is only an exemplary illustration, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device. The modem processor may include a modulator and a demodulator.
The wireless communication module 160 may provide solutions for wireless communication applied to electronic devices, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
The electronic device implements the display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The display screen 194 is used to display images, video, and the like.
The ISP is used to process the data fed back by the camera 193. The camera 193 is used to capture still images or video. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1. The cameras 193 may serve as the first camera and the second camera in the embodiments of the present application.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. Video codecs are used to compress or decompress digital video. The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent cognition of electronic equipment, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The internal memory 121 may include one or more Random Access Memories (RAMs) and one or more non-volatile memories (NVMs).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM), such as fifth generation DDR SDRAM generally referred to as DDR5 SDRAM, and the like; the nonvolatile memory may include a magnetic disk storage device, a flash memory (flash memory). The FLASH memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc. according to the operation principle, may include single-level cells (SLC), multi-level cells (MLC), three-level cells (TLC), four-level cells (QLC), etc. according to the level order of the memory cells, and may include universal FLASH memory (UFS), embedded multimedia memory cards (eMMC), etc. according to the storage specification. The random access memory may be read and written directly by the processor 110, may be used to store executable programs (e.g., machine instructions) of an operating system or other programs in operation, and may also be used to store data of users and applications, etc. The nonvolatile memory may also store executable programs, data of users and application programs, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
The external memory interface 120 may be used to connect an external nonvolatile memory to extend the storage capability of the electronic device. The electronic device may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. The headphone interface 170D is used to connect a wired headphone.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. The gyro sensor 180B may be used to determine the motion pose of the electronic device. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device in various directions (typically three axes). A distance sensor 180F for measuring a distance. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The ambient light sensor 180L is used to sense the ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is used to detect temperature. The touch sensor 180K is also referred to as a "touch device". The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power-on key, a volume key, and the like.
The motor 191 may generate a vibration cue. Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 195 is used to connect a SIM card.
Fig. 12A is an exemplary schematic diagram of a software architecture of an electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 12A, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 12A, the application framework layers may include a windows manager, a content provider, a view system, a telephony manager, an explorer, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scrollbar text in a status bar at the top of the system, such as a notification of a running application in the background, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is a function which needs to be called by java language, and the other part is a core library of android.
The application layer and the application framework layer run in a virtual machine. And executing java files of the application program layer and the application program framework layer into a binary file by the virtual machine. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphics processing library is used for realizing three-dimensional graphics drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing. The three-dimensional graphics processing library may also perform simultaneous localization and mapping based on images acquired by the camera.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes an exemplary calibration method for an external reference between cameras according to an embodiment of the present application with reference to fig. 12B.
FIG. 12B is an exemplary diagram of data flow for implementing the calibration method for camera-to-camera external references under the software architecture shown in FIG. 12A.
As shown in fig. 12B, the camera driver acquires an image captured by the camera 1 and transfers the image of the camera 1 to the three-dimensional graphics processing library, in which methods or functions for obtaining positioning and mapping results based on the image exist. The three-dimensional graphics processing library transmits the positioning and mapping results to an upper layer, or performs further processing based on the positioning and mapping results.
There may also be a method or function in the three-dimensional graphics processing library that determines whether the image satisfies the first condition. After the three-dimensional graphics processing library determines that the image captured by the camera 1 does not satisfy the first condition, an accurate positioning and mapping result cannot be determined.
Then, the camera driver can start the camera 2 to collect images, and transfer the images of the camera 1 and the camera 2 to the three-dimensional graphics processing library. The three-dimensional graphics processing library performs inter-camera external parameter calibration of the camera 1 and the camera 2, namely performs steps S303 to S307, and determines the final value of the inter-camera external parameter.
After the three-dimensional graphics processing library obtains the final value of the inter-camera external parameter, the images of the camera 1 and the images of the camera 2 can be processed jointly to determine accurate positioning and mapping results.
In step S304, the three-dimensional graphics processing library needs to transmit, to other modules, an instruction for the electronic device to execute the first motion, and/or an instruction for the electronic device to remind the user to perform the first motion on the electronic device.
As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to determination …" or "in response to detection …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)".
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application occur, in whole or in part, when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.

Claims (8)

1. A calibration method of external parameters between cameras is applied to electronic equipment comprising a first camera and a second camera, and comprises the following steps:
turning on the second camera if the image acquired by the first camera does not satisfy a first condition, the first condition including: the texture degree of the image is lower than a texture threshold, the number of the dynamic objects in the image is more than a dynamic object number threshold, and the area of the dynamic objects in the image is more than a dynamic object area threshold;
executing a first motion, or displaying a first notification, wherein the first notification is used for reminding a user of executing the first motion on the electronic equipment;
in the first movement process, acquiring a first image group and a second image group which comprise common-view images, wherein the first image group is acquired by a first camera, the second image group is acquired by a second camera, the relative poses of the first camera and the second camera are unchanged, the first image group comprises at least two images, and the second image group comprises at least two images;
determining the coordinates of the common-view image in a world coordinate system according to the first image group or the second image group; determining a first external parameter group according to the first image group, and determining a second external parameter group according to the second image group, wherein the first external parameter group comprises the camera external parameters of the first camera when the first camera acquires the images in the first image group, the first external parameter group is used for determining the relative spatial position change of the first camera when the first image group is shot, the second external parameter group comprises the camera external parameters of the second camera when the second camera acquires the images in the second image group, and the second external parameter group is used for determining the relative spatial position change of the second camera when the second image group is shot;
determining a first parameter according to the coordinate of the common-view image in a world coordinate system, the first external parameter group and the second external parameter group, wherein the first parameter is an initial value of the external parameter between the cameras of the first camera and the second camera;
and establishing a mapping relation of a common image in the first image group and the second image group, and determining a second parameter according to the mapping relation and the first parameter, wherein the second parameter is a final value of external parameters between cameras of the first camera and the second camera.
2. The method of claim 1, wherein prior to the performing the first motion or displaying the first notification, the method further comprises:
determining that the first camera and the second camera do not have a common view area, the first motion being related to a relative pose of the first camera and the second camera, the first motion being used to cause a plurality of image pairs having the common view image to exist in the first image group and the second image group;
the determining the coordinates of the common-view image in the world coordinate system according to the first image group or the second image group specifically includes: determining the plurality of image pairs with the common-view image in the first image group and the second image group, and determining the coordinate of the common-view image in a world coordinate system according to the plurality of image pairs;
the establishing of the mapping relationship between the common images in the first image group and the second image group specifically includes: and establishing a mapping relation of the common-view images in the plurality of image pairs.
3. The method of claim 1,
determining that the first camera and the second camera have a common view area;
the determining the coordinates of the common-view image in the world coordinate system according to the first image group or the second image group specifically includes: determining a plurality of image pairs with the common-view image in the first image group and the second image group, and determining the coordinate of the common-view image in a world coordinate system according to the plurality of image pairs;
the establishing of the mapping relationship between the common images in the first image group and the second image group specifically includes: and establishing a mapping relation of the common-view images in the plurality of image pairs.
4. The method of claim 2 or 3, wherein the plurality of image pairs comprises a first image pair comprising a first image and a second image, the first image belonging to a first image group and the second image belonging to a second image group, wherein a confidence of the first image pair is greater than a confidence threshold, the confidence describing rotational invariance of co-view images in the image pair.
5. A method for calibrating camera parameters is applied to a system comprising first electronic equipment and second electronic equipment, wherein the first electronic equipment is provided with a first camera, and the second electronic equipment is provided with a second camera, and the method comprises the following steps:
the first electronic device determining that an image acquired by the first camera does not satisfy a first condition; the first condition includes: the texture degree of the image is lower than a texture threshold, the number of the dynamic objects in the image is more than a dynamic object number threshold, and the area of the dynamic objects in the image is more than a dynamic object area threshold;
the first electronic equipment sends a request to the second electronic equipment to acquire an image of a second camera;
the first electronic device and the second electronic device execute a first motion, or the first electronic device and the second electronic device display a first notification, wherein the first notification is used for reminding a user of executing the first motion on the first electronic device and the second electronic device;
in the first movement process, the first electronic device acquires a first image group through the first camera, the second electronic device acquires a second image group through the second camera, the relative poses of the first camera and the second camera are unchanged, the first image group comprises at least two images, and the second image group comprises at least two images;
the first electronic device determines coordinates of a common view image in a world coordinate system, a first external parameter group and a second external parameter group according to the first image group or the second image group, wherein the first external parameter group comprises camera external parameters of the first camera when the first camera acquires images in the first image group, the first external parameter group is used for determining relative space position change of the first camera when the first image group is shot, the second external parameter group comprises camera external parameters of the second camera when the second camera acquires images in the second image group, and the second external parameter group is used for determining relative space position change of the second camera when the second image group is shot;
the first electronic equipment determines the coordinates of the common-view image in a world coordinate system according to the first image group or the second image group; determining a first external reference group according to the first image group, and determining a second external reference group according to the second image group, wherein the first external reference group comprises external reference of the first camera when acquiring images in the first image group, the second external reference group comprises external reference of the second camera when acquiring images in the second image group, and the first electronic device and the second electronic device have the same world coordinate system;
the first electronic equipment determines a first parameter according to the coordinate of the common-view image in a world coordinate system, the first external parameter group and the second external parameter group, wherein the first parameter is an initial value of the external parameter between the cameras of the first camera and the second camera;
and the first electronic equipment establishes a mapping relation of a common view image in the first image group and the second image group, and determines a second parameter according to the mapping relation and the first parameter, wherein the second parameter is a final value of external parameters between cameras of the first camera and the second camera.
6. An electronic device, characterized in that the electronic device comprises: one or more processors and memory; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any of claims 1-4.
7. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the method of any of claims 1-4.
8. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-4.
CN202210005580.8A 2022-01-05 2022-01-05 Method for calibrating external parameters between cameras and electronic equipment Active CN114022570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210005580.8A CN114022570B (en) 2022-01-05 2022-01-05 Method for calibrating external parameters between cameras and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210005580.8A CN114022570B (en) 2022-01-05 2022-01-05 Method for calibrating external parameters between cameras and electronic equipment

Publications (2)

Publication Number Publication Date
CN114022570A CN114022570A (en) 2022-02-08
CN114022570B true CN114022570B (en) 2022-06-17

Family

ID=80069489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210005580.8A Active CN114022570B (en) 2022-01-05 2022-01-05 Method for calibrating external parameters between cameras and electronic equipment

Country Status (1)

Country Link
CN (1) CN114022570B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091619A (en) * 2022-12-27 2023-05-09 北京纳通医用机器人科技有限公司 Calibration method, device, equipment and medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3629292A1 (en) * 2018-09-27 2020-04-01 Continental Automotive GmbH Reference point selection for extrinsic parameter calibration
CN112767496B (en) * 2021-01-22 2023-04-07 阿里巴巴集团控股有限公司 Calibration method, device and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971378A (en) * 2014-05-29 2014-08-06 福州大学 Three-dimensional reconstruction method of panoramic image in mixed vision system
CN109523597A (en) * 2017-09-18 2019-03-26 百度在线网络技术(北京)有限公司 The scaling method and device of Camera extrinsic
CN110473262A (en) * 2019-08-22 2019-11-19 北京双髻鲨科技有限公司 Outer ginseng scaling method, device, storage medium and the electronic equipment of more mesh cameras
CN112689850A (en) * 2020-03-19 2021-04-20 深圳市大疆创新科技有限公司 Image processing method, image processing apparatus, image forming apparatus, removable carrier, and storage medium
WO2021195939A1 (en) * 2020-03-31 2021-10-07 深圳市大疆创新科技有限公司 Calibrating method for external parameters of binocular photographing device, movable platform and system
CN113450254A (en) * 2021-05-20 2021-09-28 北京城市网邻信息技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN114022570A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN110750992A (en) Named entity recognition method, device, electronic equipment and medium
CN110059686B (en) Character recognition method, device, equipment and readable storage medium
US20230345196A1 (en) Augmented reality interaction method and electronic device
CN114371985A (en) Automated testing method, electronic device, and storage medium
CN117078509B (en) Model training method, photo generation method and related equipment
CN116048933B (en) Fluency detection method
US10491884B2 (en) Image processing method and electronic device supporting the same
US20240193945A1 (en) Method for determining recommended scenario and electronic device
US20240187725A1 (en) Photographing method and electronic device
CN114022570B (en) Method for calibrating external parameters between cameras and electronic equipment
CN114140365A (en) Event frame-based feature point matching method and electronic equipment
WO2022194190A1 (en) Method and apparatus for adjusting numerical range of recognition parameter of touch gesture
WO2022068522A1 (en) Target tracking method and electronic device
WO2021218501A1 (en) Method and device for icon style processing
WO2022095640A1 (en) Method for reconstructing tree-shaped tissue in image, and device and storage medium
CN114299563A (en) Method and device for predicting key point coordinates of face image
WO2023216957A1 (en) Target positioning method and system, and electronic device
CN115145436A (en) Icon processing method and electronic equipment
US20230401897A1 (en) Method for preventing hand gesture misrecognition and electronic device
CN112416984B (en) Data processing method and device
WO2023124948A1 (en) Three-dimensional map creation method and electronic device
CN113835948A (en) Temperature detection method, temperature detection device and electronic equipment
WO2022194180A1 (en) Method for recognizing touch-to-read text, and electronic device
CN114170366B (en) Three-dimensional reconstruction method based on dotted line feature fusion and electronic equipment
CN117131213B (en) Image processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230909

Address after: 201306 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai

Patentee after: Shanghai Glory Smart Technology Development Co.,Ltd.

Address before: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Patentee before: Honor Device Co.,Ltd.