CN115631245A - Correction method, terminal device and storage medium


Info

Publication number
CN115631245A
Authority
CN
China
Prior art keywords
image
determining
light field camera
amount
Prior art date
Legal status
Pending
Application number
CN202211268088.6A
Other languages
Chinese (zh)
Inventor
周洪宇
于成帅
绕中天
李浩天
Current Assignee
Yimu Shanghai Technology Co ltd
Original Assignee
Yimu Shanghai Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Yimu Shanghai Technology Co ltd
Priority to CN202211268088.6A
Publication of CN115631245A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G06T2207/30244 Camera pose

Abstract

The embodiment of the application discloses a correction method, a terminal device and a storage medium, wherein the correction method comprises the following steps: searching for a feature point on the first image, and determining a first position of the feature point on the first image and a first central position of the first image; determining a second position of the feature point on the second image and a second central position of the second image; determining rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device according to the first position and the second position; determining a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first central position and the second central position; respectively taking area images in four directions from the second image; respectively determining four sharpness values corresponding to the area images in the four directions; determining a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values; and correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount.

Description

Correction method, terminal device and storage medium
Technical Field
The present application relates to the field of camera pose correction technologies, and in particular, to a correction method, a terminal device, and a storage medium.
Background
With the vigorous development of the metaverse industry, device manufacturers' demands for testing extended reality device products are increasingly urgent. As the imaging technology unique in the field of three-dimensional detection for rapidly achieving high-precision three-dimensional reconstruction from a single-frame shot, the light field camera plays an irreplaceable role in the metaverse industry. When a light field camera is used to measure the distance of the virtual imaging surface of an extended reality device, detect imaging defects, and so on, the light field camera generally needs to be aimed at the lens of the extended reality device for shooting. However, due to the imaging characteristics of the extended reality device, even a slight change in the pose of the light field camera may produce greatly different shooting results. To ensure the accuracy and consistency of the detection results when the light field camera inspects the extended reality device, the pose of the light field camera must first be corrected before detection.
In the prior art, when the pose of the light field camera is corrected, it is usually adjusted by a human observing and judging the image shot by the light field camera. However, subjective human judgment deviates from the real shooting result and lacks consistency, so large offsets are easily introduced, and correcting the camera through subjective human judgment is inefficient.
Disclosure of Invention
In view of the above, embodiments of the present application are intended to provide a correction method, a terminal device, and a storage medium, which can improve the accuracy and the efficiency of light field camera correction.
In order to achieve the above objective, the technical solutions of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a correction method, where the method includes:
searching for a feature point on the first image; determining a first position of the feature point on the first image and a first central position of the first image; the first image is an image displayed on a virtual imaging surface of the augmented reality device;
determining a second position of the feature point on the second image and a second central position of the second image; the second image is an image obtained by shooting the first image with the light field camera;
determining rotation amounts of the light field camera along an x-axis, a y-axis and a z-axis of the augmented reality device according to the first position and the second position;
determining a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first central position and the second central position;
respectively taking area images in four directions from the second image; respectively determining four sharpness values corresponding to the area images in the four directions;
determining a second translation amount of the light field camera along the x axis and the y axis of the augmented reality device according to the four sharpness values;
and correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount.
In a second aspect, an embodiment of the present application provides a terminal device, where the terminal device includes:
a searching unit, configured to search for a feature point on the first image;
a determining unit, configured to determine a first position of the feature point on the first image and a first central position of the first image, the first image being an image displayed on a virtual imaging surface of the augmented reality device; determine a second position of the feature point on the second image and a second central position of the second image, the second image being an image obtained by shooting the first image with the light field camera; determine rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device according to the first position and the second position; determine a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first central position and the second central position; respectively take area images in four directions from the second image; respectively determine four sharpness values corresponding to the area images in the four directions; and determine a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values;
and the correction unit is used for correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes: a processor, a memory, and a communication bus; the processor implements the above-described correction method when executing the executable program stored in the memory.
In a fourth aspect, embodiments of the present application provide a storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the above-mentioned correction method.
The embodiment of the application provides a correction method, a terminal device and a storage medium, wherein the method includes the following steps: searching for a feature point on the first image; determining a first position of the feature point on the first image and a first central position of the first image, the first image being an image displayed on a virtual imaging surface of the augmented reality device; determining a second position of the feature point on the second image and a second central position of the second image, the second image being an image obtained by shooting the first image with the light field camera; determining rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device according to the first position and the second position; determining a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first central position and the second central position; respectively taking area images in four directions from the second image; respectively determining four sharpness values corresponding to the area images in the four directions; determining a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values; and correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount. With this scheme, in the process of correcting the pose of the light field camera, the light field camera shoots the first image on the augmented reality device to obtain the second image; from the relationship between the first position and the second position determined by the corresponding feature points extracted from the first image and the second image, together with the determined central positions, the rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device and its translation amount along the z-axis can be rapidly calculated; from the sharpness values of the area images in the four directions of the second image, the translation amounts of the light field camera along the x-axis and the y-axis of the augmented reality device can be rapidly determined; and the pose of the light field camera can then be corrected automatically using the determined rotation amounts and translation amounts corresponding to the six degrees of freedom relative to the augmented reality device. In this way, the inaccuracy of manual visual judgment is avoided, and the correction accuracy and efficiency of the light field camera are greatly improved.
Drawings
Fig. 1 is a first flowchart of a calibration method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an exemplary first image provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an exemplary second image provided by an embodiment of the present application;
fig. 4 is a schematic diagram of exemplary feature points extracted from a second image according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating an exemplary rotation of a light field camera along a y-axis of an augmented reality device, according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a translation of an exemplary light field camera along an x-axis of an augmented reality device, according to an embodiment of the present application;
fig. 7 is a graph illustrating a relationship between a Sobel sharpness difference value corresponding to a left region image and a right region image and an offset of a light field camera along an x-axis direction of an augmented reality device according to an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a second image captured by a light field camera when the light field camera is translated along an x-axis of an augmented reality device according to an embodiment of the present application;
fig. 9 is a second flowchart of a calibration method according to an embodiment of the present application;
fig. 10 is a first schematic structural diagram of a terminal device 1 according to an embodiment of the present application;
fig. 11 is a second schematic structural diagram of a terminal device 1 according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. It should also be noted that reference to the terms "first/second/third" in the embodiments of the present application is only used for distinguishing similar objects and does not denote any particular order or importance to the objects, and it should be understood that "first/second/third" may be interchanged with a particular order or sequence where permissible to enable the embodiments of the present application described herein to be practiced in an order other than that shown or described herein.
As shown in fig. 1, the correction method provided in the embodiment of the present application may include:
s101, searching a feature point on a first image; determining a first position of the feature point on the first image and a first central position of the first image; the first image is an image displayed on a virtual imaging surface of the augmented reality device.
In the prior art, when a light field camera is used to shoot the virtual imaging surface of an extended reality device in order to correct the pose of the light field camera, manual visual observation is inaccurate and the correction efficiency is low. To overcome these problems, the embodiment of the present application provides a correction method. Specifically, the pose of the light field camera is corrected by calculating the rotation amounts of the light field camera corresponding to the x-axis, y-axis, and z-axis of the extended reality device and the translation amounts of the light field camera corresponding to the x-axis, y-axis, and z-axis of the extended reality device.
It should be noted that the z-axis refers to an optical axis of imaging by the augmented reality device, the x-axis refers to an axis which is located in a horizontal direction of the augmented reality device and is perpendicular to the z-axis, and the y-axis refers to an axis which is located in a vertical direction of the augmented reality device and is perpendicular to the z-axis.
In the embodiment of the present application, the first image is an image displayed on a virtual imaging plane of the augmented reality device, and the displayed image is an image in which each region of the image has a relatively obvious feature, such as a checkerboard image shown in fig. 2.
It should be noted that the more obvious features in the checkerboard may be corner points of a black and white checkerboard and the dots 1, 2, and 3 shown in fig. 2, specifically, the feature may be selected according to the actual situation of the first image, and the feature is not specifically limited in this application.
In the embodiment of the present application, the feature point on the first image displayed on the augmented reality device is searched for, and the position of the found feature point on the first image is determined. Specifically, a feature point extraction algorithm may be used, for example, the scale-invariant feature transform (SIFT) algorithm, the fast feature point extraction algorithm ORB, or the feature point extraction algorithm AKAZE, or a custom matching template may be used.
The feature point searching method is not limited to those mentioned in the present application and may be selected according to the actual situation; it is not specifically limited in the present application. A minimal sketch of this step follows.
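The sketch below uses OpenCV's ORB detector as one concrete choice; the function name, the use of ORB rather than SIFT, AKAZE, or a template matcher, and the choice of the geometric image center as the first central position are illustrative assumptions, not the prescribed implementation.

```python
import cv2

def find_feature_points(img, max_points=100):
    """Search feature points on a grayscale image (a sketch).

    Returns the keypoints, their descriptors, their (x, y) positions,
    and the image center position.
    """
    orb = cv2.ORB_create(nfeatures=max_points)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    positions = [kp.pt for kp in keypoints]   # positions on the image
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)               # geometric image center
    return keypoints, descriptors, positions, center
```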
In this embodiment of the present application, after the feature point in the first image is extracted by using the feature point extraction algorithm and the first position corresponding to the feature point in the first image is determined, the center position point of the first image also needs to be determined to obtain the first central position corresponding to that center position point.
It should be noted that the first center position of the first image may be obtained from positions corresponding to the searched feature points, that is, the first center position may also be one of the searched feature points, and the first center position point may also be calculated from the first image, specifically, a manner of obtaining the first center position may be selected according to an actual situation, and is not specifically limited in this application.
It should be noted that the feature points found on the first image all have corresponding positions in the first image.
S102, determining a second position of the feature point on the second image and a second central position of the second image; the second image is an image obtained from the light field camera taking the first image.
In the embodiment of the present application, the second image is obtained by shooting the first image displayed on the virtual imaging plane of the augmented reality device with a light field camera, and the second image shown in fig. 3 is obtained by shooting the example first image with the light field camera.
The second image may specifically be an image obtained through further processing; here, the second image is the central view image decoded from the raw light field capture.
In the embodiment of the present application, as shown in fig. 4, the feature points in the second image may be the dots 4, 5, 6, and 7 in fig. 4, that is, the midpoint of the hypotenuse of the right triangle in the circle, and three right-angle vertexes may all be the feature points.
In the embodiment of the application, after a light field camera is used for shooting a first image to obtain a second image, feature points in the second image are extracted by using a feature point extraction algorithm, the feature points in the first image are matched with the feature points in the second image, feature points on the second image, which are correspondingly matched with the feature points on the first image, are determined, and a second position of the feature points on the second image, which are correspondingly matched with the feature points on the first image, is determined.
It should be noted that the feature points searched for in the first image correspond to the feature points in the second image one to one, and the first position corresponding to the feature point in the first image also corresponds to the second position corresponding to the feature point in the second image one to one.
It should be noted that the feature point extraction algorithm here may be the same as the feature point extraction algorithm mentioned in step S101, or may be a different one; specifically, the feature point extraction algorithm may be selected according to the actual situation, which is not specifically limited in this application.
In the embodiment of the present application, after determining the second position in the second image corresponding to the first position in the first image, it is also necessary to determine a second center position in the second image.
The second center position may be determined by calculating a second center point of the second image from the second image, and determining a position corresponding to the second center point.
It should be noted that the second central position may belong to one of the second positions.
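As one hedged way to realize the matching described above, the sketch below pairs the feature points of the two images with brute-force Hamming matching, which fits the ORB descriptors assumed in the earlier sketch; all names are illustrative.

```python
import cv2

def match_feature_points(desc1, kps1, desc2, kps2, max_matches=20):
    """Match first-image feature points to second-image feature points
    and return (first_position, second_position) pairs (a sketch)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
    return [(kps1[m.queryIdx].pt, kps2[m.trainIdx].pt)
            for m in matches[:max_matches]]
```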
And S103, determining the rotation amount of the light field camera along the x axis, the y axis and the z axis of the augmented reality device according to the first position and the second position.
In embodiments of the application, the rotation amounts include a first rotation amount of the light field camera along x and y axes of the augmented reality device and a second rotation amount of the light field camera along z axis of the augmented reality device.
In the embodiment of the application, when a first rotation amount of the light field camera along the x-axis and the y-axis of the augmented reality device and a second rotation amount of the light field camera along the z-axis of the augmented reality device are calculated, a distance of a connecting line between the first position and the second position and a first included angle between the connecting line and the z-axis of the light field camera may be determined; determining a first rotation amount according to the distance of the connecting line; and determining a second rotation amount according to the first included angle.
In the embodiment of the present application, taking the rotation of the light field camera along the y-axis of the augmented reality device as an example, the first rotation amount of the light field camera along the y-axis of the augmented reality device is calculated.
In the embodiment of the application, after the light field camera rotates along the y-axis of the augmented reality device, as shown in fig. 5, when the rotation angle θ is small, the actual offset O₁O₃ of the image center can be approximately expressed as the product of the camera working distance OO₁ and the tangent of the rotation angle θ, as shown in the following equations (1) to (3):

O₁O₃ ≈ O₁O₂ (1)

O₁O₂ = OO₁ × tanθ (2)

O₁O₃ ≈ OO₁ × tanθ (3)

where O₁O₂ can be expressed quantitatively, so the value of O₁O₂ may be used as an approximate representation of O₁O₃.
It should be noted that O₁ and O₃ refer to feature points, which may be the center points of the first image and the second image, i.e., the midpoint of the hypotenuse of the isosceles right triangle formed by the three positioning circles in fig. 4.
Note that the camera working distance OO₁ can be obtained from the scale calibration of the light field camera, obtained by measurement, or calculated; specifically, the manner of obtaining it can be selected according to actual conditions and is not specifically limited in the application.
In the embodiment of the present application, from equation (3), a solution for the rotation amount θ can be derived, as shown in equation (4):

θ ≈ arctan(O₁O₃ / OO₁) (4)
in the embodiment of the present application, when calculating the first rotation amount θ, the camera working distance of the light field camera may be obtained first, and the first rotation amount is then determined using the camera working distance and the distance of the connecting line.
In the embodiment of the application, the actual offset O₁O₃ of the feature point in the second image relative to the corresponding feature point of the first image, i.e., the distance of the connecting line, is calculated, the value of the camera working distance OO₁ is obtained, and both are substituted into equation (4), whereby the rotation amount θ of the light field camera rotating along the y-axis of the augmented reality device can be calculated.
In the embodiment of the present application, the value of the actual offset O₁O₃ of the image center can be calculated by dividing the pixel shift amount by the lens magnification.
The pixel shift amount is the distance between the coordinates of the image center points obtained from the second image and the first image, and the lens magnification is inherent to the light field camera system and can be determined in advance from the parameters of the light field camera.
It should be noted that the calculation method of the rotation amount θ of the light field camera along the x axis of the augmented reality device is the same as the calculation method of the rotation amount θ of the light field camera along the y axis of the augmented reality device in the present application, specifically, the calculation method of the rotation amount θ of the light field camera along the y axis of the augmented reality device may be referred to, and is not described herein again.
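Equation (4) reduces to a one-line computation. The numeric sketch below is an illustration only; the parameter names are assumptions, and the pixel shift and working distance must be in consistent physical units after the magnification division.

```python
import math

def rotation_amount(pixel_shift, lens_magnification, working_distance):
    """First rotation amount theta per equation (4): the pixel shift is
    converted to the actual offset O1O3 by dividing by the lens
    magnification, then related to the working distance OO1."""
    actual_offset = pixel_shift / lens_magnification   # O1O3
    return math.degrees(math.atan(actual_offset / working_distance))
```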
In this embodiment of the application, the rotation amount of the light field camera along the z-axis of the augmented reality device may be calculated as follows: after the feature points of the first image and the second image are determined, the included angle θ between the connecting line of corresponding feature points in the first image and the second image and the z-axis of the augmented reality device is determined, and the obtained included angle θ is taken as the second rotation amount of the light field camera along the z-axis of the augmented reality device.
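A sketch of one way this second rotation amount could be realized in practice: compare the orientation of the line joining two matched feature points in the first image with the orientation of the corresponding line in the second image. This in-plane formulation and all names are assumptions for illustration.

```python
import math

def roll_rotation_amount(p1a, p1b, p2a, p2b):
    """Second rotation amount (about the z-axis), estimated from the
    angle between corresponding feature-point connecting lines."""
    angle1 = math.atan2(p1b[1] - p1a[1], p1b[0] - p1a[0])
    angle2 = math.atan2(p2b[1] - p2a[1], p2b[0] - p2a[0])
    return math.degrees(angle2 - angle1)
```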
And S104, determining a first translation amount of the light field camera along the z axis of the augmented reality device according to the first position, the second position, the first center position and the second center position.
In an embodiment of the present application, determining a first amount of translation of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first center position, and the second center position may be determining a first distance between the first position and the first center position; determining a second distance between the second location and the second center location; determining a first scaling amount according to the first distance and the second distance; and determining a first translation amount corresponding to the first scaling amount according to the functional relation between the scaling amount and the translation amount.
In this embodiment, the first position is a position of a feature point on a first image, the second position is a position of a feature point on a second image, the first center position is a center point position of the first image, and the second center position is a center point position of the second image.
In this embodiment of the application, when calculating the first amount of translation of the light field camera along the z-axis of the augmented reality device, the first position and the second position may be multiple, that is, multiple pairs of corresponding feature points are taken from the first image and the second image, the first position and the second position corresponding to the multiple pairs of feature points are determined, after the first central position of the first image and the second central position of the second image are determined, the euclidean distance between each feature point in the first image and the first central position is calculated, and the euclidean distance between each feature point in the second image and the second central position is calculated.
In this embodiment of the present application, after the Euclidean distance between each feature point in the first image and the first central position and the Euclidean distance between each feature point in the second image and the second central position are respectively calculated, the Euclidean distance of each feature point in the first image to the first central position is divided by the Euclidean distance of the corresponding feature point in the second image to the second central position, and the results are averaged, so as to obtain the relative scaling amount between the first image and the second image.
When the division is performed, the Euclidean distances of corresponding feature points are divided. For example, suppose the first image has three feature points A, B, and C, whose corresponding feature points in the second image are D, E, and F; the three distances a, b, and c between the three feature points of the first image and the first central position and the three distances d, e, and f between the three feature points of the second image and the second central position are calculated; a is divided by d, b is divided by e, and c is divided by f to obtain three values, and the average of the three values is the relative scaling amount between the first image and the second image.
It should be noted that the manner of calculating the distance between the first image feature point and the first center position and the distance between the second image feature point and the second center position is not limited to the euclidean distance, and may be specifically selected according to the actual situation, and is not specifically limited in this application.
In the embodiment of the present application, according to the imaging model of the light field camera, when the light field camera shoots objects at different distances, the size of the objects in the image changes. Therefore, the above relative scaling amount between the first image and the second image is highly correlated with the translation of the light field camera in the z-axis direction of the augmented reality device. The light field camera can thus be fixed on a motion device with a known moving distance to perform shooting and calculation multiple times, and a functional relation between the translation distance of the light field camera along the z-axis direction of the augmented reality device and the relative scaling amount between the first image and the second image is established, so that, during actual shooting, the first translation amount of the light field camera along the z-axis direction of the augmented reality device can be calculated in real time from the relative scaling amount.
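A sketch of the scaling computation and of the calibrated mapping to a z-axis translation; the linear form of the zoom-to-distance function is an assumption for illustration (the text only requires that the relation be established from known stage movements).

```python
import numpy as np

def relative_zoom(first_positions, second_positions,
                  first_center, second_center):
    """Relative scaling amount: average ratio of corresponding
    feature-point distances to the respective image centers."""
    ratios = [np.hypot(p1[0] - first_center[0], p1[1] - first_center[1]) /
              np.hypot(p2[0] - second_center[0], p2[1] - second_center[1])
              for p1, p2 in zip(first_positions, second_positions)]
    return float(np.mean(ratios))

def z_translation(zoom, slope, intercept):
    """First translation amount from the calibrated (here assumed
    linear) zoom-vs-distance function."""
    return slope * zoom + intercept
```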
S105, respectively taking area images in four directions from the second image; and respectively determining four sharpness values corresponding to the area images in the four directions.
In this embodiment, the second image is obtained by shooting the first image with the light field camera; when the area images in four directions are taken from the second image, the four directions may be up, down, left, and right.
It should be noted that the upper and lower directions correspond to each other, and the left and right directions correspond to each other.
In the embodiment of the present application, when determining the four sharpness values corresponding to the area images in the four directions, the upper region image, the lower region image, the left region image, and the right region image may be respectively taken from the second image; and a first sharpness value corresponding to the upper region image, a second sharpness value corresponding to the lower region image, a third sharpness value corresponding to the left region image, and a fourth sharpness value corresponding to the right region image are respectively determined.
In the embodiment of the application, as shown in fig. 6, the amount of movement OO′ of the light field camera along the x-axis of the augmented reality device is equal to the corresponding image center offset O₁O₂, where O₁ and O₂ are the feature points on the first image and the second image. When the offset O₁O₂ is smaller than the object-space resolution of the test system, that is, when the pixel movement amount of the feature point from the first image to the second image is less than 1, the offset of the light field camera cannot be obtained through simple image processing.
In the embodiments of the present application, the clear imaging area of the augmented reality device is extremely small, typically only a region a few millimeters wide in front of its lens. Therefore, when the light field camera is displaced along the x-axis or y-axis direction of the augmented reality device, the corresponding portion of the light field camera lens is no longer in the optimal imaging area of the augmented reality device lens, so the edge area of the image captured by the light field camera becomes blurred, and the degree of blurring is positively correlated with the degree of offset. Thus, the offset of the light field camera in the x-axis and y-axis directions of the augmented reality device can be calculated by quantifying the degree of blur of the edge regions of the image.
In the embodiment of the present application, taking the light field camera translating along the x-axis direction of the augmented reality device as an example, a left area image and a right area image are taken from the second image, where the left area image and the right area image are equal in size, symmetrical in position, and similar in image content.
It should be noted that requiring the images to be similar avoids the sharpness difference between the left and right regions being affected by the pattern content.
In the embodiment of the present application, after the left region image and the right region image are taken from the second image, the third sharpness value of the left region image and the fourth sharpness value of the right region image are respectively calculated.
In this embodiment of the present application, the manner of calculating the sharpness value may be any algorithm that reflects image sharpness, for example, a Sobel gradient algorithm that computes gradients over the whole image region and averages them, or a Laplacian gradient algorithm that does the same.
The method for calculating the image sharpness value is not limited to the calculation method in the present application, and may be specifically selected according to the actual situation, and is not specifically limited in the present application.
It should be noted that, when the light field camera translates along the y-axis direction of the augmented reality device, the first sharpness value and the second sharpness value corresponding to the upper region image and the lower region image of the second image are calculated; this may follow the above-mentioned manner of calculating the third sharpness value and the fourth sharpness value and is not described herein again.
In this embodiment of the application, with the above calculation method, the third sharpness value and the fourth sharpness value corresponding to the left region image and the right region image taken from the second image are obtained for translation of the light field camera along the x-axis of the augmented reality device, and the first sharpness value and the second sharpness value corresponding to the upper region image and the lower region image taken from the second image are obtained for translation of the light field camera along the y-axis of the augmented reality device.
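The sketch below takes the four symmetric edge regions and scores each with a mean Sobel gradient magnitude, matching the Sobel option mentioned above; the 20% region width and all names are illustrative assumptions.

```python
import cv2
import numpy as np

def sharpness(region):
    """Sharpness value of a region: mean Sobel gradient magnitude."""
    gx = cv2.Sobel(region, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(region, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def edge_region_sharpness(img, frac=0.2):
    """Four sharpness values for the up/down/left/right edge regions
    of the second image (equal-size, symmetric regions)."""
    h, w = img.shape[:2]
    dh, dw = int(h * frac), int(w * frac)
    return {
        "up": sharpness(img[:dh, :]),
        "down": sharpness(img[h - dh:, :]),
        "left": sharpness(img[:, :dw]),
        "right": sharpness(img[:, w - dw:]),
    }
```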
And S106, determining a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values.
In an embodiment of the application, the second translation amount includes a first sub-translation amount of the light field camera along the y-axis of the augmented reality device and a second sub-translation amount of the light field camera along the x-axis of the augmented reality device.
In an embodiment of the present application, the determining of the first sub-translation amount of the light field camera along the y-axis of the augmented reality device may be determining the first sub-translation amount according to the first sharpness value and the second sharpness value.
In the embodiment of the present application, a specific implementation manner of determining the first sub-translation amount may be to determine a first difference between the first sharpness value and the second sharpness value, and determine a first sub-translation amount corresponding to the first difference from the correspondence between the sharpness difference and the translation amount.
In an embodiment of the present application, the determining of the second sub-translation amount of the light field camera along the x-axis of the augmented reality device may be determining the second sub-translation amount according to the third sharpness value and the fourth sharpness value.
In the embodiment of the present application, a specific implementation manner of determining the second sub-translation amount may be to determine a second difference between the third sharpness value and the fourth sharpness value, and determine a second sub-translation amount corresponding to the second difference from the correspondence between the sharpness difference and the translation amount.
In the embodiment of the application, each translation amount corresponds to a sharpness difference; the translation amount and the sharpness difference are modeled to obtain a relation curve between the sharpness difference and the translation amount. In actual operation, after the first difference or the second difference is obtained, the translation amount of the light field camera along the x-axis or the y-axis of the augmented reality device can be determined according to the relation curve between the sharpness difference and the translation amount.
Exemplarily, the relationship between the Sobel sharpness difference of the left region image and the right region image and the offset of the light field camera along the x-axis direction of the augmented reality device is shown in fig. 7; the horizontal axis is the offset of the light field camera along the x-axis direction of the augmented reality device, which can be controlled by an electric displacement stage, and the vertical axis is the sharpness difference between the left region and the right region of the image. It can be seen that there is an approximately linear relationship between the two.
It should be noted that negative values in fig. 7 represent the light field camera moving to the left along the x-axis of the augmented reality device, and positive values represent the light field camera moving to the right along the x-axis of the augmented reality device.
It should be noted that, for the offset of the light field camera along the y-axis direction of the augmented reality device, the algorithm is similar to that along the x-axis direction; the regions used to calculate the sharpness simply change from left and right to up and down.
It should be noted that when the first difference or the second difference approaches 0, it represents that the light field camera has moved into the optimal imaging area.
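Given the approximately linear curve of fig. 7, the mapping from a sharpness difference back to an offset can be sketched as a simple least-squares fit over calibration data gathered at known stage positions; the linearity assumption and the function names are illustrative.

```python
import numpy as np

def fit_offset_model(sharpness_diffs, offsets):
    """Fit the sharpness-difference-to-offset line from calibration
    shots taken at known electric-displacement-stage positions."""
    slope, intercept = np.polyfit(sharpness_diffs, offsets, 1)
    return slope, intercept

def sub_translation(diff, slope, intercept):
    """Sub-translation amount along one axis from a sharpness
    difference; a value near zero means the camera is already in
    the optimal imaging area."""
    return slope * diff + intercept
```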
In particular, for a camera or camera system such as a light field camera that can provide multiple viewing angles, the sharpness calculation can be performed on its left and right view images instead of the left and right regions of the same picture.
For example, taking the light field camera translating along the x-axis of the augmented reality device as an example, as shown in fig. 8, the degree of blur on the right side is significantly higher than that on the left side, so the calculated sharpness of the right region is lower than that of the left region; it can thus be determined that the camera currently has a rightward offset, i.e., has moved along the x-axis of the augmented reality device, resulting in right-side blur.
And S107, carrying out pose correction on the light field camera according to the first translation amount, the second translation amount and the rotation amount.
In the embodiment of the present application, the first translation amount is the translation amount of the light field camera along the z-axis of the augmented reality device, the second translation amount is the translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device, and the rotation amount is the rotation amount of the light field camera along the x-axis, the y-axis, and the z-axis of the augmented reality device. Therefore, when the augmented reality device is shot by the light field camera, the pose transformation amounts along the six degrees of freedom of the augmented reality device can be determined from a single shot, and the pose of the light field camera is corrected according to the corresponding first translation amount, second translation amount, and rotation amount, so that three-dimensional detection can be performed by the light field camera after pose correction.
It can be understood that, with the correction method provided in the embodiment of the present application, in the process of correcting the pose of the light field camera, the light field camera shoots the first image on the augmented reality device to obtain the second image. According to the relationship between the first position and the second position determined by the corresponding feature points extracted from the first image and the second image, together with the determined central positions, the rotation amounts of the light field camera along the x-axis, y-axis, and z-axis of the augmented reality device and the translation amount of the light field camera along the z-axis of the augmented reality device can be quickly calculated; through the sharpness values of the area images in the four directions of the second image, the translation amounts of the light field camera along the x-axis and the y-axis of the augmented reality device can be quickly determined. The pose of the light field camera can then be corrected automatically using the determined rotation amounts and translation amounts corresponding to the six degrees of freedom relative to the augmented reality device. In this way, the inaccuracy of manual visual judgment is avoided, and the correction accuracy and efficiency of the light field camera are greatly improved.
Based on the foregoing embodiments, a calibration method provided in the present application, as shown in fig. 9, specifically includes the following steps:
step 1, searching a characteristic point on a first image; determining a first position of the characteristic point on the first image and a first central position of the first image;
step 2, determining a second position of the feature point on the second image and a second central position of the second image;
step 3, determining the distance of a connecting line between the first position and the second position and a first included angle between the connecting line and the z axis of the light field camera;
step 4, determining a first rotation amount according to the distance of the connecting line;
step 5, determining a second rotation amount according to the first included angle;
step 6, determining a first distance between the first position and the first center position; determining a second distance between the second location and the second center location;
step 7, determining a first scaling amount according to the first distance and the second distance;
step 8, determining a first translation amount corresponding to the first scaling amount according to the functional relation between the scaling amount and the translation amount;
step 9, respectively taking an upper area image, a lower area image, a left area image and a right area image from the second image;
step 10, respectively determining a first sharpness value corresponding to the upper area image, a second sharpness value corresponding to the lower area image, a third sharpness value corresponding to the left area image, and a fourth sharpness value corresponding to the right area image;
step 11, determining a first difference between the first sharpness value and the second sharpness value; determining a first sub-translation amount corresponding to the first difference from the correspondence between the sharpness difference and the translation amount;
step 12, determining a second difference between the third sharpness value and the fourth sharpness value; determining a second sub-translation amount corresponding to the second difference from the correspondence between the sharpness difference and the translation amount;
and step 13, correcting the position and the attitude of the light field camera according to the first translation amount, the second translation amount and the rotation amount.
It should be noted that, in a specific calculation process, the execution sequence of the above steps is not limited to the execution sequence in the present application, and specifically, the execution sequence of the steps may be adjusted according to actual situations, which is not specifically limited in the present application.
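Purely for illustration, the sketches given earlier in this description can be chained into the flow of steps 1 to 13 as follows; `calib` is an assumed bundle of calibration constants (working distance, lens magnification, fitted line models), and the simplifications noted in the comments are not part of the claimed method.

```python
def correct_pose(first_img, second_img, calib):
    """Illustrative end-to-end flow of steps 1-13 using the sketch
    functions defined above; returns the six pose components.
    Assumes at least two matched feature-point pairs."""
    kps1, desc1, pos1, c1 = find_feature_points(first_img)
    kps2, desc2, pos2, c2 = find_feature_points(second_img)
    pairs = match_feature_points(desc1, kps1, desc2, kps2)
    (p1a, p2a), (p1b, p2b) = pairs[0], pairs[1]

    # Steps 3-5: horizontal shift -> rotation about the y-axis,
    # vertical shift -> rotation about the x-axis, line orientation
    # change -> rotation about the z-axis (a simplification).
    rot_y = rotation_amount(abs(p2a[0] - p1a[0]),
                            calib["magnification"], calib["working_distance"])
    rot_x = rotation_amount(abs(p2a[1] - p1a[1]),
                            calib["magnification"], calib["working_distance"])
    rot_z = roll_rotation_amount(p1a, p1b, p2a, p2b)

    # Steps 6-8: first translation amount (z-axis) from relative zoom.
    firsts, seconds = zip(*pairs)
    zoom = relative_zoom(firsts, seconds, c1, c2)
    t_z = z_translation(zoom, *calib["zoom_model"])

    # Steps 9-12: second translation amounts (x/y) from sharpness.
    s = edge_region_sharpness(second_img)
    t_y = sub_translation(s["up"] - s["down"], *calib["y_model"])
    t_x = sub_translation(s["left"] - s["right"], *calib["x_model"])

    # Step 13: the six components are handed to the motion stage.
    return {"rot_x": rot_x, "rot_y": rot_y, "rot_z": rot_z,
            "t_x": t_x, "t_y": t_y, "t_z": t_z}
```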
Based on the above-mentioned embodiment, in another embodiment of the present application, there is provided a terminal device 1, as shown in fig. 10, the terminal device 1 includes:
a finding unit 10 for finding the feature points on the first image.
A determining unit 11, configured to determine a first position of the feature point on the first image and a first central position of the first image, the first image being an image displayed on a virtual imaging surface of the augmented reality device; determine a second position of the feature point on the second image and a second central position of the second image, the second image being an image obtained by shooting the first image with the light field camera; determine rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device according to the first position and the second position; determine a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first central position and the second central position; respectively take area images in four directions from the second image; respectively determine four sharpness values corresponding to the area images in the four directions; and determine a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values.
And the correcting unit 12 is used for correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount.
Optionally, the rotation amounts comprise a first rotation amount of the light field camera along x and y axes of the augmented reality device and a second rotation amount of the light field camera along z axis of the augmented reality device.
Optionally, the determining unit 11 is further configured to determine a distance between a connection line between the first position and the second position and a first included angle between the connection line and the z-axis of the light field camera; determining a first rotation amount according to the distance of the connecting line; a second amount of rotation is determined based on the first angle.
Optionally, the terminal device may further include an acquisition unit;
the acquisition unit is configured to acquire the camera working distance of the light field camera.
Optionally, the determining unit 11 is further configured to determine the first rotation amount by using the camera working distance and the distance of the connecting line.
Optionally, the determining unit 11 is further configured to determine a first distance between the first position and the first center position; determining a second distance between the second location and the second center location; determining a first scaling amount according to the first distance and the second distance; and determining a first translation amount corresponding to the first scaling amount according to the functional relation between the scaling amount and the translation amount.
Optionally, the determining unit 11 is further configured to take an upper area image, a lower area image, a left area image, and a right area image from the second image, respectively; and respectively determine a first sharpness value corresponding to the upper area image, a second sharpness value corresponding to the lower area image, a third sharpness value corresponding to the left area image, and a fourth sharpness value corresponding to the right area image.
Optionally, the second amount of translation comprises: a first sub-translation of the light field camera along a y-axis of the augmented reality device and a second sub-translation of the light field camera along an x-axis of the augmented reality device.
Optionally, the determining unit 11 is further configured to determine the first sub-translation amount according to the first sharpness value and the second sharpness value; and determine the second sub-translation amount according to the third sharpness value and the fourth sharpness value.
Optionally, the determining unit 11 is further configured to determine a first difference between the first sharpness value and the second sharpness value; and determine a first sub-translation amount corresponding to the first difference from the correspondence between the sharpness difference and the translation amount.
Optionally, the determining unit 11 is further configured to determine a second difference between the third sharpness value and the fourth sharpness value; and determine a second sub-translation amount corresponding to the second difference from the correspondence between the sharpness difference and the translation amount.
The embodiment of the application provides a terminal device, which searches for a feature point on the first image; determines a first position of the feature point on the first image and a first central position of the first image, the first image being an image displayed on a virtual imaging surface of the augmented reality device; determines a second position of the feature point on the second image and a second central position of the second image, the second image being an image obtained by shooting the first image with the light field camera; determines rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device according to the first position and the second position; determines a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first central position and the second central position; respectively takes area images in four directions from the second image; respectively determines four sharpness values corresponding to the area images in the four directions; determines a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values; and corrects the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount. Therefore, with the terminal device provided by the embodiment of the application, in the process of correcting the pose of the light field camera, the light field camera shoots the first image on the augmented reality device to obtain the second image; from the relationship between the first position and the second position determined by the corresponding feature points extracted from the first image and the second image, together with the determined central positions, the rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device and its translation amount along the z-axis can be rapidly calculated; from the sharpness values of the area images in the four directions of the second image, the translation amounts of the light field camera along the x-axis and the y-axis of the augmented reality device can be rapidly determined; and the pose of the light field camera can then be corrected automatically using the determined rotation amounts and translation amounts corresponding to the six degrees of freedom relative to the augmented reality device. In this way, the inaccuracy of manual visual judgment is avoided, and the correction accuracy and efficiency of the light field camera are greatly improved.
Fig. 11 is a schematic diagram of the composition structure of a terminal device 1 according to an embodiment of the present application. In practical applications, based on the same inventive concept as the foregoing embodiments, as shown in fig. 11, the terminal device 1 of this embodiment includes a processor 13, a memory 14, and a communication bus 15.
In a specific embodiment, the searching unit 10, the determining unit 11, the correcting unit 12, and the obtaining unit may be implemented by a processor 13 located on the terminal device 1, and the processor 13 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a CPU, a controller, a microcontroller, and a microprocessor. It can be understood that the electronic device implementing the above processor function may also be another device, which is not specifically limited in this embodiment.
In the embodiment of the present application, the communication bus 15 is used to realize connection and communication between the processor 13 and the memory 14; the processor 13 implements the following correction method when executing the executable program stored in the memory 14:
searching for a feature point on the first image; determining a first position of the feature point on the first image and a first center position of the first image, the first image being an image displayed on a virtual imaging surface of the augmented reality device; determining a second position of the feature point on the second image and a second center position of the second image, the second image being an image obtained by shooting the first image with the light field camera; determining rotation amounts of the light field camera along the x-axis, y-axis and z-axis of the augmented reality device according to the first position and the second position; determining a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first center position and the second center position; respectively taking area images in four directions from the second image; respectively determining four sharpness values corresponding to the area images in the four directions; determining a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values; and correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount.
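For illustration only, the overall flow of this correction method can be summarized in the following minimal Python sketch. All step functions (estimate_rotation, estimate_z_translation, four_direction_sharpness, lookup_translation) are hypothetical names sketched after the corresponding paragraphs below, and the camera attributes and apply_pose_correction call are likewise assumptions for illustration, not part of this application:

import cv2

def find_feature_point(image):
    # Strongest corner as a simple illustrative feature point.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=1, qualityLevel=0.01, minDistance=10)
    return tuple(corners[0, 0])  # (x, y)

def image_center(image):
    h, w = image.shape[:2]
    return (w / 2.0, h / 2.0)

def correct_pose(first_image, second_image, camera):
    # Feature point and center positions on the first and second images.
    p1, c1 = find_feature_point(first_image), image_center(first_image)
    p2, c2 = find_feature_point(second_image), image_center(second_image)
    # Rotation amounts along the x-, y- and z-axes of the augmented reality device.
    first_rotation, second_rotation = estimate_rotation(p1, p2, camera.working_distance)
    # First translation amount along the z-axis, from the scaling between the images.
    t_z = estimate_z_translation(p1, c1, p2, c2, camera.scale_to_translation_gain)
    # Sharpness values of the four directional area images.
    s1, s2, s3, s4 = four_direction_sharpness(second_image)
    # Second translation amount: sub-translations along the y-axis and x-axis.
    t_y = lookup_translation(s1 - s2, camera.diff_samples, camera.translation_samples)
    t_x = lookup_translation(s3 - s4, camera.diff_samples, camera.translation_samples)
    # Correct the pose over all six degrees of freedom.
    camera.apply_pose_correction((first_rotation, second_rotation), (t_x, t_y, t_z))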
Further, the rotation amounts include a first rotation amount of the light field camera along the x-axis and the y-axis of the augmented reality device and a second rotation amount of the light field camera along the z-axis of the augmented reality device.
Further, the processor 13 is further configured to determine the distance of the connecting line between the first position and the second position, and a first included angle between the connecting line and the z-axis of the light field camera; to determine the first rotation amount according to the distance of the connecting line; and to determine the second rotation amount according to the first included angle.
Further, the processor 13 is further configured to obtain the camera working distance of the light field camera, and to determine the first rotation amount using the camera working distance and the distance of the connecting line.
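As a non-authoritative sketch of this step, one may assume a small-angle model in which the first rotation amount follows from the connecting-line distance and the camera working distance, and the second rotation amount from the in-plane direction of the displacement; the application itself does not fix these formulas:

import numpy as np

def estimate_rotation(p1, p2, working_distance):
    # Displacement of the feature point between the first and second images.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    line_distance = np.hypot(dx, dy)  # distance of the connecting line
    # First rotation amount (about the x/y axes): angle subtended by the
    # connecting line at the camera working distance (assumed relation).
    first_rotation = np.arctan2(line_distance, working_distance)
    # Second rotation amount (about the z-axis): first included angle,
    # approximated here by the in-plane direction of the displacement.
    second_rotation = np.arctan2(dy, dx)
    return first_rotation, second_rotation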
Further, the processor 13 is further configured to determine a first distance between the first position and the first center position; to determine a second distance between the second position and the second center position; to determine a first scaling amount according to the first distance and the second distance; and to determine the first translation amount corresponding to the first scaling amount according to the functional relation between scaling amounts and translation amounts.
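A minimal sketch of this step, assuming a linear functional relation between scaling amount and translation amount with a calibrated gain k; the application only requires that some functional relation be established, so the linear form is an assumption:

import numpy as np

def estimate_z_translation(p1, c1, p2, c2, k):
    d1 = np.hypot(p1[0] - c1[0], p1[1] - c1[1])  # first distance
    d2 = np.hypot(p2[0] - c2[0], p2[1] - c2[1])  # second distance
    first_scaling = d2 / d1                      # first scaling amount
    # Assumed linear relation: a scaling of exactly 1 means no z translation.
    return k * (first_scaling - 1.0)             # first translation amount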
Further, the processor 13 is further configured to take an upper area image, a lower area image, a left area image, and a right area image from the second image; and to respectively determine a first sharpness value corresponding to the upper area image, a second sharpness value corresponding to the lower area image, a third sharpness value corresponding to the left area image, and a fourth sharpness value corresponding to the right area image.
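A sketch of this step using the variance of the Laplacian as the sharpness measure, a common focus metric; the application does not prescribe a particular sharpness definition, and the 25% border fraction used to cut the four area images is an assumption:

import cv2

def region_sharpness(region):
    # Variance of the Laplacian: higher values indicate sharper focus.
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY) if region.ndim == 3 else region
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def four_direction_sharpness(second_image, border=0.25):
    h, w = second_image.shape[:2]
    bh, bw = int(h * border), int(w * border)
    upper, lower = second_image[:bh, :], second_image[h - bh:, :]
    left, right = second_image[:, :bw], second_image[:, w - bw:]
    # First to fourth sharpness values: upper, lower, left, right area images.
    return tuple(region_sharpness(r) for r in (upper, lower, left, right))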
Further, the second translation amount includes: a first sub-translation amount of the light field camera along the y-axis of the augmented reality device and a second sub-translation amount of the light field camera along the x-axis of the augmented reality device.
Further, the processor 13 is further configured to determine the first sub-translation amount according to the first sharpness value and the second sharpness value; and to determine the second sub-translation amount according to the third sharpness value and the fourth sharpness value.
Further, the processor 13 is further configured to determine a first difference between the first sharpness value and the second sharpness value; and to determine the first sub-translation amount corresponding to the first difference from the correspondence between sharpness difference values and translation amounts.
Further, the processor 13 is further configured to determine a second difference between the third sharpness value and the fourth sharpness value; and to determine the second sub-translation amount corresponding to the second difference from the correspondence between sharpness difference values and translation amounts.
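A sketch of the correspondence lookup for both sub-translation amounts, assuming the stored correspondence is a table of calibrated (sharpness difference, translation) samples interpolated linearly; the table format and interpolation are assumptions, as the application only requires that a correspondence be stored:

import numpy as np

def lookup_translation(sharpness_diff, diff_samples, translation_samples):
    # diff_samples must be sorted ascending; np.interp clamps at the table ends.
    return float(np.interp(sharpness_diff, diff_samples, translation_samples))

# First sub-translation amount (y-axis):  lookup_translation(s1 - s2, diff_samples, translation_samples)
# Second sub-translation amount (x-axis): lookup_translation(s3 - s4, diff_samples, translation_samples)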
Based on the foregoing embodiments, an embodiment of the present application provides a computer-readable storage medium that stores one or more programs, where the one or more programs are executable by one or more processors and applied to a terminal device, and when executed, the one or more programs implement the correction method described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), which includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to perform the methods described in the embodiments of the present application.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A correction method, the method comprising:
searching for a feature point on a first image; determining a first position of the feature point on the first image and a first center position of the first image; the first image is an image displayed on a virtual imaging surface of an augmented reality device;
determining a second position of the feature point on a second image and a second center position of the second image; the second image is an image obtained by shooting the first image with a light field camera;
determining, according to the first position and the second position, rotation amounts of the light field camera along an x-axis, a y-axis, and a z-axis of the augmented reality device;
determining a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first center position, and the second center position;
respectively taking area images in four directions from the second image; respectively determining four sharpness values corresponding to the area images in the four directions;
determining a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values;
and correcting the pose of the light field camera according to the first translation amount, the second translation amount and the rotation amount.
2. The method of claim 1, wherein the rotation amounts comprise a first rotation amount of the light field camera along the x-axis and the y-axis of the augmented reality device and a second rotation amount of the light field camera along the z-axis of the augmented reality device; and the determining, according to the first position and the second position, rotation amounts of the light field camera along the x-axis, the y-axis, and the z-axis of the augmented reality device comprises:
determining a distance of a connecting line between the first position and the second position and a first included angle between the connecting line and the z axis of the light field camera;
determining the first rotation amount according to the distance of the connecting line;
and determining the second rotation amount according to the first included angle.
3. The method of claim 2, wherein the determining the first rotation amount according to the distance of the connecting line comprises:
acquiring a camera working distance of the light field camera;
and determining the first rotation amount by using the camera working distance and the distance of the connecting line.
4. The method of claim 1, wherein the determining a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first center position, and the second center position comprises:
determining a first distance between the first position and the first center position;
determining a second distance between the second position and the second center position;
determining a first scaling amount according to the first distance and the second distance;
and determining the first translation amount corresponding to the first scaling amount according to the functional relation between the scaling amount and the translation amount.
5. The method of claim 1, wherein the four directions comprise up, down, left, and right; and the respectively taking area images in four directions from the second image and respectively determining four sharpness values corresponding to the area images in the four directions comprises:
respectively taking an upper area image, a lower area image, a left area image and a right area image from the second image;
and respectively determining a first sharpness value corresponding to the upper area image, a second sharpness value corresponding to the lower area image, a third sharpness value corresponding to the left area image, and a fourth sharpness value corresponding to the right area image.
6. The method of claim 5, wherein the second translation amount comprises: a first sub-translation amount of the light field camera along the y-axis of the augmented reality device and a second sub-translation amount of the light field camera along the x-axis of the augmented reality device; and the determining a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values comprises:
determining the first sub-translation amount according to the first sharpness value and the second sharpness value;
and determining the second sub-translation amount according to the third sharpness value and the fourth sharpness value.
7. The method of claim 6, wherein the determining the first sub-translation amount according to the first sharpness value and the second sharpness value comprises:
determining a first difference between the first sharpness value and the second sharpness value;
determining the first sub-translation amount corresponding to the first difference from the correspondence between sharpness difference values and translation amounts;
correspondingly, the determining the second sub-translation amount according to the third sharpness value and the fourth sharpness value comprises:
determining a second difference between the third sharpness value and the fourth sharpness value;
and determining the second sub-translation amount corresponding to the second difference from the correspondence between sharpness difference values and translation amounts.
8. A terminal device, characterized in that the terminal device comprises:
a searching unit, configured to search for a feature point on a first image;
a determining unit, configured to determine a first position of the feature point on the first image and a first center position of the first image, wherein the first image is an image displayed on a virtual imaging surface of an augmented reality device; determine a second position of the feature point on a second image and a second center position of the second image, wherein the second image is an image obtained by shooting the first image with a light field camera; determine, according to the first position and the second position, rotation amounts of the light field camera along an x-axis, a y-axis, and a z-axis of the augmented reality device; determine a first translation amount of the light field camera along the z-axis of the augmented reality device according to the first position, the second position, the first center position, and the second center position; respectively take area images in four directions from the second image; respectively determine four sharpness values corresponding to the area images in the four directions; and determine a second translation amount of the light field camera along the x-axis and the y-axis of the augmented reality device according to the four sharpness values;
and a correcting unit, configured to correct the pose of the light field camera according to the first translation amount, the second translation amount, and the rotation amount.
9. A terminal device, characterized in that the terminal device comprises: a processor, a memory, and a communication bus; wherein the processor implements the method of any one of claims 1-7 when executing an executable program stored in the memory.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202211268088.6A 2022-10-17 2022-10-17 Correction method, terminal device and storage medium Pending CN115631245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211268088.6A CN115631245A (en) 2022-10-17 2022-10-17 Correction method, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211268088.6A CN115631245A (en) 2022-10-17 2022-10-17 Correction method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN115631245A (en) 2023-01-20

Family

ID=84903768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211268088.6A Pending CN115631245A (en) 2022-10-17 2022-10-17 Correction method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN115631245A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579813A (en) * 2024-01-16 2024-02-20 四川新视创伟超高清科技有限公司 Focal depth region imaging chip pose angle correction method and system
CN117579813B (en) * 2024-01-16 2024-04-02 四川新视创伟超高清科技有限公司 Focal depth region imaging chip pose angle correction method and system

Similar Documents

Publication Publication Date Title
CN109035320B (en) Monocular vision-based depth extraction method
CN109146980B (en) Monocular vision based optimized depth extraction and passive distance measurement method
CN111179358B (en) Calibration method, device, equipment and storage medium
CN108230397B (en) Multi-view camera calibration and correction method and apparatus, device, program and medium
CN110135455B (en) Image matching method, device and computer readable storage medium
Wöhler 3D computer vision: efficient methods and applications
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
JP6363863B2 (en) Information processing apparatus and information processing method
CN107424196B (en) Stereo matching method, device and system based on weak calibration multi-view camera
CN109479082B (en) Image processing method and apparatus
CN111263142B (en) Method, device, equipment and medium for testing optical anti-shake of camera module
WO2019050417A1 (en) Stereoscopic system calibration and method
CN109427046B (en) Distortion correction method and device for three-dimensional measurement and computer readable storage medium
KR20180105875A (en) Camera calibration method using single image and apparatus therefor
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN115564842A (en) Parameter calibration method, device, equipment and storage medium for binocular fisheye camera
CN115830103A (en) Monocular color-based transparent object positioning method and device and storage medium
CN111681186A (en) Image processing method and device, electronic equipment and readable storage medium
CN114299156A (en) Method for calibrating and unifying coordinates of multiple cameras in non-overlapping area
JP2016218815A (en) Calibration device and method for line sensor camera
CN115631245A (en) Correction method, terminal device and storage medium
JP3696336B2 (en) How to calibrate the camera
CN116380918A (en) Defect detection method, device and equipment
CN115239801B (en) Object positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 102, 1/F; Room 302, 3/F; and Room 402, 4/F, Building 98, No. 1441 Humin Road, Minhang District, Shanghai 200240

Applicant after: Yimu (Shanghai) Technology Co.,Ltd.

Address before: 200240 room 1206, building 1, No. 951, Jianchuan Road, Minhang District, Shanghai

Applicant before: Yimu (Shanghai) Technology Co.,Ltd.
