CN114862960A - Multi-camera calibrated image ground leveling method and device, electronic equipment and medium
- Publication number: CN114862960A
- Application number: CN202210369700.2A
- Authority: CN (China)
- Prior art keywords: human body, key point, image, point information, mapping relation
- Prior art date: 2022-04-08
- Legal status: Pending
Classifications
- G06T 7/85 — Stereo camera calibration (under G06T 7/80, analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
- G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T 2207/10004 — Still image; Photographic image
- G06T 2207/10012 — Stereo images
- G06T 2207/30196 — Human being; Person
- G06T 2207/30201 — Face
Abstract
The application discloses a multi-camera calibrated image ground leveling method and device, electronic equipment, and a medium. The method comprises the following steps: establishing a mapping relation from image coordinates to world coordinates during multi-camera calibration; acquiring human body image information, and detecting face key points in the human body image information to obtain human eye key point information and head pose auxiliary key point information; calculating 3D key points in the world coordinate system based on the mapping relation, the human eye key point information, and the head pose auxiliary key point information; acquiring key points of both eyes or a single eye of the human body with consistent postures to construct a fitting plane; and calculating the rotation matrix from the xoy plane of the world coordinate system to the fitting plane, the rotation matrix multiplied by the mapping relation being the mapping relation from image coordinates to the leveled world coordinates. The method uses no extra calibration objects or manual marks, performs automatic ground leveling after camera calibration through human body key point detection, is simple and feasible, and is suitable for a variety of working environments.
Description
Technical Field
The embodiments of the disclosure relate to the technical field of vision measurement, and in particular to a multi-camera calibrated image ground leveling method and device, electronic equipment, and a medium.
Background
With the development of computer vision technology, its applications have become increasingly broad. In general, a computer vision application must determine the correspondence between the three-dimensional geometric position of a point on a space object and its corresponding point in an image, that is, establish a mapping relation between image coordinates and spatial position coordinates. The process of solving for this mapping relation is camera calibration. Calibration of camera parameters is a critical step in computer vision applications: the precision of the calibration result and the stability of the algorithm directly affect the accuracy of the results that computer vision produces. Good camera calibration is therefore a prerequisite for all subsequent work. At present, conventional camera calibration methods include the direct linear transformation (DLT) calibration method and Zhang Zhengyou's calibration method, among others. Conventional camera calibration uses a calibration object of known size, and the algorithm obtains the intrinsic and extrinsic parameters of the camera model by establishing correspondences between known coordinate points on the calibration object and image points.
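For orientation, the following is a minimal sketch of the conventional Zhang-style planar calibration mentioned above, using OpenCV. It is not part of the patent; the board dimensions, square size, and image list are illustrative placeholders, and a multi-camera setup would additionally need pairwise extrinsic calibration (for example cv2.stereoCalibrate) to relate all cameras to a common world frame.

```python
# Hedged sketch of conventional (Zhang-style) calibration with OpenCV.
# Board size, square size, and the image list are placeholders, not values from the patent.
import cv2
import numpy as np

def calibrate_camera(images, board_size=(9, 6), square_size=0.025):
    """Estimate intrinsics and per-view extrinsics from chessboard images."""
    # 3D corner coordinates in the board's own frame (the board lies in the z = 0 plane).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points, image_size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # K: intrinsic matrix, dist: distortion coefficients, rvecs/tvecs: per-view extrinsics.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    return K, dist, rvecs, tvecs
```

A per-camera projection matrix for later triangulation can then be formed as P = K [R | t], with R recovered from rvec via cv2.Rodrigues.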
Conventional camera calibration can establish the mapping relation between pixel coordinates and world coordinates, but because the world coordinate system can be chosen in different ways, world coordinates and spatial position coordinates are not necessarily consistent. A typical problem is that, if the xoy plane of the world coordinate system is not parallel to the ground, the z coordinate computed in computer vision applications is not height information. In some practical applications, for example when judging whether a person is standing or sitting upright in human pose estimation, this makes the judgment inaccurate. Specifically, when a three-dimensional calibration object such as a three-dimensional calibration frame is used, the plane of the lowermost four calibration balls is generally taken as the xoy plane of the world coordinate system; if the manufacturing process is slightly inaccurate or the frame deforms after long service, the xoy plane deviates. When a planar calibration object is used, the world coordinate system takes the calibration board as its reference, so for the first calibration the board must be parallel or perpendicular to the ground. Perpendicularity is difficult to guarantee, and when the camera looks horizontally, a calibration board placed on the ground appears at a large inclination angle in the image and is difficult to detect.
In addition, to obtain the plane of the ground, at least three position coordinate points on the ground must be found, which means at least three feature points must be manually marked or detected in the images of the multiple cameras during calibration. This requires, first, additional calibration objects on the ground for manual labeling or algorithmic detection, and second, that the feature points be unique across the images of the multiple cameras so they can be matched. Manual labeling of this kind is time-consuming, labor-intensive, and error-prone, while algorithmic detection is strongly affected by the environment, viewing angle, and similar factors and requires special detection and matching algorithms.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a multi-camera calibrated image ground leveling method, device, electronic device, and medium, so as to solve the technical problems of prior-art methods that level the calibrated image ground with an additional calibration object: large error, low precision, susceptibility to factors such as the environment and viewing angle, and the need for special detection and matching algorithms.
In view of the above drawbacks of the prior art, the technical solution of the present application is based on the idea that the human body itself can serve as the calibration object, without any external calibration object. Ground leveling does not require the feature point dimensions to be known, only that the feature points be parallel to the ground. Human body key points are unique, the detection algorithms for them are mature and easy to apply, and the design rule only needs to select key points that are parallel to the ground. With the development of deep learning, human body key point detection has matured, and the detection accuracy of some key points, such as face key points, is not inferior to that of humans. Different key points of the human body differ in detection difficulty: key points of the waist and legs are clearly harder to detect than face key points, they cover wide regions, it is difficult to identify the same feature point across multiple cameras, and their detection precision is hard to guarantee. The inventors therefore selected the human eyes as feature points. Because the eye region is small and its features are distinctive, detection is easy and precise. When a person stands upright, the heights of the two eyes above the ground are essentially the same; considering that head pose may change, detections in which the head pose is consistent are used, the two eyes or a single eye serve as feature points, and a plane fitted to several such feature points can be regarded as approximately parallel to the ground. A rotation matrix from the xoy plane to the fitting plane can then be obtained, and the world coordinate system transformed accordingly.
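To make the leveling idea concrete, the sketch below fits a least-squares plane to a set of 3D eye key points and builds the rotation that aligns the world z-axis with the plane normal. This is one reading of the approach under stated assumptions, not code from the patent, and the function names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 array of points; returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # The right singular vector for the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal to point "up" so the rotation is unambiguous
        normal = -normal
    return normal, centroid

def rotation_aligning(a, b):
    """Rotation matrix that rotates unit vector a onto unit vector b (Rodrigues form).
    The antiparallel case (a close to -b) is not handled in this sketch."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, 1.0):
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / np.dot(v, v))

# Example: if `normal` is the fitted eye-plane normal, then
# R = rotation_aligning(normal, np.array([0.0, 0.0, 1.0]))
# brings the fitting plane parallel to xoy; applying R to triangulated world points
# (equivalently, composing R with the calibration mapping relation) levels the world frame.
```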
In order to achieve the above object, in a first aspect of the embodiments of the present disclosure, there is provided a multi-camera calibrated image ground leveling method, including: establishing a mapping relation from image coordinates to world coordinates during multi-camera calibration;
acquiring human body image information, and detecting face key points in the human body image information to obtain human eye key point information and head pose auxiliary key point information;
calculating 3D key points in the world coordinate system based on the mapping relation, the human eye key point information, and the head pose auxiliary key point information;
acquiring key points of both eyes or a single eye of the human body with consistent postures to construct a fitting plane;
and calculating the rotation matrix from the xoy plane of the world coordinate system to the fitting plane, the rotation matrix multiplied by the mapping relation being the mapping relation from image coordinates to the leveled world coordinates.
In a second aspect of the embodiments of the present disclosure, there is provided a multi-camera calibrated image ground leveling device, including:
an establishing unit configured to establish a mapping relation from image coordinates to world coordinates during multi-camera calibration;
a first acquisition unit configured to acquire human body image information and detect face key points in the human body image information to obtain human eye key point information and head pose auxiliary key point information;
a first calculation unit configured to calculate 3D key points in the world coordinate system based on the mapping relation, the human eye key point information, and the head pose auxiliary key point information;
a second acquisition unit configured to acquire key points of both eyes or a single eye of the human body with consistent postures to construct a fitting plane;
and a second calculation unit configured to calculate the rotation matrix from the xoy plane of the world coordinate system to the fitting plane, the rotation matrix multiplied by the mapping relation being the mapping relation from image coordinates to the leveled world coordinates.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
One advantageous effect of the above embodiments of the present disclosure is as follows: the multi-camera calibrated image ground leveling method is compatible with various camera calibration methods; the world coordinate system becomes consistent with the ground, which makes subsequent evaluation convenient; and the method uses no extra calibration objects and needs no manual marking, performing automatic ground leveling after camera calibration through human body key point detection, which is simple, easy to implement, and suitable for a variety of working environments.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic flow diagram of some embodiments of a multi-camera calibrated image ground leveling method according to the present disclosure;
FIG. 2 is a schematic block diagram of some embodiments of a multi-camera calibrated image ground leveling device according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A multi-camera calibrated image ground leveling method according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an image ground leveling method calibrated by multiple cameras according to an embodiment of the present disclosure. As shown in fig. 1, the multi-camera calibrated image ground leveling method includes the following steps:
Step S101: establishing a mapping relation from image coordinates to world coordinates during multi-camera calibration.
Step S102: acquiring human body image information, and detecting face key points in the human body image information to obtain human eye key point information and head pose auxiliary key point information.
In some embodiments, the human body image information is acquired by having a plurality of cameras take synchronized, timed captures at a preset interval. The human body image information comprises images of the human body at different positions over a plurality of different time periods, the number of different time periods being at least three. The head pose auxiliary key point information includes the auxiliary key points used for head pose estimation, such as nose key points, over all preset time periods.
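One possible way to extract the 2D key points referred to in step S102 is sketched below, using MediaPipe FaceMesh as an example detector. The patent does not specify a detector; the landmark indices used here (468 and 473 for the iris centres when refine_landmarks=True, 1 for the nose tip) are assumptions about the FaceMesh topology, not values from the patent.

```python
# Hedged sketch: obtaining per-camera 2D eye and nose key points with MediaPipe FaceMesh.
# Both the choice of detector and the landmark indices are assumptions.
import cv2
import mediapipe as mp

_face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, refine_landmarks=True)

def detect_face_keypoints(bgr_image):
    """Return ((left-eye u, v), (right-eye u, v), (nose u, v)) in pixels, or None if no face is found."""
    h, w = bgr_image.shape[:2]
    result = _face_mesh.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    lm = result.multi_face_landmarks[0].landmark
    to_px = lambda p: (p.x * w, p.y * h)
    return to_px(lm[468]), to_px(lm[473]), to_px(lm[1])   # assumed iris centres and nose tip
```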
Step S103: calculating 3D key points in the world coordinate system based on the mapping relation, the human eye key point information, and the head pose auxiliary key point information.
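Step S103 amounts to triangulating each 2D key point across the calibrated views. A minimal linear (DLT) triangulation sketch is given below; the 3x4 projection matrices stand in for the image-to-world mapping relation and are assumed to come from the calibration step.

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one key point observed in N calibrated cameras.

    proj_mats: list of 3x4 projection matrices (one per camera, from calibration)
    points_2d: list of (u, v) pixel coordinates of the same key point in each camera
    Returns the 3D point in the (unleveled) world coordinate system.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])   # each observation contributes two linear constraints
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]
```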
Step S104: acquiring key points of both eyes or a single eye of the human body with consistent postures to construct a fitting plane.
In some embodiments, the consistent postures are the time points, among all preset time periods, at which the normal vectors of the planes constructed from the eye key points and the auxiliary key points of the human body point in consistent directions.
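One way to implement this consistency test is sketched below: for every capture time, build the plane through the two eye key points and the nose key point, and keep the times whose plane normals stay within a small angle of a reference normal. The 5-degree threshold is an assumed value, not one stated in the patent.

```python
import numpy as np

def head_plane_normal(leye_3d, reye_3d, nose_3d):
    """Unit normal of the plane through the two eye key points and the nose key point."""
    n = np.cross(reye_3d - leye_3d, nose_3d - leye_3d)
    return n / np.linalg.norm(n)

def consistent_pose_times(normals, ref_index=0, max_angle_deg=5.0):
    """Indices of capture times whose head-plane normal is within max_angle_deg of the reference."""
    ref = normals[ref_index]
    cos_thr = np.cos(np.deg2rad(max_angle_deg))
    return [i for i, n in enumerate(normals) if abs(float(np.dot(n, ref))) >= cos_thr]
```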
Step S105: calculating the rotation matrix from the xoy plane of the world coordinate system to the fitting plane; this rotation matrix multiplied by the mapping relation is the mapping relation from image coordinates to the leveled world coordinates.
As an example, in some embodiments, multi-camera calibration is performed with N cameras, and a mapping relation M: 2D_1 … 2D_N → 3D from image coordinates to world coordinates is established. The cameras are set to photograph synchronously at regular intervals, taking pictures I(t,n) of a person standing at different positions, where t is the time and n is the camera index. In each picture I(t,n), face key point detection is performed to obtain the eye key points and the auxiliary key points for head pose estimation, such as LEYE_2D(t,n), REYE_2D(t,n), and NOSE_2D(t,n). From the camera calibration mapping relation M and 2D(t,1) … 2D(t,N), the 3D key points in the world coordinate system are obtained, such as LEYE_3D(t), REYE_3D(t), and NOSE_3D(t). The head pose HEADPOSE(t) is estimated from the face key points; from all times, the k times with the same posture are taken, with k ≥ 3, denoted t1 … tk. The binocular or monocular key points with consistent postures, e.g. LEYE_3D(t1) … LEYE_3D(tk), are taken and a plane LEYE_SURFACE is fitted to them. The rotation matrix R from the xoy plane of the original world coordinate system to the eye fitting plane LEYE_SURFACE is calculated, and R·M is the mapping relation from image coordinates to the leveled world coordinates.
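Putting the pieces together, a possible end-to-end driver for this embodiment is sketched below. It composes the helper functions sketched earlier (detect_face_keypoints, triangulate, head_plane_normal, consistent_pose_times, fit_plane, rotation_aligning); the projection matrices and per-time image lists are placeholders supplied by the calibration and capture steps, not interfaces defined by the patent.

```python
import numpy as np

def level_ground(proj_mats, images_by_time):
    """Sketch of the leveling pipeline: returns the rotation R such that composing R with the
    original image-to-world mapping gives the leveled mapping (a world point X becomes R @ X)."""
    leye_3d, normals = [], []
    for images in images_by_time.values():            # one synchronized image per camera
        kps = [detect_face_keypoints(img) for img in images]
        if any(k is None for k in kps):
            continue
        le = triangulate(proj_mats, [k[0] for k in kps])
        re = triangulate(proj_mats, [k[1] for k in kps])
        no = triangulate(proj_mats, [k[2] for k in kps])
        leye_3d.append(le)
        normals.append(head_plane_normal(le, re, no))

    keep = consistent_pose_times(normals)              # the embodiment requires at least 3 such times
    eye_plane_normal, _ = fit_plane(np.array([leye_3d[i] for i in keep]))
    return rotation_aligning(eye_plane_normal, np.array([0.0, 0.0, 1.0]))
```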
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 2 is a schematic view of a multi-camera calibrated image ground leveling device provided by an embodiment of the disclosure. As shown in fig. 2, the multi-camera calibrated image ground leveling device comprises an establishing unit 201, a first acquiring unit 202, a first calculating unit 203, a second acquiring unit 204, and a second calculating unit 205. The establishing unit 201 is configured to establish a mapping relation from image coordinates to world coordinates during multi-camera calibration; the first acquiring unit 202 is configured to acquire human body image information and perform face key point detection in it to obtain human eye key point information and head pose auxiliary key point information; the first calculating unit 203 is configured to calculate 3D key points in the world coordinate system based on the mapping relation, the human eye key point information, and the head pose auxiliary key point information; the second acquiring unit 204 is configured to acquire key points of both eyes or a single eye of the human body with consistent postures to construct a fitting plane; and the second calculating unit 205 is configured to calculate the rotation matrix from the xoy plane of the world coordinate system to the fitting plane, the rotation matrix multiplied by the mapping relation being the mapping relation from image coordinates to the leveled world coordinates. It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure.
Fig. 3 is a schematic diagram of a computer device 3 provided by the embodiment of the present disclosure. As shown in fig. 3, the computer device 3 of this embodiment includes: a processor 301, a memory 302, and a computer program 303 stored in the memory 302 and operable on the processor 301. The steps in the various method embodiments described above are implemented when the processor 301 executes the computer program 303. Alternatively, the processor 301 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 303.
Illustratively, the computer program 303 may be partitioned into one or more modules/units, which are stored in the memory 302 and executed by the processor 301 to accomplish the present disclosure. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 303 in the computer device 3.
The computer device 3 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computer devices. The computer device 3 may include, but is not limited to, a processor 301 and a memory 302. Those skilled in the art will appreciate that fig. 3 is merely an example of a computer device 3 and is not intended to limit the computer device 3 and may include more or fewer components than shown, or some of the components may be combined, or different components, e.g., the computer device may also include input output devices, network access devices, buses, etc.
The Processor 301 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 302 may be an internal storage unit of the computer device 3, for example, a hard disk or a memory of the computer device 3. The memory 302 may also be an external storage device of the computer device 3, such as a plug-in hard disk provided on the computer device 3, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 302 may also include both an internal storage unit of the computer device 3 and an external storage device. The memory 302 is used for storing computer programs and other programs and data required by the computer device. The storage 302 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions may be adopted in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, the present disclosure may implement all or part of the flow of the methods in the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals or telecommunications signals.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.
Claims (8)
1. A multi-camera calibrated image ground leveling method, characterized by comprising the following steps:
establishing a mapping relation from image coordinates to world coordinates during multi-camera calibration;
acquiring human body image information, and detecting face key points in the human body image information to obtain human eye key point information and head pose auxiliary key point information;
calculating 3D key points in the world coordinate system based on the mapping relation, the human eye key point information, and the head pose auxiliary key point information;
acquiring key points of both eyes or a single eye of the human body with consistent postures to construct a fitting plane;
and calculating the rotation matrix from the xoy plane of the world coordinate system to the fitting plane, the rotation matrix multiplied by the mapping relation being the mapping relation from image coordinates to the leveled world coordinates.
2. The multi-camera calibrated image ground leveling method according to claim 1, wherein the human body image information is acquired by a plurality of cameras taking synchronized, timed photographs at a preset interval.
3. The multi-camera calibrated image ground leveling method according to claim 1, wherein the human body image information comprises image information of a human body at different positions and in a plurality of different time periods.
4. The multi-camera calibrated image ground leveling method according to claim 1, wherein the head pose auxiliary key point information comprises the head pose auxiliary key point information of the human body, used for judging posture consistency, over all preset time periods.
5. The multi-camera calibrated image ground leveling method according to claim 3, wherein the number of the plurality of different time periods is at least three.
6. A multi-camera calibrated image ground leveling device, comprising:
the establishing unit is configured to establish a mapping relation from image coordinates to world coordinates in a multi-camera calibration process;
a first acquisition unit configured to acquire human body image information and detect face key points in the human body image information to obtain human eye key point information and head pose auxiliary key point information;
a first calculation unit configured to calculate a 3D key point in a world coordinate system based on the mapping relationship, the human eye key point information, and the head pose auxiliary key point information;
the second acquisition unit is configured to acquire key points of both eyes or one eye of the human body with consistent postures to construct a fitting plane;
and the second calculation unit is configured to calculate a rotation matrix from the xoy plane to the fitting plane in the world coordinate system, and the rotation matrix is multiplied by the mapping relation to obtain a mapping relation from the image coordinate to the leveled world coordinate.
7. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210369700.2A (CN114862960A) | 2022-04-08 | 2022-04-08 | Multi-camera calibrated image ground leveling method and device, electronic equipment and medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114862960A | 2022-08-05 |
Family
ID=82629556
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116222516A | 2022-12-30 | 2023-06-06 | 北京元客视界科技有限公司 | Method and device for setting optical system coordinate system, electronic equipment and storage medium |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |