CN113870361A - Calibration method, device and equipment of depth camera and storage medium - Google Patents

Calibration method, device and equipment of depth camera and storage medium

Info

Publication number
CN113870361A
CN113870361A
Authority
CN
China
Prior art keywords
depth
point information
feature point
calibration
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111152027.9A
Other languages
Chinese (zh)
Inventor
高文昭
程京
刘鑫
焦少慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111152027.9A priority Critical patent/CN113870361A/en
Publication of CN113870361A publication Critical patent/CN113870361A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Abstract

The embodiments of the disclosure disclose a calibration method, apparatus and device for a depth camera, and a storage medium. The method comprises the following steps: controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB image and a depth map; extracting first feature point information from the RGB image and second feature point information from the depth map, wherein the first feature point information includes first feature point coordinates and a first depth, and the second feature point information includes second feature point coordinates and a second depth; and training a set neural network model based on the first feature point information and the second feature point information to obtain a calibration model of the depth camera. By training the set neural network model on the first feature point information from the RGB image and the second feature point information from the depth map to obtain the calibration model, the calibration method provided by the embodiments of the disclosure can improve both the precision and the efficiency of depth camera calibration.

Description

Calibration method, device and equipment of depth camera and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of computer vision, in particular to a method, a device, equipment and a storage medium for calibrating a depth camera.
Background
The traditional way of calibrating a depth camera is based on calibration with planar targets. For example, a checkerboard is printed and attached to a flat surface as the calibration object. Photos of the calibration object are taken from different directions by adjusting the orientation of the calibration object or the camera; feature points are extracted from the photos; and the intrinsic and extrinsic parameters are estimated under the ideal distortion-free assumption by a least squares or maximum likelihood method. The existing calibration method suffers from poor precision and low efficiency.
Disclosure of Invention
The embodiments of the disclosure provide a calibration method, apparatus, device and storage medium for a depth camera, which can improve the precision and efficiency of depth camera calibration.
In a first aspect, an embodiment of the present disclosure provides a calibration method for a depth camera, including:
controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB (red, green and blue) image and a depth image;
extracting first characteristic point information in the RGB image and second characteristic point information in the depth image; wherein the first feature point information includes a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth;
and training a set neural network model based on the first characteristic point information and the second characteristic point information to obtain a calibration model of the depth camera.
In a second aspect, an embodiment of the present disclosure further provides a calibration apparatus for a depth camera, including:
an RGB image and depth image acquisition module, which is used for controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB image and a depth image;
the feature point information extraction module is used for extracting first feature point information in the RGB image and second feature point information in the depth image; wherein the first feature point information includes a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth;
and the calibration model acquisition module is used for training a set neural network model based on the first characteristic point information and the second characteristic point information to acquire a calibration model of the depth camera.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processing devices;
storage means for storing one or more programs;
when executed by the one or more processing devices, the one or more programs cause the one or more processing devices to implement a method of calibrating a depth camera according to an embodiment of the disclosure.
In a fourth aspect, the disclosed embodiment further provides a computer readable medium, on which a computer program is stored, where the computer program, when executed by a processing device, implements the calibration method of the depth camera according to the disclosed embodiment.
The embodiments of the disclosure disclose a calibration method, apparatus and device for a depth camera, and a storage medium. A depth camera to be calibrated is controlled to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB image and a depth map; first feature point information is extracted from the RGB image and second feature point information is extracted from the depth map, where the first feature point information includes first feature point coordinates and a first depth, and the second feature point information includes second feature point coordinates and a second depth; and a set neural network model is trained based on the first feature point information and the second feature point information to obtain a calibration model of the depth camera. By training the set neural network model on the first feature point information from the RGB image and the second feature point information from the depth map to obtain the calibration model, the calibration method provided by the embodiments of the disclosure can improve both the precision and the efficiency of depth camera calibration.
Drawings
Fig. 1 is a flowchart of a calibration method of a depth camera according to an embodiment of the present disclosure;
FIG. 2a is a schematic diagram of a 3D calibration board being photographed by a depth camera provided by an embodiment of the present disclosure;
FIG. 2b is a front view of a calibration block provided by embodiments of the present disclosure;
FIG. 2c is a top view of a calibration block provided by embodiments of the present disclosure;
FIG. 2d is a side view of a calibration block provided by embodiments of the present disclosure;
FIG. 2e is an exemplary view of a 3D calibration plate taken from multiple distances by a depth camera;
FIG. 2f is an exemplary view of a depth camera taking a 3D calibration plate from multiple positions;
FIG. 2g is a diagram illustrating the correspondence between the RGB map and the depth map before calibration;
FIG. 2h is a diagram illustrating the correspondence between the calibrated RGB map and the depth map;
fig. 3 is a schematic structural diagram of a calibration apparatus of a depth camera provided in an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Depth camera technology has matured in recent years and can obtain accurate depth information of a shooting scene at low cost. This depth information can subsequently be used in a variety of services, such as three-dimensional data acquisition for Virtual Reality (VR) real estate and next-generation 3D video. The traditional way of calibrating a depth camera is based on calibration with planar targets. For example, a checkerboard is printed and attached to a flat surface as the calibration object. Photos of the calibration object are taken from different directions by adjusting the orientation of the calibration object or the camera; feature points are extracted from the photos; and the intrinsic and extrinsic parameters are estimated under the ideal distortion-free assumption by a least squares or maximum likelihood method. The existing calibration method suffers from poor precision and low efficiency. To address this problem, embodiments of the present disclosure provide a calibration method for a depth camera.
Fig. 1 is a flowchart of a calibration method for a depth camera according to an embodiment of the present disclosure. This embodiment is applicable to the case of calibrating a depth camera. The method may be executed by a calibration apparatus for a depth camera, which may be implemented in hardware and/or software and may generally be integrated into a device having the depth camera calibration function, such as a server, a mobile terminal, or a server cluster. As shown in fig. 1, the method specifically includes the following steps:
Step 110: controlling the depth camera to be calibrated to shoot the 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB image and a depth map.
Calibration refers to the process of obtaining the intrinsic and extrinsic parameters of a camera model with a certain algorithm by establishing correspondences between points with known coordinates on a calibration object of known size and their image points. Before a depth camera can be used, its intrinsic and extrinsic camera parameters need to be computed, and image coordinates are then mapped to real-world coordinates through this set of intrinsic and extrinsic parameters. The depth camera to be calibrated can specifically be understood as a depth camera whose intrinsic and extrinsic parameters are yet to be obtained.
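For reference only, the mapping mentioned above is conventionally written with the standard pinhole camera model; this equation is textbook background and not a formula given in this application:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \, [\,R \mid t\,]
  \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```

where (u, v) are image coordinates, s is a scale factor, K holds the intrinsic parameters, and [R | t] holds the extrinsic parameters mapping a world point (X_w, Y_w, Z_w) into the camera frame.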
Fig. 2a is a schematic diagram of a 3D calibration board shot by a depth camera provided in the embodiment of the present disclosure. As shown in fig. 2a, a plurality of calibration blocks with different heights are arranged on the 3D calibration plate, and each calibration block is printed with a black-and-white pattern. The 3D calibration board differs from the 2D calibration board in that the calibration blocks of the 3D calibration board are cubes with different sizes and different heights, and the heights are known, and different depths can be simulated. The 3D calibration plate needs to satisfy: the surface of the calibration plate is flat, and the material does not absorb light; the calibration blocks are asymmetric and irregularly distributed on the surface of the calibration plate. The black-and-white pattern may be a planar pattern or a three-dimensional pattern.
Preferably, the black-and-white pattern is a three-dimensional pattern. For better feature point extraction, two-dimensional codes, for example AprilTag codes, can be printed on different calibration blocks of the 3D calibration plate to facilitate feature point matching. Three-dimensional printing of the AprilTag code is also supported, i.e., the cross section of the calibration block is the pattern corresponding to the AprilTag code, and the calibration block is a three-dimensional body with a certain height/depth.
Specifically, the depth camera to be calibrated is controlled to shoot the 3D calibration plate from multiple angles and/or multiple distances. Illustratively, the calibration plate can be shot from several angles, such as front, top and side, to obtain a front view, a top view and a side view. For example, fig. 2b is a front view of a calibration block provided by an embodiment of the present disclosure; fig. 2c is a top view of a calibration block provided by an embodiment of the present disclosure; fig. 2d is a side view of a calibration block provided by an embodiment of the present disclosure. In the depth direction, data is collected every 10 cm within a range of 80 cm to 150 cm from the plane of the 3D calibration plate, and the distance between the plane of the calibration plate and the camera is recorded as D_distance. In the horizontal and vertical directions, the calibration plate is moved to capture data at different positions. For example, fig. 2e is an exemplary view of a 3D calibration plate taken from multiple distances by a depth camera; fig. 2f is an exemplary view of a depth camera taking a 3D calibration plate from multiple positions.
Step 120: extracting first feature point information from the RGB image and second feature point information from the depth map.
The first feature point information comprises a first feature point coordinate and a first depth; the second feature point information includes second feature point coordinates and a second depth.
In this embodiment, since the depth map and the RGB map are captured by the same camera, the depth map can be corrected by using the RGB map.
Specifically, the first feature point information in the RGB image and the second feature point information in the depth map may be extracted as follows: extract the first feature point information of each calibration block in the RGB image and the second feature point information of each calibration block in the depth map, where the first feature points and the second feature points are corner points of the black-and-white pattern. In this embodiment, each calibration block is provided with a black-and-white pattern, and a corner point of the black-and-white pattern can be used as a feature point of the calibration block.
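The application does not name a specific corner detector. Purely as an illustrative sketch, a corner-extraction step of this kind could be written with OpenCV, assuming the black-and-white pattern on each calibration block is checkerboard-like; the function name, the pattern_size parameter and the sub-pixel refinement step are assumptions for illustration, not part of the disclosure:

```python
import cv2
import numpy as np

def extract_block_corners(rgb_image, pattern_size=(3, 3)):
    """Detect corner points of the black-and-white pattern on one calibration block.

    pattern_size is the number of inner corners of the printed pattern and is an
    assumed parameter; the application only states that pattern corners are used
    as feature points.
    """
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    # Refine to sub-pixel accuracy, common practice for calibration targets.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)  # (N, 2) array of pixel coordinates, e.g. P_RGB
```

The same routine could be run on an infrared/depth-aligned intensity image to obtain the corresponding P_IR coordinates.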
For example, the first feature point coordinate may be represented as P_RGB, the second feature point coordinate may be represented as P_IR, and the second depth, extracted from the depth map, may be represented as d.
In this embodiment, the manner of extracting the first depth of each calibration block in the RGB map may be: determining the distance between the optical center of the camera to be calibrated and the 3D calibration plate; a first depth is determined based on the distance and the height of the calibration block.
The height of each corner point on a calibration block relative to the plane of the calibration plate can be represented as D (if the black-and-white pattern is two-dimensional, this equals the height of the calibration block; if the black-and-white pattern is three-dimensional, it equals the sum of the height of the calibration block and the height of the black-and-white pattern). Specifically, the distance between the optical center of the camera to be calibrated and the 3D calibration plate is measured and can be expressed as D_distance, and the height D of each corner point relative to the plane of the calibration plate is known. The first depth is the difference between the distance from the optical center of the camera to the 3D calibration plate and the actual height of the calibration block, and can be expressed as (D_distance - D).
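A minimal sketch of this first-depth computation follows; the function and variable names are chosen here for illustration, since the application itself only defines D_distance and the corner height D:

```python
def first_depth(d_distance_cm, corner_height_cm):
    """First depth of a corner point: the plate-to-optical-center distance minus the
    corner's height above the calibration plate plane, i.e. (D_distance - D)."""
    return d_distance_cm - corner_height_cm

# Example: plate plane at 120 cm, corner on a calibration block 15 cm high.
print(first_depth(120.0, 15.0))  # 105.0
```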
Step 130: training the set neural network model based on the first feature point information and the second feature point information to obtain the calibration model of the depth camera.
The set neural network model may be a multi-layer perceptron model, and may be implemented to map an input data set to an output data set.
In this embodiment, the calibration model of the depth camera is a model for correcting the coordinate information and the depth information of each pixel point of the depth map.
Specifically, the second feature point information is input into a set neural network model to obtain predicted feature point information, the predicted feature point information is compared with the first feature point information to obtain a difference, and each parameter of the set neural network model is adjusted according to the difference to obtain a calibration model of the depth camera.
Further, training the set neural network model based on the first feature point information and the second feature point information, and obtaining the calibration model of the depth camera may be as follows:
a1) Input the second feature point information into the set neural network model to obtain predicted feature point information.
Specifically, the second feature point information includes the second feature point coordinates and the second depth extracted from the depth map. The second feature point coordinates and the second depth are combined into one set of data and input into the set neural network model, so that the coordinates and depth information of the predicted feature point can be obtained.
Illustratively, the second feature point information is represented as (P_IR, d), where P_IR is the second feature point coordinate and d is the second depth. (P_IR, d) is input into the set neural network model to obtain the output predicted feature point information, which may be represented as (P_IR', d'), where P_IR' is the coordinate information of the predicted feature point and d' is the depth information of the predicted feature point.
b1) Determine a loss function between the predicted feature point information and the first feature point information.
The loss function may also be referred to as a cost function, and may be specifically understood as a function representing a difference between the predicted feature point information and the first feature point information.
For example, following the foregoing example, the predicted feature point information may be represented as (P_IR', d'), where P_IR' is the coordinate information of the predicted feature point and d' is the depth information of the predicted feature point; the first feature point information may be represented as (P_RGB, (D_distance - D)), where P_RGB is the first feature point coordinate and (D_distance - D) is the first depth. The difference between the first feature point information (P_RGB, (D_distance - D)) and the predicted feature point information (P_IR', d') is calculated by a preset loss function.
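The application does not commit to a particular loss form. As one hedged example consistent with the quantities above, a weighted squared-error loss could be written as follows, where the specific form and the weight λ are assumptions rather than part of the disclosure:

```latex
\mathcal{L} \;=\; \bigl\lVert P_{IR}' - P_{RGB} \bigr\rVert_2^{2}
\;+\; \lambda \,\bigl( d' - (D_{distance} - D) \bigr)^{2}
```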
c1) Train the set neural network model based on the loss function to obtain the calibration model of the depth camera.
Specifically, the parameters of the set neural network model are adjusted according to the loss function until the loss function satisfies a set condition, at which point the training of the calibration model of the depth camera is complete.
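For illustration only, a minimal training loop of this kind could look as follows in PyTorch. The network width and depth, optimizer, number of epochs and tensor layout are all assumptions, since the application only states that a multi-layer perceptron maps the second feature point information to predictions compared against the first feature point information:

```python
import torch
import torch.nn as nn

class CalibrationMLP(nn.Module):
    """Maps a depth-map feature point (u_ir, v_ir, d) to a corrected (u, v, depth)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x):
        return self.net(x)

def train_calibration_model(second_info, first_info, epochs=500, lr=1e-3):
    """second_info: (N, 3) float tensor of (P_IR, d); first_info: (N, 3) float tensor
    of (P_RGB, D_distance - D). Returns the trained calibration model."""
    model = CalibrationMLP()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        pred = model(second_info)          # predicted feature point info (P_IR', d')
        loss = loss_fn(pred, first_info)   # difference from first feature point info
        loss.backward()
        optimizer.step()
    return model
```

Here second_info would be built from the (P_IR, d) tuples extracted in step 120, and first_info from the corresponding (P_RGB, D_distance - D) tuples.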
Further, after obtaining the calibration model of the depth camera, the method further includes:
a2) Control the calibrated depth camera to shoot a target scene to obtain an RGB image and an initial depth map.
First feature point information of each calibration block, including the first feature point coordinates and the first depth, can be extracted from the RGB image; second feature point information of each calibration block, including the second feature point coordinates and the second depth, can be extracted from the initial depth map.
b2) Input the initial depth map into the calibration model to obtain a corrected depth map.
The initial depth map comprises feature point coordinate information and depth information; the corrected depth map includes corrected feature point coordinate information and depth information.
c2) Align the RGB image and the corrected depth map to obtain an RGBD image corresponding to the target scene.
Specifically, the RGBD image corresponding to the target scene may be obtained by aligning the pixels of the RGB image with those of the corrected depth map.
For example, fig. 2g is a schematic diagram of a corresponding relationship between an RGB map and a depth map before calibration, and pixel shift can be clearly seen; fig. 2h is a schematic diagram of the correspondence relationship between the calibrated RGB map and the depth map, and it can be seen that the pixels of the RGB map and the depth map are aligned.
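Again purely as a sketch (the application does not spell out the alignment procedure), applying the trained model to every pixel of an initial depth map and stacking the result with the RGB image might look like this; the per-pixel application and the simple channel stacking are assumptions for illustration:

```python
import numpy as np
import torch

def correct_depth_map(model, depth_map):
    """Apply the calibration model to each pixel (u, v, depth) of the initial depth map."""
    h, w = depth_map.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([us.ravel(), vs.ravel(), depth_map.ravel()], axis=1).astype(np.float32)
    with torch.no_grad():
        corrected = model(torch.from_numpy(pts)).numpy()
    # Keep only the corrected depth value per original pixel; a full implementation
    # would also warp values to the corrected (u', v') coordinates.
    return corrected[:, 2].reshape(h, w)

def make_rgbd(rgb_image, corrected_depth):
    """Stack the RGB image and the corrected depth map into a 4-channel RGBD image,
    assuming the corrected depth is already pixel-aligned with the RGB image."""
    return np.dstack([rgb_image, corrected_depth])
```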
The embodiments of the disclosure disclose a calibration method, apparatus and device for a depth camera, and a storage medium. A depth camera to be calibrated is controlled to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB image and a depth map; first feature point information is extracted from the RGB image and second feature point information is extracted from the depth map, where the first feature point information includes first feature point coordinates and a first depth, and the second feature point information includes second feature point coordinates and a second depth; and a set neural network model is trained based on the first feature point information and the second feature point information to obtain a calibration model of the depth camera. By training the set neural network model on the first feature point information from the RGB image and the second feature point information from the depth map to obtain the calibration model, the calibration method provided by the embodiments of the disclosure can improve both the precision and the efficiency of depth camera calibration.
Fig. 3 is a schematic structural diagram of a calibration apparatus of a depth camera according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus includes:
an RGB map and depth map obtaining module 210, configured to control the depth camera to be calibrated to shoot the 3D calibration board from multiple angles and/or multiple distances, so as to obtain an RGB map and a depth map;
the feature point information extraction module 220 is configured to extract first feature point information in the RGB map and second feature point information in the depth map; the first feature point information comprises a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth;
and a calibration model obtaining module 230, configured to train the set neural network model based on the first feature point information and the second feature point information, so as to obtain a calibration model of the depth camera.
Optionally, the 3D calibration board is provided with a plurality of calibration blocks with different heights, and each calibration block is printed with a black-and-white pattern.
Optionally, the black-and-white pattern is a three-dimensional pattern.
Optionally, the feature point information extracting module 220 includes:
the calibration block feature point information extraction unit is used for extracting first feature point information of each calibration block in the RGB image and second feature point information of each calibration block in the depth image; the first characteristic point and the second characteristic point are corner points of the black-white pattern.
Optionally, the calibration block feature point information extraction unit is specifically configured to:
determining the distance between the optical center of the camera to be calibrated and the 3D calibration plate;
a first depth is determined based on the distance and the height of the calibration block.
Optionally, the calibration model obtaining module 230 is specifically configured to:
inputting the second characteristic point information into a set neural network to obtain predicted characteristic point information;
determining a loss function between the predicted characteristic point information and the first characteristic point information;
and training the set neural network model based on the loss function to obtain a calibration model of the depth camera.
Optionally, the apparatus further comprises:
the RGB image and initial depth image acquisition module is used for controlling the calibrated depth camera to shoot a target scene to obtain an RGB image and an initial depth image;
the corrected depth map acquisition module is used for inputting the initial depth map into the calibration model to obtain a corrected depth map;
and the RGBD image acquisition module is used for aligning the RGB image and the corrected depth image to acquire an RGBD image corresponding to the target scene.
The device can execute the methods provided by all the embodiments of the disclosure, and has corresponding functional modules and beneficial effects for executing the methods. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the disclosure.
Referring now to FIG. 4, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like, or various forms of servers such as a stand-alone server or a server cluster. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage means 308 into a random access memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device 300 are also stored. The processing means 301, the ROM 302 and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing means 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB (red, green and blue) image and a depth image; extracting first characteristic point information in the RGB image and second characteristic point information in the depth image; wherein the first feature point information includes a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth; and training a set neural network model based on the first characteristic point information and the second characteristic point information to obtain a calibration model of the depth camera.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is disclosed a calibration method for a depth camera, including:
controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB (red, green and blue) image and a depth image;
extracting first characteristic point information in the RGB image and second characteristic point information in the depth image; wherein the first feature point information includes a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth;
and training a set neural network model based on the first characteristic point information and the second characteristic point information to obtain a calibration model of the depth camera.
Furthermore, a plurality of calibration blocks with different heights are arranged on the 3D calibration plate, and black and white patterns are printed on each calibration block.
Further, the black-and-white pattern is a three-dimensional pattern.
Further, extracting first feature point information in the RGB map and second feature point information in the depth map includes:
extracting first characteristic point information of each calibration block in the RGB image and second characteristic point information of each calibration block in the depth image; and the first characteristic point and the second characteristic point are corner points of the black-and-white pattern.
Further, extracting the first depth of each calibration block in the RGB map includes:
determining the distance between the optical center of the camera to be calibrated and the 3D calibration plate;
determining a first depth from the distance and the height of the calibration block.
Further, training a set neural network model based on the first feature point information and the second feature point information to obtain a calibration model of the depth camera, including:
inputting the second feature point information into the set neural network to obtain predicted feature point information;
determining a loss function between the predicted feature point information and the first feature point information;
and training a set neural network model based on the loss function to obtain a calibration model of the depth camera.
Further, after obtaining the calibration model of the depth camera, the method further includes:
controlling the calibrated depth camera to shoot a target scene to obtain an RGB (red, green and blue) image and an initial depth image;
inputting the initial depth map into the calibration model to obtain a corrected depth map;
aligning the RGB image and the corrected depth image to obtain an RGBD image corresponding to the target scene.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art will appreciate that the present disclosure is not limited to the specific embodiments illustrated herein and that various obvious changes, adaptations, and substitutions are possible, without departing from the scope of the present disclosure. Therefore, although the present disclosure has been described in greater detail with reference to the above embodiments, the present disclosure is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present disclosure, the scope of which is determined by the scope of the appended claims.

Claims (10)

1. A method for calibrating a depth camera, comprising:
controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB (red, green and blue) image and a depth image;
extracting first characteristic point information in the RGB image and second characteristic point information in the depth image; wherein the first feature point information includes a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth;
and training a set neural network model based on the first characteristic point information and the second characteristic point information to obtain a calibration model of the depth camera.
2. The method according to claim 1, wherein a plurality of calibration blocks having different heights are disposed on the 3D calibration plate, and each calibration block has a black-and-white pattern printed thereon.
3. The method of claim 2, wherein the black-and-white pattern is a three-dimensional pattern.
4. The method according to claim 2 or 3, wherein extracting first feature point information in the RGB map and second feature point information in the depth map comprises:
extracting first characteristic point information of each calibration block in the RGB image and second characteristic point information of each calibration block in the depth image; and the first characteristic point and the second characteristic point are corner points of the black-and-white pattern.
5. The method of claim 4, wherein extracting the first depth of each calibration block in the RGB map comprises:
determining the distance between the optical center of the camera to be calibrated and the 3D calibration plate;
determining a first depth from the distance and the height of the calibration block.
6. The method of claim 1, wherein training a set neural network model based on the first feature point information and the second feature point information to obtain a calibration model of the depth camera comprises:
inputting the second feature point information into the set neural network model to obtain predicted feature point information;
determining a loss function between the predicted feature point information and the first feature point information;
and training a set neural network model based on the loss function to obtain a calibration model of the depth camera.
7. The method of claim 1, after obtaining the calibration model of the depth camera, further comprising:
controlling the calibrated depth camera to shoot a target scene to obtain an RGB (red, green and blue) image and an initial depth image;
inputting the initial depth map into the calibration model to obtain a corrected depth map;
aligning the RGB image and the corrected depth image to obtain an RGBD image corresponding to the target scene.
8. A calibration apparatus for a depth camera, comprising:
an RGB image and depth image acquisition module, which is used for controlling a depth camera to be calibrated to shoot a 3D calibration plate from a plurality of angles and/or a plurality of distances to obtain an RGB image and a depth image;
the feature point information extraction module is used for extracting first feature point information in the RGB image and second feature point information in the depth image; wherein the first feature point information includes a first feature point coordinate and a first depth; the second feature point information comprises a second feature point coordinate and a second depth;
and the calibration model acquisition module is used for training a set neural network model based on the first characteristic point information and the second characteristic point information to acquire a calibration model of the depth camera.
9. An electronic device, characterized in that the electronic device comprises:
one or more processing devices;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the calibration method of a depth camera according to any one of claims 1 to 7.
10. A computer-readable medium, on which a computer program is stored which, when being executed by processing means, carries out a method for calibration of a depth camera according to any one of claims 1 to 7.
CN202111152027.9A 2021-09-29 2021-09-29 Calibration method, device and equipment of depth camera and storage medium Pending CN113870361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152027.9A CN113870361A (en) 2021-09-29 2021-09-29 Calibration method, device and equipment of depth camera and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111152027.9A CN113870361A (en) 2021-09-29 2021-09-29 Calibration method, device and equipment of depth camera and storage medium

Publications (1)

Publication Number Publication Date
CN113870361A true CN113870361A (en) 2021-12-31

Family

ID=79000497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152027.9A Pending CN113870361A (en) 2021-09-29 2021-09-29 Calibration method, device and equipment of depth camera and storage medium

Country Status (1)

Country Link
CN (1) CN113870361A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708333A (en) * 2022-03-08 2022-07-05 智道网联科技(北京)有限公司 Method and device for generating external reference model of automatic calibration camera


Similar Documents

Publication Publication Date Title
CN108932051B (en) Augmented reality image processing method, apparatus and storage medium
US10593014B2 (en) Image processing apparatus, image processing system, image capturing system, image processing method
CN110300292B (en) Projection distortion correction method, device, system and storage medium
CN109474780B (en) Method and device for image processing
US11282232B2 (en) Camera calibration using depth data
WO2018153313A1 (en) Stereoscopic camera and height acquisition method therefor and height acquisition system
EP4270315A1 (en) Method and device for processing three-dimensional video, and storage medium
CN103078924A (en) Visual field sharing method and equipment
CN110858414A (en) Image processing method and device, readable storage medium and augmented reality system
WO2019076027A1 (en) White balance information synchronization method and device, and computer readable medium
CN112085775A (en) Image processing method, device, terminal and storage medium
CN110796664A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
CN114187366A (en) Camera installation correction method and device, electronic equipment and storage medium
WO2022166868A1 (en) Walkthrough view generation method, apparatus and device, and storage medium
CN113870361A (en) Calibration method, device and equipment of depth camera and storage medium
CN109034214B (en) Method and apparatus for generating a mark
JP2017229067A (en) Method and apparatus for creating pair of stereoscopic images using at least one lightfield camera
CN111385481A (en) Image processing method and device, electronic device and storage medium
WO2021208630A1 (en) Calibration method, calibration apparatus and electronic device using same
JP7225016B2 (en) AR Spatial Image Projection System, AR Spatial Image Projection Method, and User Terminal
CN114140771A (en) Automatic annotation method and system for image depth data set
CN112291445A (en) Image processing method, device, equipment and storage medium
CN114093020A (en) Motion capture method, motion capture device, electronic device and storage medium
CN112308809A (en) Image synthesis method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination