CN117788546A - Image depth completion method, device, computer equipment and storage medium - Google Patents

Image depth completion method, device, computer equipment and storage medium

Info

Publication number
CN117788546A
Authority
CN
China
Prior art keywords
depth
depth map
actual
map
estimated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311849042.8A
Other languages
Chinese (zh)
Inventor
高鸣岐
董培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202311849042.8A
Publication of CN117788546A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing, and discloses an image depth completion method, an image depth completion apparatus, a computer device and a storage medium. The image depth completion method comprises the following steps: acquiring an actual depth map obtained from a current scene; estimating an estimated depth map of the current scene through a pre-trained depth estimation model; and determining a depth missing region of the actual depth map according to the estimated depth map, and completing the actual depth map. Because the estimated depth map is obtained through the depth estimation model, no region is omitted during completion and the completion accuracy is higher.

Description

Image depth completion method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image depth completion method, an image depth completion apparatus, a computer device, and a storage medium.
Background
In the computer and industrial fields, depth information plays an important role. Depth information is typically acquired with a depth camera, but a depth camera can only produce a sparse depth map, i.e., a depth map with many missing values; in particular, when an object is reflective or its surface is dark (such as black), depth values are often missing. Missing depth can prevent some tasks from being performed, such as robotic arm grasping based on point cloud information. To address this problem, existing methods often adopt a depth completion algorithm based on deep learning, but such methods are slow and their training data are difficult to obtain: real, paired and aligned RGB and depth images need to be collected, and a model trained on a public dataset is difficult to apply directly to a specific scene. Meanwhile, these methods struggle to complete the depth of reflective objects and dark-material objects, because most depth values for such regions are missing from the training set.
Disclosure of Invention
The application provides an image depth completion method for solving the problem of missing depth, which comprises the following steps:
acquiring an actual depth map obtained from a current scene;
estimating an estimated depth map of the current scene through a pre-trained depth estimation model;
and determining a depth missing region of the actual depth map according to the estimated depth map, and completing the actual depth map.
Further, the determining the depth missing region of the actual depth map according to the estimated depth map and completing the actual depth map includes:
comparing the actual depth map with the estimated depth map, and determining a depth missing region in the actual depth map;
determining an intersection region with depth values of the actual depth map on the estimated depth map, and analyzing the depth values in the intersection region to obtain a mapping relation between the depth values in the estimated depth map and the depth values of the actual depth map;
and completing the depth values in the depth missing region according to the mapping relation.
Further, the analyzing the depth value in the intersection area to obtain a mapping relationship between the depth value in the estimated depth map and the depth value of the actual depth map includes:
dividing the depth values of all pixels in the intersection region of the estimated depth map into intervals according to a preset resolution, and determining the interval in which each depth value falls;
and acquiring the actual depth map, calculating the average of the depth values within each divided interval, taking each average as the representative value of the corresponding interval, and mapping the depth values of the corresponding pixel points in the estimated depth map according to the intervals and the representative values to obtain a mapping relation.
Further, the completing the depth values in the depth missing region according to the mapping relation includes:
acquiring depth values of pixels in a region corresponding to the depth missing region in the estimated depth map;
and querying the real depth value corresponding to each depth value according to the mapping relation, and overwriting the corresponding pixels in the depth missing region with the real depth values.
Further, the comparing the actual depth map with the estimated depth map, and determining the depth missing region in the actual depth map includes:
and comparing the depth values of pixels at the same positions of the actual depth map and the estimated depth map, and taking the region where the estimated depth map has depth values but the actual depth map does not as the depth missing region.
Further, the training method of the depth estimation model comprises the following steps:
constructing simulation scenes, and acquiring RGB images from a plurality of simulation scenes, together with the depth information corresponding to the RGB images, through a camera placed in the simulation scenes, to obtain training set data;
pre-training a depth estimation backbone network of a U-Net structure through general data to obtain a pre-training model, and then inputting the training set data into the pre-training model for training;
and carrying out loss calculation between the depth information recorded in the training set data and the depth information output during training, and adjusting the parameters of the pre-training model to obtain a depth estimation model.
Further, the expression of the loss function is:
d_i = log(y_i) - log(y_i*)
wherein L is the loss value, i is the pixel index, y_i and y_i* represent the paired gray values in the training set data and the corresponding predicted depth map, and n is the total number of pixels in the depth map.
In a second aspect, the present application further provides an image depth completion apparatus, including:
the shooting module is used for acquiring an actual depth map obtained from the current scene;
the estimating module is used for estimating an estimated depth map of the current scene through a pre-trained depth estimation model;
and the completion module is used for determining a depth missing region of the actual depth map according to the estimated depth map and completing the actual depth map.
In a third aspect, the present application also provides a computer device comprising a processor and a memory, the memory storing a computer program which, when run on the processor, performs the image depth completion method.
In a fourth aspect, the present application also provides a readable storage medium storing a computer program which, when run on a processor, performs the image depth completion method.
The invention discloses an image depth completion method, an image depth completion apparatus, a computer device and a storage medium. The method comprises the following steps: acquiring an actual depth map obtained from a current scene; estimating an estimated depth map of the current scene through a pre-trained depth estimation model; and determining a depth missing region of the actual depth map according to the estimated depth map, and completing the actual depth map. Because the estimated depth map is obtained through the depth estimation model, no region is omitted during completion and the completion accuracy is higher.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are required for the embodiments will be briefly described, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope of the present invention. Like elements are numbered alike in the various figures.
FIG. 1 is a schematic flow chart of an image depth completion method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a training process of a depth estimation model according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an image depth completion device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
The terms "comprises," "comprising," "including," or any other variation thereof, are intended to cover a specific feature, number, step, operation, element, component, or combination of the foregoing, which may be used in various embodiments of the present invention, and are not intended to first exclude the presence of or increase the likelihood of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the invention belong. Terms such as those defined in commonly used dictionaries will be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the invention.
The technical scheme of the application is applied to the completion of depth images, for example in scenes such as logistics sorting and palletizing where the depth of an image needs to be recognized: an actual depth map of the current scene is acquired; an estimated depth map of the current scene is estimated through a pre-trained depth estimation model; and a depth missing region of the actual depth map is determined according to the estimated depth map, and the actual depth map is completed. The completion of missing depth is realized through the two steps of depth estimation and depth completion based on a depth prior, which greatly improves the machine vision required in factories, laboratories, production lines and the like, so that the acquired images are of better quality.
The technical scheme of the application is described in the following specific embodiments.
Example 1
As shown in fig. 1, the image depth completion method of the present embodiment includes:
step S100, obtaining an actual depth map obtained by the current scene.
The actual depth map in the present embodiment refers to an image containing depth information captured by a depth camera. "Actual" does not mean that the depth values in the image are the true depths, but that the depth values in the depth map are measured values actually obtained by an instrument.
It will be appreciated that depth values are missing where objects reflect light or their surfaces are dark; such missing values are present in the actual depth map and need to be completed.
Step S200, estimating an estimated depth map of the current scene through a pre-trained depth estimation model.
In this embodiment, an estimated depth map of the current scene is obtained through a pre-trained depth estimation model: specifically, an m×n×3 RGB image is input, and the model outputs an estimated depth map of size m×n. The estimated depth map obtained through the model is not affected by factors such as reflection, and it is denser than the depth map obtained by actual shooting, i.e., there are no missing depth values, so the estimated depth map can be used for the subsequent completion of the actual depth map.
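As an illustration of this inference step (a minimal sketch, not the disclosed implementation), the following assumes a trained U-Net-style depth model is already available as a PyTorch module; the function name and the preprocessing choices are hypothetical.

```python
# Minimal inference sketch. Assumption: `model` is a trained U-Net-style
# depth estimation network (torch.nn.Module) that maps a 1 x 3 x m x n RGB
# tensor to a 1 x 1 x m x n depth tensor.
import numpy as np
import torch

def estimate_depth(model: torch.nn.Module, rgb: np.ndarray) -> np.ndarray:
    """rgb: m x n x 3 uint8 image; returns an m x n estimated depth map."""
    x = torch.from_numpy(rgb).float().permute(2, 0, 1) / 255.0  # 3 x m x n
    x = x.unsqueeze(0)                                          # 1 x 3 x m x n
    model.eval()
    with torch.no_grad():
        pred = model(x)                                         # 1 x 1 x m x n
    return pred.squeeze().cpu().numpy()                         # m x n
```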
As shown in fig. 2, the training process of the depth estimation model includes:
step S210, constructing a simulation scene, and obtaining RGB pictures in a plurality of simulation scenes and depth information corresponding to the RGB pictures through a camera in the simulation scene to obtain training set data.
Before training the depth estimation model, training set data need to be acquired. Considering that the model needs to overcome the depth loss caused by reflection and similar factors, this embodiment builds scenes similar to the actual working scene in simulation, and a camera is placed in the simulation scene to capture RGB images. Meanwhile, because the training set comes from the simulation scene, the depth information corresponding to each RGB image can be read directly from the simulation scene, so that the training set data are obtained.
The simulation scene is randomized: variables such as the number, positions, sizes (length, width and height), surface texture patterns, and placement mode (stacked or not) of the express boxes are randomized, and illumination variables such as the scene brightness, the light source type (point light source or parallel light source), and the light source position and orientation are randomized. Finally, the position and orientation of the camera are randomized, so that the images captured by the camera cover a wide variety of scenes and the trained model can cope with these situations. The trained model can then give reasonable depth values even when objects are reflective or their surfaces are dark.
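A minimal sketch of this domain randomization is given below; every parameter name and numeric range is an illustrative assumption (not taken from this disclosure), and the resulting configuration would be passed to whatever simulator is used.

```python
# Domain-randomization sketch: samples one simulated scene configuration.
# All parameter names and ranges are illustrative assumptions.
import random

def sample_scene_config():
    num_boxes = random.randint(1, 20)
    boxes = [{
        "size_lwh_m": [random.uniform(0.1, 0.6) for _ in range(3)],  # length/width/height
        "position_m": [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0), 0.0],
        "texture_id": random.randrange(100),                         # surface texture pattern
        "stacked": random.random() < 0.5,                            # placement mode
    } for _ in range(num_boxes)]
    light = {
        "type": random.choice(["point", "directional"]),             # light source type
        "intensity": random.uniform(0.3, 1.5),                       # scene brightness
        "position_m": [random.uniform(-2.0, 2.0) for _ in range(3)],
    }
    camera = {
        "position_m": [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0), random.uniform(0.5, 2.0)],
        "look_at_m": [0.0, 0.0, 0.0],
    }
    return {"boxes": boxes, "light": light, "camera": camera}
```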
Step S220, pre-training the depth estimation backbone network of the U-Net structure through general data to obtain a pre-training model, and then inputting the training set data into the pre-training model for training.
The model of this embodiment adopts a depth estimation backbone network with a U-Net structure. Before training directly on the training set data, the network is pre-trained on general data, such as the NYUv2 indoor dataset (a public indoor RGB-D dataset), to obtain a pre-training model. Starting the training from this pre-training model makes the training process smoother.
During training, the RGB image is input into the pre-training model to obtain a predicted depth map; the training set data contain the depth information corresponding to the RGB image, and the corresponding training operation is carried out against this depth information.
Step S230, carrying out loss calculation between the depth information recorded in the training set data and the depth information output during training, and adjusting the parameters of the pre-training model to obtain a depth estimation model.
The present embodiment calculates the loss between the depth information recorded in the training set data and the depth information output during training with the following loss function.
d_i = log(y_i) - log(y_i*)
wherein L is the loss value, i is the pixel index, y_i and y_i* represent the paired gray values in the training set data and the corresponding predicted depth map, and n is the total number of pixels in the depth map.
The loss value is obtained by computing the difference between the estimated depth and the actually sampled depth.
The parameters of the pre-training model are adjusted according to the calculated loss value, and training is complete when the loss value L falls below a certain threshold or when the number of training iterations reaches a preset number, yielding an estimation model that can be used in practice.
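Since the full expression for L is not reproduced above, the sketch below assumes the commonly used scale-invariant aggregation of d_i over all n pixels; that aggregation, the optimizer, and the stopping thresholds are assumptions for illustration, not the disclosed training procedure.

```python
# Loss and training-step sketch. The aggregation of d_i = log y_i - log y_i*
# into L follows the widely used scale-invariant log-depth loss; this choice,
# the Adam optimizer and the thresholds are assumptions, not from the patent.
import torch

def depth_loss(pred, target, lam=0.5, eps=1e-6):
    d = torch.log(pred.clamp(min=eps)) - torch.log(target.clamp(min=eps))  # d_i
    n = d.numel()
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / (n ** 2)

def train(model, loader, max_iters=10000, loss_threshold=1e-3, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    step, done = 0, False
    while not done:
        for rgb, depth_gt in loader:  # (RGB image, simulated ground-truth depth) pairs
            loss = depth_loss(model(rgb), depth_gt)
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            # Training ends when the loss is small enough or the preset
            # number of iterations is reached.
            if loss.item() < loss_threshold or step >= max_iters:
                done = True
                break
    return model
```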
With the estimation model obtained through the above steps, the depth loss caused by reflective objects and dark object surfaces can be overcome in actual use, and a denser depth map with no missing depth values is generated.
Step S300, determining a depth missing region of the actual depth map according to the estimated depth map, and completing the actual depth map.
Because the estimated depth map is generated by the model, it is regarded as an image without missing depth, whereas the actual depth map is the image with missing depth. The actual depth map and the estimated depth map are therefore compared to determine the depth missing region in the actual depth map.
Specifically, the depth values of corresponding pixels in the two maps can be compared; where the estimated depth map has a depth value but the actual depth map does not, the region is a depth missing region.
If neither the estimated depth map nor the actual depth map has a depth value, this part of the region may genuinely have no depth, and it is not treated as a missing region.
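A minimal numpy sketch of this comparison; it assumes missing depth is encoded as zero (some sensors use NaN instead), which is an assumption rather than something stated in the disclosure.

```python
# Missing-region sketch: a pixel is "missing" when the estimated map has a
# depth value but the actual map does not; pixels with depth in both maps
# form the intersection region. Zero is assumed to mean "no depth value".
import numpy as np

def split_regions(actual: np.ndarray, estimated: np.ndarray):
    actual_has = actual > 0
    estimated_has = estimated > 0
    missing_mask = estimated_has & ~actual_has        # depth-missing region
    intersection_mask = estimated_has & actual_has    # both maps have depth
    return missing_mask, intersection_mask
```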
Next, the intersection region that has depth values in both the estimated depth map and the actual depth map is determined on the estimated depth map, and the depth values in the intersection region are analyzed to obtain the mapping relation between the depth values in the estimated depth map and the depth values of the actual depth map.
In this embodiment, the depth values of the pixels in the intersection region of the estimated depth map are divided into intervals according to a preset resolution, and the interval in which each depth value falls is determined. For example, if the depth value of a pixel in the estimated depth map represents a depth of 100.5 cm and the resolution is 1 cm, the depth value falls into the 100th interval [100, 101).
Dividing into intervals by resolution allows depth values with different values to be grouped under a unified value, achieving a certain degree of data integration.
Then, for each interval, the average of the depth values of the corresponding pixel points in the intersection region of the actual depth map is calculated, and these averages are associated with the intervals to obtain the mapping relation.
The intersection region is the region in which depth values exist in both the estimated depth map and the actual depth map. In this region the pixel points correspond one to one, so paired depth values exist, and each of these depth values falls into one of the intervals divided at the preset resolution. The representative value of the interval in which the depth value of a pixel on the estimated depth map falls is determined by averaging the depth values of the corresponding pixel points.
For example, the pixel P1 on the estimated depth map and the pixel P'1 on the actual depth map correspond to each other, with depth values of 100.3 and 100.9 respectively; 100.3 falls in the 100th interval. There are many such point pairs, so the pairs can be put in one-to-one correspondence and the depth values can be assigned to intervals according to the division above, so that several depth values fall within one interval.
The average of all depth values falling within the same interval is calculated and taken as the representative value of that interval. For example, if only the two depth values above currently fall in the 100th interval, the average is 100.6, so 100.6 can be taken as the representative value of the 100th interval; that is, values falling in this interval are regarded as 100.6. Similarly, a representative value can be determined for each interval, and the mapping relation from the depth values in the estimated depth map to the depth values of the actual depth map is obtained from the representative values and the values of the estimated depth map.
It should be noted that the depth values obtained in the actual depth map are considered correct; the actual depth map only suffers from missing depth. The estimated depth map has no missing depth, but its depth values are not necessarily as accurate as the measured values. The mapping is therefore established through interval division, i.e., the correspondence from estimated values to measured values is obtained by combining estimated values and measured values, so that this relation also holds for the regions with missing depth and the filled-in depth values are sufficiently accurate.
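A minimal sketch of building this mapping, reusing the masks from the sketch above and assuming the 1 cm resolution of the example; following the claim wording, the representative value of each interval is the mean of the actual depths whose paired estimated depths fall into that interval.

```python
# Mapping sketch: estimated depths in the intersection region are binned at a
# preset resolution; each bin's representative value is the mean of the paired
# actual (measured) depths. Units follow the example in the text (cm).
import numpy as np

def build_mapping(actual, estimated, intersection_mask, resolution=1.0):
    est_vals = estimated[intersection_mask]
    act_vals = actual[intersection_mask]
    bins = np.floor(est_vals / resolution).astype(int)  # interval index per pixel
    return {int(b): float(act_vals[bins == b].mean()) for b in np.unique(bins)}
```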
The depth values in the depth missing region are then completed according to the mapping relation.
Specifically, the depth value of each pixel of the depth missing region in the estimated depth map is obtained, and the interval to which each depth value belongs is determined; the real depth value corresponding to each interval is queried according to the mapping relation, and the corresponding pixels in the depth missing region are overwritten with these real depth values. The completion of the depth values is thus achieved.
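A matching sketch of the completion step, reusing the mapping built above; falling back to the raw estimated value when an interval was never observed in the intersection region is an assumption, since the disclosure does not specify that case.

```python
# Completion sketch: each missing pixel takes the representative actual depth
# of the interval that its estimated depth falls into; unseen intervals fall
# back to the estimated value itself (an assumption).
import numpy as np

def complete_depth(actual, estimated, missing_mask, mapping, resolution=1.0):
    completed = actual.copy()
    ys, xs = np.nonzero(missing_mask)
    for y, x in zip(ys, xs):
        b = int(np.floor(estimated[y, x] / resolution))
        completed[y, x] = mapping.get(b, estimated[y, x])
    return completed
```

Together with the earlier sketches, one full pass would be: masks from split_regions, mapping from build_mapping, then complete_depth.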
According to the image depth completion method of this embodiment, a model is trained to perform depth estimation on the RGB image and obtain an estimated depth map without missing depth; the depth missing region in the actual depth map is determined based on the estimated depth map; the regions where both depth maps have depth data are then analyzed to obtain the mapping relation between the estimated depth and the actual depth, and the completion operation is finally performed. Situations in different scenes are considered throughout the process, so the completion effect is better. Moreover, because the training set data are acquired from simulated scenes, large-scale data collection and labeling are not needed, which saves considerable cost, and depth maps of dark light-absorbing materials and reflective surfaces can be effectively completed, which facilitates subsequent tasks.
Example 2
As shown in fig. 3, the present application further provides an image depth completion apparatus, including:
the shooting module 10 is used for acquiring an actual depth map obtained from the current scene;
an estimation module 20, configured to estimate an estimated depth map of the current scene through a pre-trained depth estimation model;
and the completion module 30 is configured to determine a depth missing region of the actual depth map according to the estimated depth map, and complete the actual depth map.
The application also provides a computer device comprising a processor and a memory, the memory storing a computer program which, when run on the processor, performs the image depth completion method.
The present application also provides a readable storage medium storing a computer program which, when run on a processor, performs the image depth completion method.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, of the flow diagrams and block diagrams in the figures, which illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the invention may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.

Claims (10)

1. An image depth completion method, comprising:
acquiring an actual depth map obtained from a current scene;
estimating an estimated depth map of the current scene through a pre-trained depth estimation model;
and determining a depth missing region of the actual depth map according to the estimated depth map, and completing the actual depth map.
2. The image depth completion method of claim 1, wherein determining a depth missing region of the actual depth map from the estimated depth map and completing the actual depth map comprises:
comparing the actual depth map with the estimated depth map, and determining a depth missing region in the actual depth map;
determining an intersection region with depth values of the actual depth map on the estimated depth map, and analyzing the depth values in the intersection region to obtain a mapping relation between the depth values in the estimated depth map and the depth values of the actual depth map;
and completing the depth values in the depth missing region according to the mapping relation.
3. The image depth completion method according to claim 2, wherein the analyzing the depth values in the intersection region to obtain a mapping relationship between the depth values in the estimated depth map and the depth values of the actual depth map includes:
dividing the depth values of all pixels in the intersection region of the estimated depth map into intervals according to a preset resolution, and determining the interval in which each depth value falls;
and acquiring the actual depth map, calculating the average of the depth values within each divided interval, taking each average as the representative value of the corresponding interval, and mapping the depth values of the corresponding pixel points in the estimated depth map according to the intervals and the representative values to obtain a mapping relation.
4. The image depth completion method according to claim 2, wherein the completion of the depth value in the depth missing region according to the mapping relation comprises:
acquiring depth values of pixels in a region corresponding to the depth missing region in the estimated depth map;
and querying the real depth value corresponding to each depth value according to the mapping relation, and overwriting the corresponding pixels in the depth missing region with the real depth values.
5. The image depth completion method of claim 2, wherein the comparing the actual depth map and the estimated depth map to determine a depth missing region in the actual depth map comprises:
and comparing the depth values of pixels at the same positions of the actual depth map and the estimated depth map, and taking the region where the estimated depth map has depth values but the actual depth map does not as the depth missing region.
6. The image depth completion method of claim 1, wherein the training method of the depth estimation model comprises:
constructing simulation scenes, and acquiring RGB images from a plurality of simulation scenes, together with the depth information corresponding to the RGB images, through a camera placed in the simulation scenes, to obtain training set data;
pre-training a depth estimation backbone network of a U-Net structure through general data to obtain a pre-training model, and then inputting the training set data into the pre-training model for training;
and carrying out loss calculation between the depth information recorded in the training set data and the depth information output during training, and adjusting the parameters of the pre-training model to obtain a depth estimation model.
7. The image depth completion method of claim 6, wherein the loss function is expressed as:
d_i = log(y_i) - log(y_i*)
wherein L is the loss value, i is the pixel index, y_i and y_i* represent the paired gray values in the training set data and the corresponding predicted depth map, and n is the total number of pixels in the depth map.
8. An image depth completion apparatus, comprising:
the shooting module is used for acquiring an actual depth map obtained from the current scene;
the estimating module is used for estimating an estimated depth map of the current scene through a pre-trained depth estimation model;
and the completion module is used for determining a depth missing region of the actual depth map according to the estimated depth map and completing the actual depth map.
9. A computer device comprising a processor and a memory, the memory storing a computer program that, when run on the processor, performs the image depth completion method of any one of claims 1 to 7.
10. A readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the image depth completion method of any one of claims 1 to 7.
CN202311849042.8A 2023-12-28 2023-12-28 Image depth completion method, device, computer equipment and storage medium Pending CN117788546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311849042.8A CN117788546A (en) 2023-12-28 Image depth completion method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311849042.8A CN117788546A (en) 2023-12-28 Image depth completion method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117788546A true CN117788546A (en) 2024-03-29

Family

ID=90401569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311849042.8A Pending CN117788546A (en) Image depth completion method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117788546A (en)

Similar Documents

Publication Publication Date Title
EP3712841A1 (en) Image processing method, image processing apparatus, and computer-readable recording medium
CN109871895B (en) Method and device for detecting defects of circuit board
US20100201880A1 (en) Shot size identifying apparatus and method, electronic apparatus, and computer program
EP3352138A1 (en) Method and apparatus for processing a 3d scene
CN110197180B (en) Character defect detection method, device and equipment
CN112330593A (en) Building surface crack detection method based on deep learning network
US11651581B2 (en) System and method for correspondence map determination
CN111242026A (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN111723634A (en) Image detection method and device, electronic equipment and storage medium
CN113808121A (en) Yarn sub-pixel level diameter measurement method and system
CN113192013A (en) Method and system for detecting defects of light-reflecting surface and electronic equipment
CN114078127B (en) Object defect detection and counting method, device, equipment and storage medium
CN116228684A (en) Battery shell appearance defect image processing method and device
CN112204957A (en) White balance processing method and device, movable platform and camera
CN113435296A (en) Method, system, storage medium and elevator for detecting foreign matters based on rotated-yolov5
CN112348762A (en) Single image rain removing method for generating confrontation network based on multi-scale fusion
CN115294035B (en) Bright spot positioning method, bright spot positioning device, electronic equipment and storage medium
CN117036442A (en) Robust monocular depth completion method, system and storage medium
CN117788546A (en) Image depth completion method, device, computer equipment and storage medium
CN116664829A (en) RGB-T semantic segmentation method, system, device and storage medium
CN114022367A (en) Image quality adjusting method, device, electronic equipment and medium
CN111598943B (en) Book in-place detection method, device and equipment based on book auxiliary reading equipment
CN110619677B (en) Method and device for reconstructing particles in three-dimensional flow field, electronic equipment and storage medium
CN111353991A (en) Target detection method and device, electronic equipment and storage medium
CN105635596A (en) System for controlling exposure of camera and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination