CN113837943A — Image processing method and device, electronic equipment and readable storage medium

- Publication number: CN113837943A
- Application number: CN202111144237.3A
- Authority: CN (China)
- Prior art keywords: target, processed, depth information, determining, interpolation model
- Legal status: Pending (an assumed status, not a legal conclusion)
Classifications
- G06T7/00 Image analysis → G06T7/50 Depth or shape recovery
- G06T15/00 3D [Three Dimensional] image rendering → G06T15/04 Texture mapping
- G06T3/40 Scaling of whole images or parts thereof → G06T3/4007 Scaling based on interpolation, e.g. bilinear interpolation
- G06T7/10 Segmentation; Edge detection → G06T7/13 Edge detection
Abstract
The invention provides an image processing method and apparatus, an electronic device, and a readable storage medium. The method includes: determining a to-be-processed hole region in a target depth map and the depth information of the target pixel points corresponding to that region, where no pixel point in the hole region has depth information and each target pixel point is adjacent to an edge pixel point of the region; determining, according to the depth difference value between the target pixel points, whether the target interpolation model corresponding to the region is a linear interpolation model or a nonlinear interpolation model; and determining the depth information of each pixel point based on the determined target interpolation model. Because the method accounts for the scene a hole region represents and selects the interpolation mode from the depth difference value, it can fill point cloud holes, repair point cloud loss caused by occlusion, and complete elevation information, so it applies flexibly to both planar scenes and scenes with large elevation changes.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a readable storage medium.
Background
In recent years, with the rapid development of three-dimensional image scanning technology, the advantages of the point cloud model have become increasingly evident. Graphics research that takes point clouds as its object is attracting growing attention, and point clouds are widely used in medicine, modern satellite remote sensing measurement, multimedia, machine vision, intelligent monitoring, three-dimensional reconstruction, somatosensory interaction, 3D printing, and other fields.
In visual dense three-dimensional reconstruction, occlusion, weak texture, and repeated texture on the measured object leave point cloud holes in its point cloud data. The current solution fills such holes by linear interpolation. However, linear interpolation works well only for planar scenes; it helps little in regions with large elevation changes and cannot complete ground features such as trees or telegraph poles.
Providing an interpolation scheme that adapts flexibly to different scenes to repair point cloud holes is therefore an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide an image processing method, an image processing device, electronic equipment and a readable storage medium, which are used for realizing the effect of flexibly selecting different interpolation modes for different scenes to repair point cloud cavities.
Embodiments of the invention may be implemented as follows:
in a first aspect, the present invention provides an image processing method, including: determining a to-be-processed void region in a target depth map and depth information of target pixel points corresponding to the to-be-processed void region; each pixel point in the hole area to be processed has no depth information; the target pixel point is adjacent to the edge pixel point of the to-be-processed cavity area; determining whether a target interpolation model corresponding to the to-be-processed cavity region is a linear interpolation model or a nonlinear interpolation model according to the depth difference value between the target pixel points; and determining depth information corresponding to each pixel point based on the determined target interpolation model.
In a second aspect, the present invention provides an image processing apparatus, including: a determining module configured to determine a to-be-processed hole region in a target depth map and the depth information of the target pixel points corresponding to that region, where the pixel points in the hole region have no depth information and each target pixel point is adjacent to an edge pixel point of the region; the determining module being further configured to determine, according to the depth difference value between the target pixel points, whether the target interpolation model corresponding to the region is a linear interpolation model or a nonlinear interpolation model; and an interpolation module configured to determine the depth information of each pixel point in the region based on the determined target interpolation model.
In a third aspect, the present invention provides an electronic device comprising a processor and a memory storing computer readable instructions, the processor being configured to perform the image processing method according to the first aspect when executing the computer readable instructions.
In a fourth aspect, the present invention provides a readable storage medium having stored thereon computer readable instructions executable by a processor to implement the image processing method according to the first aspect.
The invention provides an image processing method and apparatus, an electronic device, and a readable storage medium. The method first determines a to-be-processed hole region in a target depth map, then determines from the depth difference value between the region's target pixel points whether the linear or the nonlinear interpolation model applies, and finally determines the depth information of each pixel point in the region from the chosen target interpolation model. The prior art, by contrast, completes the depth information of hole regions only by linear interpolation and ignores the scene each hole region represents, so it interpolates well only in planar scenes and is unsuited to scenes in the target depth map with large elevation changes. By accounting for the scene a hole region represents and selecting the interpolation mode flexibly from the depth difference value, the method applies to both planar scenes and scenes with large elevation changes: it can fill point cloud holes, repair point cloud loss caused by occlusion, complete elevation information, and increase the completeness of ground feature recovery.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present application;
FIG. 2 is a diagram illustrating an example of a target depth map according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram of one implementation of step S106 provided by an embodiment of the present application;
FIG. 4 is a coordinate system provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of an implementation manner of step S109 provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of an implementation manner of step S103 provided in an embodiment of the present application;
FIG. 7 is a schematic flow chart diagram of another image processing method provided in the embodiments of the present application;
fig. 8 is a scene schematic diagram for obtaining a target depth map according to an embodiment of the present disclosure;
fig. 9 is a functional block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that if the terms "upper", "lower", "inside", "outside", etc. indicate an orientation or a positional relationship based on that shown in the drawings or that the product of the present invention is used as it is, this is only for convenience of description and simplification of the description, and it does not indicate or imply that the device or the element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present invention.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
A point cloud is a set of a large number of points sampled from the surface characteristics of a target; it can be acquired with a scanner or by three-dimensional measurement. In recent years, graphics research that takes point clouds as its object has attracted growing attention, and point clouds are widely used in medicine, modern satellite remote sensing measurement, multimedia, machine vision, automatic driving, intelligent monitoring, three-dimensional reconstruction, somatosensory interaction, 3D printing, and other fields. In actual measurement and data acquisition, the captured three-dimensional point cloud contains noise, and local occlusion of the sample, sample damage, weak texture, repeated texture, and the limits of the measurement means cause data loss: depth information is incomplete and holes appear, which greatly complicates later data processing, point cloud modeling, and structural information extraction.
Point cloud hole repair is therefore an important step in point cloud data preprocessing, and providing a repair method whose repaired surface is closest to the actual point cloud surface is a current research focus. Because point cloud data and depth images are interconvertible, the related art repairs point cloud holes by applying linear interpolation to the depth image: a linear equation is determined to complete the depth information at the hole, thereby filling it.
Therefore, the embodiment of the invention provides an image processing method which can be flexibly suitable for a plane scene and a scene with large elevation fluctuation, can fill in point cloud holes, repair point cloud loss caused by a shielding problem, complete elevation information and increase ground object recovery integrity.
Referring to fig. 1, fig. 1 is a diagram illustrating an image processing method according to an embodiment of the present application, where the method includes:
s103, determining a to-be-processed void region in the target depth map and depth information of target pixel points corresponding to the to-be-processed void region.
And each pixel point in the to-be-processed cavity area has no depth information, and the target pixel point is adjacent to the edge pixel point of the to-be-processed cavity area.
In this embodiment, the target depth map may be a depth image stored in advance, or a depth map obtained by processing an image of an object or an area acquired in any scene, where the scene may be, but is not limited to, automatic driving, unmanned aerial vehicle aerial survey, Virtual Reality (Virtual Reality), and the like, and the target depth map may also be a depth image generated at random, which is not limited herein.
It should be understood that, to avoid precision loss in the point cloud data caused by over-fitting, the to-be-processed hole region in this embodiment is a hole region that satisfies a processing condition. The target depth map may contain many hole regions, but only those satisfying the processing condition will have their depth information completed.

In an embodiment, the processing condition is determined from the depth difference between the target pixel points corresponding to the hole region, where the target pixel points are the two pixels adjacent to the two end points of the to-be-processed hole region. In one possible implementation, when the depth difference value between the two target pixel points is greater than 3 meters, the hole region is not completed; when it is 3 meters or less, its depth information is completed.

The 3-meter threshold above is only an example; the user can define the threshold as needed to obtain the to-be-processed hole regions, as the sketch below illustrates.
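As a minimal illustration, the processing condition can be expressed as a single predicate. The following Python sketch assumes the 3-meter example threshold from this embodiment; the function and variable names are illustrative assumptions, not from the patent text.

```python
# Hedged sketch of the processing condition: a hole region qualifies for
# completion only if its two bounding target pixels are close in depth.
MAX_DEPTH_GAP = 3.0  # metres; example threshold from this embodiment

def is_processable_hole(d_left: float, d_right: float,
                        max_gap: float = MAX_DEPTH_GAP) -> bool:
    """True if the hole bounded by target-pixel depths d_left and d_right
    should have its depth information completed."""
    return abs(d_left - d_right) <= max_gap
```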
And S106, determining whether the target interpolation model corresponding to the cavity area to be processed is a linear interpolation model or a nonlinear interpolation model according to the depth difference value between the target pixel points.
In this embodiment, in order to flexibly fill up the depth information in the cavity region in different manners according to different scenes, after the depth information of the target pixel point of the cavity region to be processed is determined, a depth difference value may be calculated, and it is determined whether the cavity region to be processed adopts a linear interpolation manner or a nonlinear interpolation manner according to the depth difference value.
It can be understood that linear interpolation suits planar scenes while nonlinear interpolation suits scenes with large elevation changes. In this embodiment, therefore, the depth difference value indicates whether the current to-be-processed hole region corresponds to a planar scene or to a scene with large elevation changes, and depth information is then completed with the matching interpolation mode, flexibly adapting the completion to the scene.
In an embodiment, the user may define the condition of the depth difference value corresponding to the planar scene or the scene with large elevation fluctuation according to an empirical value or a requirement, which is not limited herein.
And S109, determining depth information corresponding to each pixel point based on the determined target interpolation model.
In this embodiment, if the determined target interpolation model is the linear interpolation model, the depth information of each pixel point in the to-be-processed hole region is determined by linear interpolation; if it is the nonlinear interpolation model, by nonlinear interpolation. That is, for a given target depth map, hole repair may require only the linear interpolation model, only the nonlinear interpolation model, or both.
To ease understanding of the above process, refer to Fig. 2, a scene example of a target depth map provided in an embodiment of the present application. In Fig. 2, each grid cell represents a pixel point: gray cells have depth information, white cells do not; (u, v) denotes a pixel point's position coordinates in the target depth map, and d is its depth information.

Suppose the region formed by the two pixel points (u2, v2) and (u3, v3) in the dashed box is the to-be-processed hole region, the target pixel points are (u0, v0) and (u1, v1), and their depth information is d0 and d1 respectively. Whether the hole region uses the linear or the nonlinear interpolation model is determined from the difference between d0 and d1: if the hole region corresponds to a planar scene, its target interpolation model is the linear interpolation model; if it corresponds to a scene with large elevation changes, its target interpolation model is the nonlinear interpolation model. The depth information of the pixel points (u2, v2) and (u3, v3) can then be determined from the target interpolation model, so the depth information is completed flexibly.
The image processing method provided by this embodiment of the invention first determines a to-be-processed hole region in a target depth map, then decides from the depth difference value between the region's target pixel points whether the linear or the nonlinear interpolation model applies, and finally determines the depth information of each pixel point in the region from the chosen target interpolation model. This differs from the prior art, which completes the depth information of hole regions only by linear interpolation and ignores the scene each hole region represents, so that it interpolates well only in planar scenes and is unsuited to scenes with large elevation changes. By accounting for the scene a hole region represents and selecting the interpolation mode from the depth difference value, the method applies flexibly to both planar scenes and scenes with large elevation changes: it can fill point cloud holes, repair point cloud loss caused by occlusion, complete elevation information, and increase the completeness of ground feature recovery.
It should further be noted that, since the target depth map may contain multiple to-be-processed hole regions, the image processing method of this embodiment may proceed in either of two ways. First: after one to-be-processed hole region is determined, complete its depth information, then determine the next region, and so on until no to-be-processed region remains in the target depth map. Second: determine all to-be-processed hole regions in the target depth map first, then complete the depth information of each one.
Optionally, for the target interpolation type that is applicable to the determination of the hole region to be processed in step S106, this embodiment provides a possible implementation manner, please refer to fig. 3, and fig. 3 is a schematic flowchart of an implementation manner of step S106 provided in this embodiment of the present application, where step S106 may include the following sub-steps:
s106-1, if the depth difference value is within a first preset range, determining that the target interpolation model is a linear interpolation model.
And S106-2, if the depth difference value is within the second preset range, that is, greater than the upper limit value of the first preset range and at most the upper limit value of the second preset range, determining that the target interpolation model is a nonlinear interpolation model.
The upper limit value of the first preset range is the lower limit value of the second preset range.
In this embodiment, the first and second preset ranges may be determined from empirical values or defined by the user. In one embodiment, the first preset range is 0 to 0.5 meters and the second preset range is 0.5 to 3 meters. That is, when the depth difference value is between 0 and 0.5 meters inclusive, the scene corresponding to the to-be-processed hole region is regarded as planar and suits the linear interpolation model; when it is greater than 0.5 meters and at most 3 meters, the scene is regarded as having large elevation changes and suits the nonlinear interpolation model. A minimal sketch of this selection follows.
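The following Python sketch encodes the example thresholds above (0.5 m and 3 m); the function name, return values, and the explicit "skip" case for holes failing the processing condition are illustrative assumptions.

```python
# Hedged sketch of step S106: choose the interpolation model from the
# depth difference between the two target pixels of a hole region.
LINEAR_UPPER = 0.5     # upper limit of the first preset range (metres)
NONLINEAR_UPPER = 3.0  # upper limit of the second preset range (metres)

def select_interpolation_model(d0: float, d1: float) -> str:
    gap = abs(d0 - d1)
    if gap <= LINEAR_UPPER:
        return "linear"      # planar scene: first-order Bezier model
    if gap <= NONLINEAR_UPPER:
        return "nonlinear"   # large elevation change: second-order Bezier
    return "skip"            # fails the processing condition; leave the hole
```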
For step S106-1, after the target interpolation model is determined to be the linear interpolation model, the model can be constructed from the position information and depth information of the target pixel points.

In this embodiment, the linear interpolation model may be a first-order Bézier curve built from the position information and depth information of the two target pixel points.
Continuing with the scene shown in Fig. 2, a two-dimensional Cartesian coordinate system is first constructed along the abscissa direction of the pixel points (the ordinate direction is handled in the same way), with depth information on the Y axis and the pixel abscissa (or ordinate) on the X axis. The linear interpolation model is set as a first-order Bézier curve on which all pixel points of the to-be-processed hole region lie.
Let the target pixel points be (u0, v0) with depth information d0 and (u1, v1) with depth information d1, and let (u, v) be a pixel point in the to-be-processed hole region. From the first-order Bézier curve, the relation model between (u, v), the depth information change rate t, and the depth information is:

u = (1 − t)·u0 + t·u1,  d = (1 − t)·d0 + t·d1,  t ∈ (0, 1)

where u and d are the abscissa and depth information of a pixel point in the hole region, u0 and d0 are the abscissa and depth information of the target pixel point (u0, v0), and u1 and d1 those of the target pixel point (u1, v1); u0, d0, u1 and d1 are all known. Once the depth information change rate for each pixel point is determined, substituting its abscissa and change rate into the relation model yields its depth information d.
It should be noted that the Bézier curve is parameterized by the depth information change rate t, which can be interpreted as the normalized step along the path from a target pixel point into the to-be-processed hole region. A minimal evaluation sketch follows.
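A minimal Python sketch of the first-order Bézier (linear) depth model reconstructed above; the function name is an assumption.

```python
# First-order Bezier depth interpolation: d(t) = (1 - t) * d0 + t * d1.
def linear_depth(d0: float, d1: float, t: float) -> float:
    """Depth of a hole pixel with change rate t in (0, 1), given the
    depths d0 and d1 of the two bounding target pixels."""
    return (1.0 - t) * d0 + t * d1
```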
Similarly, for step S106-2, after the target interpolation model is determined to be the nonlinear interpolation model, the nonlinear interpolation model can be constructed from the position information and depth information of the target pixel points.
In this embodiment, the nonlinear interpolation model may be a second-order Bézier curve. It is constructed by determining the position information and depth information of a reference coordinate point from the position information and depth information of one of the target pixel points, and then building the nonlinear interpolation model from the position and depth information of the target pixel points and the reference coordinate point.
Continuing with the scene graph shown in fig. 2 as an example, a two-dimensional cartesian coordinate system is first constructed according to the abscissa direction or the ordinate direction of the pixel point, as shown in fig. 4, fig. 4 is a coordinate system provided in the embodiment of the present application, where the depth information is used as the Y axis, and the abscissa or the ordinate of the pixel is used as the X axis.
Let the nonlinear interpolation model be a second-order Bézier curve. The target pixel points (u0, v0) and (u1, v1) correspond to coordinate points P0(u0, d0) and P1(u1, d1) respectively; the reference coordinate point is assumed to be (u3, v3) with depth information d3, corresponding to coordinate point P3(u3, d3). Every pixel point in the to-be-processed hole region lies on the second-order Bézier curve defined by P0, P1 and P3; for example, the pixel point (u2, v2) in the hole region corresponds to coordinate point P2(u2, d2).
From the characteristics of the second-order Bézier curve, the relation model between (u, v), the depth information change rate t, and the depth information is:

u = (1 − t)²·u0 + 2t(1 − t)·u3 + t²·u1,  d = (1 − t)²·d0 + 2t(1 − t)·d3 + t²·d1,  t ∈ (0, 1)

where u and d are the abscissa and depth information of a pixel point in the hole region; u0, d0 and u1, d1 are the abscissa and depth information of the target pixel points (u0, v0) and (u1, v1); u3 and d3 are the abscissa and depth information of the reference coordinate point; all of u0, d0, u1, d1, u3 and d3 are known. Once the depth information change rate for each pixel point is determined, substituting its abscissa and change rate into the relation model yields its depth information d.
In an embodiment, the reference coordinate point is determined according to the curvature required of the curve. To better restore ground features such as telegraph poles and trees, the target pixel point with the larger depth value is used as the reference for the depth of the reference coordinate point: for example, if d0 is smaller than d1, then u3 = u0 − 1 is determined from the abscissa u0 of the target pixel point, and d3 = d1 − 1 from the depth information d1 of the target pixel point (u1, v1); (u3, d3) can be further adjusted according to empirical values, as sketched below.
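A minimal Python sketch of the second-order Bézier depth model and the reference-point rule above. The −1 offsets follow the example in the text; treat them, and the function names, as tunable assumptions.

```python
# Second-order Bezier depth interpolation with control depth d3:
# d(t) = (1 - t)^2 * d0 + 2 * t * (1 - t) * d3 + t^2 * d1.
def quadratic_depth(d0: float, d1: float, d3: float, t: float) -> float:
    return (1.0 - t) ** 2 * d0 + 2.0 * t * (1.0 - t) * d3 + t ** 2 * d1

def reference_depth(d0: float, d1: float) -> float:
    # Reference the larger boundary depth, per the embodiment; the -1
    # offset is the example value and may be adjusted empirically.
    return max(d0, d1) - 1.0
```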
Optionally, with the linear interpolation model or nonlinear interpolation model constructed as above, the depth information of each pixel point in the to-be-processed hole region can be obtained by the steps shown in Fig. 5. Referring to Fig. 5, a schematic flowchart of an implementation of step S109 provided in this embodiment of the present application, step S109 may include:
s109-1, determining the depth information change rate corresponding to each pixel point; all the depth information change rates are distributed in a preset interval.
As the model construction above shows, the depth information change rate is the parameter t in the model, uniformly distributed in a preset interval, namely (0, 1). In a possible implementation, the change rate corresponding to each pixel point is determined as follows: first determine the number of pixel points in the to-be-processed hole region, then divide the interval (0, 1) evenly according to that number, so that each pixel point is assigned one value of the depth information change rate.
For example, if the to-be-processed hole region contains 3 pixel points, dividing the interval (0, 1) evenly yields the 3 values 0.25, 0.5 and 0.75, which correspond in order to the values of t for the 3 pixel points in the region, as in the sketch below.
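A one-line Python sketch of this even division of (0, 1); the function name is an assumption.

```python
# Evenly spaced change rates strictly inside (0, 1) for n hole pixels;
# e.g. change_rates(3) == [0.25, 0.5, 0.75].
def change_rates(n: int) -> list[float]:
    return [k / (n + 1) for k in range(1, n + 1)]
```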
S109-2, aiming at each pixel point, inputting the position information and the depth information change rate of the pixel point into a target interpolation model to obtain the depth information of each pixel point.
For example, with reference to Fig. 2, assume the target interpolation model is the linear interpolation model. The pixel point (u2, v2) in the to-be-processed hole region has known abscissa u2 and a change rate t of 0.333; substituting u2 and 0.333 into the linear interpolation model yields d, the depth information of (u2, v2). Likewise, substituting the known abscissa u7 of the pixel point (u7, v7) and its value of t into the model yields the depth information of (u7, v7). If the target interpolation model is the nonlinear interpolation model, the depth information of (u2, v2) and (u7, v7) is obtained in the same way, which is not repeated here.
Optionally, as can be seen from the above, there may be a plurality of void regions that do not satisfy the processing condition in the target depth map, so an implementation manner of how to screen out a void region to be processed in the target depth map is further provided below, please refer to fig. 6, where fig. 6 is a schematic flowchart of an implementation manner of step S103 provided in this embodiment of the present application, and step S103 may include:
s103-1, traversing the target depth map, and determining all the hole areas and the depth information of the target pixel point corresponding to each hole area.
S103-2, aiming at the target cavity area, if the depth difference value of target pixel points corresponding to the target cavity area is smaller than or equal to the upper limit value of a second preset range, determining the target cavity area as a cavity area to be processed; the target cavity area is any one of all the cavity areas;
s103-3, traversing all the hole areas to obtain all the hole areas to be processed.
In this embodiment, S103-1 may be implemented as follows:

Step 1: traverse the target depth map row by row, determine each region formed by connected pixel points with missing depth information within a row as a hole region, and determine the pixel points adjacent to the edge pixel points of that region along the row direction as its target pixel points.

Step 2: traverse the target depth map column by column, determine each region formed by connected pixel points with missing depth information within a column as a hole region, and determine the pixel points adjacent to the edge pixel points of that region along the column direction as its target pixel points.
For example, continuing with the scene in Fig. 2, the target depth map has 4 rows and 5 columns. Traversing by rows: the first row contains no hole region; in the second row, the region formed by the pixel points (u2, v2) and (u7, v7) is a hole region, and its target pixel points are (u0, v0), adjacent to (u2, v2), and (u1, v1), adjacent to (u7, v7); in the third row, the pixel point (u8, v8) forms a hole region whose target pixel points are the adjacent (u5, v5) and (u6, v6); the remaining rows are similar. The target depth map is then traversed by columns, where hole regions and target pixel points are determined on the same principle, which is not repeated here. A row-scan sketch follows.
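A minimal Python sketch of the row-wise scan in step 1, assuming missing depth is encoded as 0 (the encoding is not specified in the patent); names are illustrative. The column-wise scan of step 2 is symmetric (apply the same function to the transposed map).

```python
import numpy as np

def find_row_holes(depth: np.ndarray):
    """Return (row, col_start, col_end, d_left, d_right) for each run of
    missing-depth pixels in a row, col_end exclusive; only runs bounded
    by valid target pixels on both sides are kept."""
    holes = []
    for r in range(depth.shape[0]):
        c = 0
        while c < depth.shape[1]:
            if depth[r, c] == 0:          # missing depth (assumed encoding)
                start = c
                while c < depth.shape[1] and depth[r, c] == 0:
                    c += 1
                if start > 0 and c < depth.shape[1]:
                    holes.append((r, start, c,
                                  depth[r, start - 1], depth[r, c]))
            else:
                c += 1
    return holes
```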
It should be noted that steps 1 and 2 have no fixed execution order: step 1 may be executed before step 2, or step 2 before step 1.
It should further be noted that a hole region that fails the processing condition during row traversal may still satisfy it during column traversal, because for the same hole region the target pixel points found by row traversal differ from those found by column traversal. In that case the region is still determined to be a to-be-processed hole region, and its depth information must be completed.
For example, continuing with the scene in Fig. 2, consider the hole region formed by the pixel point (u8, v8). During row traversal its target pixel points are (u5, v5) and (u6, v6) with depth information d5 and d6; if the depth difference between d5 and d6 is greater than 3 meters, the region is not completed in that pass. During column traversal, however, its target pixel points are (u0, v0) and (u9, v9) with depth information d0 and d9; if the depth difference between d0 and d9 is less than 3 meters, the region's depth information is still completed.
Determining the to-be-processed hole regions in the target depth map by traversing both rows and columns prevents regions from being missed because of an unsuitable threshold, ensures that every to-be-processed hole region has its depth information completed, and improves the completeness of ground feature recovery.
Optionally, before performing the above steps S103 to S109, this embodiment further provides an implementation manner of obtaining a target depth map, please refer to fig. 7, where fig. 7 is a schematic flowchart of another image processing method provided in this embodiment of the present application, and the method may further include:
s100, controlling the unmanned aerial vehicle to fly according to a flight path, and executing image acquisition operation on the detected object at a plurality of waypoint positions to obtain a plurality of images to be processed, wherein the object overlapping degree between each image to be processed is greater than a preset threshold value.
And S102, carrying out depth information optimization processing on the multiple images to be processed to obtain a target depth map.
The unmanned aerial vehicle may be, but is not limited to, a remote sensing UAV. The flight route may be pre-stored in the UAV's memory, transmitted to the UAV by other equipment before the flight task is executed, or generated in real time according to the measured object; this is not limited here.
The shooting scene may be as shown in Fig. 8, a scene diagram for obtaining a target depth map. Following the route shown in Fig. 8, as the UAV flies from point A to point D it acquires data of the measured object at each waypoint; the waypoints are placed so that the UAV captures a series of images with a certain degree of overlap. A visual positioning algorithm and a binocular stereo matching algorithm then recover the depth map of each frame, and the resulting depth maps are denoised to obtain the final target depth maps, which often contain holes.
Optionally, in a possible implementation, the interpolated target depth map is reconstructed in three dimensions to obtain point cloud data.

It can be understood that, in this embodiment of the invention, the point cloud data obtained from the interpolated target depth map no longer contains hole regions, which eases later data processing, point cloud modeling, structural information extraction and the like, and facilitates the wide application of point cloud data in various fields. A back-projection sketch follows.
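A minimal Python sketch of depth-map back-projection to a point cloud under a pinhole camera model; the patent does not specify the camera model, so the intrinsics (fx, fy, cx, cy) and the function name are assumptions.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project every pixel that carries depth into camera coordinates."""
    v, u = np.nonzero(depth)            # pixels that now have depth
    d = depth[v, u]
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.stack([x, y, d], axis=1)  # N x 3 point cloud
```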
Referring to Fig. 9, a functional block diagram of an image processing apparatus according to an embodiment of the present application: it should be noted that the basic principle and technical effects of this apparatus are the same as those of the foregoing method embodiment; for brevity, refer to the corresponding content of the method embodiment for anything not mentioned here. The image processing apparatus 20 includes:
the determining module 201 is configured to determine a to-be-processed void region in a target depth map and depth information of a target pixel point corresponding to the to-be-processed void region;
wherein, the pixel points in the hole area to be processed have no depth information; the target pixel point is adjacent to the edge pixel point of the to-be-processed cavity area;
the determining module 201 is further configured to determine, according to the depth difference value between the target pixel points, that a target interpolation model corresponding to the to-be-processed void region is a linear interpolation model or a nonlinear interpolation model;
and the interpolation module 202 is configured to determine depth information corresponding to each pixel point in the to-be-processed cavity region based on the determined target interpolation model.
Optionally, the determining module 201 is specifically configured to determine that the target interpolation model is the linear interpolation model if the depth difference value is within the first preset range, and that it is the nonlinear interpolation model if the depth difference value is within the second preset range.
Optionally, the interpolation module 202 is specifically configured to determine a depth information change rate corresponding to each pixel point; all the depth information change rates are distributed in a preset interval; and inputting the position information and the depth information change rate of each pixel point into the target interpolation model to obtain the depth information of each pixel point.
Optionally, the image processing apparatus 20 further includes a construction module configured to construct the linear interpolation model from the position information and depth information of the target pixel points, and, for the nonlinear case, to determine the position information and depth information of a reference coordinate point from the position information and depth information of one of the target pixel points and construct the nonlinear interpolation model from the position and depth information of the target pixel points and the reference coordinate point.
Optionally, the determining module 201 is further specifically configured to traverse the target depth map and determine all hole regions and the depth information of the target pixel points corresponding to each hole region; for a target hole region, which is any one of all the hole regions, determine it as a to-be-processed hole region if the depth difference value of its target pixel points is less than or equal to the upper limit value of the second preset range; and traverse all hole regions to obtain all to-be-processed hole regions.
Optionally, the determining module 201 is further specifically configured to traverse the target depth map row by row, determining each region formed by connected pixel points with missing depth information in a row as a hole region and the adjacent pixel points of its edge pixel points along the row direction as the target pixel points; and to traverse the target depth map column by column, determining each region formed by connected pixel points with missing depth information in a column as a hole region and the adjacent pixel points of its edge pixel points along the column direction as the target pixel points.
Optionally, the image processing apparatus 20 further includes an obtaining module, configured to control the unmanned aerial vehicle to fly according to a flight path, and perform an image obtaining operation on the object to be tested at a plurality of waypoint positions to obtain a plurality of images to be processed, where an object overlap degree between each image to be processed is greater than a preset threshold; and performing depth information optimization processing on the multiple images to be processed to obtain the target depth map.
The image processing apparatus provided by this embodiment of the invention includes a determining module and an interpolation module. The determining module determines a to-be-processed hole region in a target depth map and then determines, from the depth difference value between the region's target pixel points, whether the linear or the nonlinear interpolation model applies; the interpolation module then determines the depth information of each pixel point in the region from the chosen target interpolation model. The prior art, by contrast, completes the depth information of hole regions only by linear interpolation and ignores the scene each hole region represents, so it interpolates well only in planar scenes and is unsuited to scenes in the target depth map with large elevation changes. By accounting for the scene a hole region represents and selecting the interpolation mode flexibly from the depth difference value, the apparatus applies to both planar scenes and scenes with large elevation changes: it can fill point cloud holes, repair point cloud loss caused by occlusion, complete elevation information, and increase the completeness of ground feature recovery.
An embodiment of the present invention further provides an electronic device, as shown in fig. 10, and fig. 10 is a block diagram of a structure of an electronic device according to an embodiment of the present invention. The electronic device may be, but is not limited to, an aircraft, an unmanned aerial vehicle, other electronic device with data analysis functionality.
The electronic device 80 comprises a communication interface 81, a processor 82 and a memory 83. The processor 82, memory 83 and communication interface 81 are electrically connected to each other, directly or indirectly, to enable the transfer or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 83 may be used for storing software programs and modules, such as program instructions/modules corresponding to the image processing method provided by the embodiment of the present invention, and the processor 82 executes various functional applications and data processing by executing the software programs and modules stored in the memory 83. The communication interface 81 can be used for communicating signaling or data with other node devices. The electronic device 80 may have a plurality of communication interfaces 81 in the present invention.
The memory 83 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 82 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), etc.; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc.
Alternatively, the modules may be stored in the memory shown in Fig. 10 in the form of software or firmware, or be fixed in the operating system (OS) of the electronic device, and may be executed by the processor in Fig. 10. Data and program code required to execute the modules may also be stored in the memory.
An embodiment of the present invention provides a readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the image processing method according to any one of the foregoing embodiments. The readable storage medium can be, but is not limited to, various media that can store program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a PROM, an EPROM, an EEPROM, a magnetic or optical disk, etc.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof, which essentially contributes to the prior art, can be embodied in the form of a software product, which is stored in a readable storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned readable storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (12)
1. An image processing method, characterized in that the method comprises:
determining a to-be-processed void region in a target depth map and depth information of target pixel points corresponding to the to-be-processed void region;
each pixel point in the hole area to be processed has no depth information; the target pixel point is adjacent to the edge pixel point of the to-be-processed cavity area;
determining whether a target interpolation model corresponding to the to-be-processed cavity region is a linear interpolation model or a nonlinear interpolation model according to the depth difference value between the target pixel points;
and determining depth information corresponding to each pixel point based on the determined target interpolation model.
2. The image processing method according to claim 1, wherein determining whether the target interpolation model corresponding to the void region to be processed is a linear interpolation model or a non-linear interpolation model according to the depth difference value between the target pixel points comprises:
if the depth difference value is within a first preset range, determining the target interpolation model as the linear interpolation model;
and if the depth difference value is within a second preset range, determining that the target interpolation model is the nonlinear interpolation model, wherein the upper limit value of the first preset range is the lower limit value of the second preset range.
3. The image processing method according to claim 1 or 2, wherein determining the depth information corresponding to each pixel point according to the target interpolation model comprises:
determining the depth information change rate corresponding to each pixel point; all the depth information change rates are distributed in a preset interval;
and inputting the position information and the depth information change rate of each pixel point into the target interpolation model to obtain the depth information of each pixel point.
4. The image processing method according to claim 2, wherein after determining the target interpolation model as the linear interpolation model, the method further comprises:
and constructing the linear interpolation model according to the position information and the depth information of the target pixel point.
5. The image processing method according to claim 2, wherein after determining that the target interpolation model is the nonlinear interpolation model, the method further comprises:
determining position information and depth information of a reference coordinate point according to the position information and the depth information of the target pixel points; and
constructing the nonlinear interpolation model based on the respective position information and depth information of the target pixel points and the reference coordinate point.
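The patent does not say how the reference coordinate point is derived from the target pixel points. A minimal sketch, under the assumption that it sits at the hole midpoint with a depth biased toward the nearer boundary, fits a quadratic through the three points:

```python
# Sketch of claim 5. The reference-point rule (midpoint position,
# depth biased toward the nearer boundary) is an assumption.
import numpy as np

def build_nonlinear_model(p0, d0, p1, d1):
    pr = (p0 + p1) / 2.0                     # reference position: midpoint (assumed)
    dr = min(d0, d1) + 0.25 * abs(d1 - d0)   # reference depth (assumed)
    coeffs = np.polyfit([p0, pr, p1], [d0, dr, d1], deg=2)  # exact quadratic through 3 points
    return np.poly1d(coeffs)

g = build_nonlinear_model(p0=1, d0=1.0, p1=5, d1=3.0)
print([round(float(g(p)), 3) for p in (2, 3, 4)])
```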
6. The image processing method according to claim 2, wherein determining the void region to be processed in the target depth map comprises:
traversing the target depth map, and determining all void regions and the depth information of the target pixel points corresponding to each void region;
for a target void region, if the depth difference value of the target pixel points corresponding to the target void region is smaller than or equal to the upper limit value of the second preset range, determining the target void region as a void region to be processed, wherein the target void region is any one of all the void regions; and
traversing all the void regions to obtain all the void regions to be processed.
7. The image processing method according to claim 6, wherein traversing the target depth map and determining all the void regions and the depth information of the target pixel points corresponding to each void region comprises:
traversing the target depth map row by row, determining a region formed by connected pixel points lacking depth information in each row as a void region, and determining the pixel points adjacent to the edge pixel points of the void region along the row direction as the target pixel points; and
traversing the target depth map column by column, determining a region formed by connected pixel points lacking depth information in each column as a void region, and determining the pixel points adjacent to the edge pixel points of the void region along the column direction as the target pixel points.
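One way to realize the two-pass traversal of claims 6 and 7 is a run-length scan of every row and then every column, collecting each hole run together with its boundary (target) pixels. The helper names below are assumptions:

```python
# Sketch of claims 6-7: row-by-row and column-by-column run-length
# scan for holes (NaN = missing depth) and their boundary pixels.
import numpy as np

def hole_runs_1d(vec):
    """Return (start, end) index pairs of maximal NaN runs in a 1-D array."""
    runs, start = [], None
    for k, v in enumerate(vec):
        if np.isnan(v) and start is None:
            start = k
        elif not np.isnan(v) and start is not None:
            runs.append((start, k - 1))
            start = None
    if start is not None:
        runs.append((start, len(vec) - 1))
    return runs

def find_holes(depth: np.ndarray):
    holes = []
    for r in range(depth.shape[0]):                  # row-by-row pass
        for s, e in hole_runs_1d(depth[r]):
            left = depth[r, s - 1] if s > 0 else None
            right = depth[r, e + 1] if e + 1 < depth.shape[1] else None
            holes.append(("row", r, s, e, left, right))
    for c in range(depth.shape[1]):                  # column-by-column pass
        for s, e in hole_runs_1d(depth[:, c]):
            top = depth[s - 1, c] if s > 0 else None
            bottom = depth[e + 1, c] if e + 1 < depth.shape[0] else None
            holes.append(("col", c, s, e, top, bottom))
    return holes

dm = np.array([[1.0, np.nan, 1.2], [1.1, np.nan, 1.3]])
for h in find_holes(dm):
    print(h)
```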
8. The image processing method according to claim 1, wherein before determining the void region to be processed in the target depth map and the depth information of the target pixel points corresponding to the void region to be processed, the method further comprises:
controlling an unmanned aerial vehicle to fly along a flight route and to perform image acquisition on a detected object at a plurality of waypoint positions to obtain a plurality of images to be processed, wherein the degree of overlap of the object between adjacent images to be processed is greater than a preset threshold value; and
performing depth information optimization processing on the plurality of images to be processed to obtain the target depth map.
9. The image processing method according to claim 1, wherein the method further comprises:
performing three-dimensional reconstruction on the interpolated target depth map to obtain point cloud data.
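Claim 9 does not fix a reconstruction procedure; a common realization is back-projecting the filled depth map through a pinhole camera model. The intrinsics fx, fy, cx, cy below are placeholder values, not from the patent:

```python
# Sketch of claim 9: depth map -> point cloud via pinhole back-projection.
# Camera intrinsics are assumed placeholders.
import numpy as np

def depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                     # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[~np.isnan(pts).any(axis=1)]    # drop any residual holes

cloud = depth_to_point_cloud(np.full((4, 4), 2.0))
print(cloud.shape)   # (16, 3)
```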
10. An image processing apparatus, characterized by comprising:
a determining module, configured to determine a void region to be processed in a target depth map and depth information of target pixel points corresponding to the void region to be processed;
wherein each pixel point in the void region to be processed has no depth information, and the target pixel points are adjacent to edge pixel points of the void region to be processed;
the determining module being further configured to determine, according to a depth difference value between the target pixel points, whether a target interpolation model corresponding to the void region to be processed is a linear interpolation model or a nonlinear interpolation model; and
an interpolation module, configured to determine depth information corresponding to each pixel point in the void region to be processed based on the determined target interpolation model.
11. An electronic device, comprising a processor and a memory having computer-readable instructions stored thereon, wherein the processor is configured to perform the image processing method of any one of claims 1 to 9 when executing the computer-readable instructions.
12. A readable storage medium having stored thereon computer readable instructions executable by a processor to implement the image processing method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111144237.3A CN113837943A (en) | 2021-09-28 | 2021-09-28 | Image processing method and device, electronic equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113837943A true CN113837943A (en) | 2021-12-24 |
Family
ID=78967091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111144237.3A Pending CN113837943A (en) | 2021-09-28 | 2021-09-28 | Image processing method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837943A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05297921A (en) * | 1992-04-24 | 1993-11-12 | Makino Milling Mach Co Ltd | Shape data production method |
US20100315412A1 (en) * | 2009-06-15 | 2010-12-16 | Microsoft Corporation | Piecewise planar reconstruction of three-dimensional scenes |
WO2016029555A1 (en) * | 2014-08-25 | 2016-03-03 | 京东方科技集团股份有限公司 | Image interpolation method and device |
US20190172249A1 (en) * | 2017-12-01 | 2019-06-06 | Analytical Graphics, Inc. | Systems and Methods for Real-Time Large-Scale Point Cloud Surface Reconstruction |
CN109636732A (en) * | 2018-10-24 | 2019-04-16 | 深圳先进技术研究院 | A kind of empty restorative procedure and image processing apparatus of depth image |
WO2021102948A1 (en) * | 2019-11-29 | 2021-06-03 | 深圳市大疆创新科技有限公司 | Image processing method and device |
CN112991193A (en) * | 2020-11-16 | 2021-06-18 | 武汉科技大学 | Depth image restoration method, device and computer-readable storage medium |
CN113242419A (en) * | 2021-04-30 | 2021-08-10 | 电子科技大学成都学院 | 2D-to-3D method and system based on static building |
CN113327318A (en) * | 2021-05-18 | 2021-08-31 | 禾多科技(北京)有限公司 | Image display method, image display device, electronic equipment and computer readable medium |
Non-Patent Citations (1)
Title |
---|
ZHOU ZHENLI ET AL.: "A view synthesis method for 3D/multi-view video based on depth maps", Measurement & Control Technology, vol. 30, no. 5, 31 December 2011 (2011-12-31), pages 18-21 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114594798A (en) * | 2022-03-18 | 2022-06-07 | 广州极飞科技股份有限公司 | Map construction method, path planning method and device and electronic equipment |
CN115439543A (en) * | 2022-09-02 | 2022-12-06 | 北京百度网讯科技有限公司 | Method for determining hole position and method for generating three-dimensional model in metaverse |
CN115439543B (en) * | 2022-09-02 | 2023-11-10 | 北京百度网讯科技有限公司 | Method for determining hole position and method for generating three-dimensional model in metaverse |
CN115984105A (en) * | 2022-12-07 | 2023-04-18 | 深圳大学 | Method and device for optimizing hole convolution, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113837943A (en) | Image processing method and device, electronic equipment and readable storage medium | |
US10901429B2 (en) | Method and apparatus for outputting information of autonomous vehicle | |
CN106845321B (en) | Method and device for processing pavement marking information | |
US9215382B1 (en) | Apparatus and method for data fusion and visualization of video and LADAR data | |
US20180096525A1 (en) | Method for generating an ordered point cloud using mobile scanning data | |
CN112652065A (en) | Three-dimensional community modeling method and device, computer equipment and storage medium | |
CN106688017A (en) | Method and device for generating a point cloud map, and a computer system | |
US10634504B2 (en) | Systems and methods for electronic mapping and localization within a facility | |
CN110470333A (en) | Scaling method and device, the storage medium and electronic device of sensor parameters | |
CN111582022A (en) | Fusion method and system of mobile video and geographic scene and electronic equipment | |
CN112613107B (en) | Method, device, storage medium and equipment for determining construction progress of pole and tower engineering | |
CN115236644A (en) | Laser radar external parameter calibration method, device, equipment and storage medium | |
CN110705381A (en) | Remote sensing image road extraction method and device | |
CN116051777B (en) | Super high-rise building extraction method, apparatus and readable storage medium | |
CN115240168A (en) | Perception result obtaining method and device, computer equipment and storage medium | |
CN114581464A (en) | Boundary detection method and device, electronic equipment and computer readable storage medium | |
CN115588144A (en) | Real-time attitude capturing method, device and equipment based on Gaussian dynamic threshold screening | |
CN115457354A (en) | Fusion method, 3D target detection method, vehicle-mounted device and storage medium | |
CN113011220A (en) | Spike number identification method and device, storage medium and processor | |
KR100709142B1 (en) | Spatial information structure method based image and system thereof | |
CN116402693B (en) | Municipal engineering image processing method and device based on remote sensing technology | |
Oliveira et al. | Occlusion detection by height gradient for true orthophoto generation, using LiDAR data | |
CN111476062A (en) | Lane line detection method and device, electronic equipment and driving system | |
CN117437357A (en) | Model construction method and device, nonvolatile storage medium and electronic equipment | |
Khan et al. | Clifford geometric algebra-based approach for 3D modeling of agricultural images acquired by UAVs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||