CN113724373A - Modeling method and device of GIS (gas insulated switchgear) equipment, computer equipment and storage medium - Google Patents


Info

Publication number
CN113724373A
Authority
CN
China
Prior art keywords: image, GIS, model, equipment, digital model
Prior art date
Legal status
Pending
Application number
CN202111028694.6A
Other languages
Chinese (zh)
Inventor
刘华
陶冠男
杨文清
Current Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202111028694.6A
Publication of CN113724373A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Abstract

The application relates to a modeling method and apparatus for GIS equipment, a computer device, and a storage medium. The method includes acquiring multi-frame video images, industrial CT images, and drawing design parameters of the GIS equipment; reversely constructing an external digital model of the GIS equipment from the video images; reversely constructing an internal digital model of the GIS equipment from the industrial CT images; and constructing the GIS equipment model from the drawing design parameters, the external digital model, and the internal digital model. Panoramic information of the equipment can thus be reversely generated from information such as videos and industrial CT images of the GIS equipment acquired on site. By combining idealized model parameters (the drawing design parameters) with the equipment panoramic information (the video images and industrial CT images), the edge gaps and internal structure of the equipment model better match the on-site reality, the discrepancy between the virtual model and the real equipment is eliminated, and a solid foundation is laid for subsequent simulation calculations based on the GIS digital twin model.

Description

Modeling method and device of GIS (gas insulated switchgear) equipment, computer equipment and storage medium
Technical Field
The present application relates to the field of power system technologies, and in particular to a modeling method and apparatus for GIS equipment, a computer device, and a storage medium.
Background
Gas-insulated metal-enclosed switchgear (GIS) has the advantages of flexible configuration, small footprint, reliable operation, and long service life, and is therefore widely used in power systems.
How to analyze and monitor GIS equipment so as to guide production and operation is therefore very important. In the prior art, a model is constructed from the drawing design parameters of the GIS equipment, and that model is then analyzed and monitored to guide the production and operation of the GIS equipment.
However, models constructed by the existing method do not match the actual conditions of the application site and have low accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide a modeling method and apparatus for GIS equipment, a computer device, and a storage medium that accurately reflect the actual conditions of the application site.
In a first aspect, the present application provides a modeling method for a GIS device, the method including:
acquiring multi-frame video images, industrial CT images and drawing design parameters of GIS equipment;
reversely constructing an external digital model of the GIS equipment according to the video images of the GIS equipment;
according to the industrial CT image, an internal digital model of the GIS equipment is reversely constructed;
and constructing a GIS equipment model according to the drawing design parameters, the external digital model of the GIS equipment and the internal digital model of the GIS equipment.
In one embodiment, reversely constructing an external digital model of a GIS device according to a video image of each GIS device includes:
determining key frame video images from the video images;
determining a seed patch of the external model according to the key frame video image;
and constructing an external digital model of the GIS device according to the seed patches.
In one embodiment, determining a key frame video image from the video images comprises:
carrying out gray value processing on each video image, and calculating the gray change rate of each video image by adopting an average gradient method;
and comparing the gray scale change rate with a change rate threshold, and if the gray scale change rate is greater than the change rate threshold, determining that the video image is a key frame video image.
In one embodiment, determining a seed patch of an external model from a key frame video image comprises:
extracting the characteristic points of the key frame video images and determining the characteristic matching points of two continuous frames of key frame video images;
determining the center coordinates of the seed surface patches by adopting a triangulation method according to the feature matching points;
determining a normal vector of the seed surface patch according to the central coordinate of the seed surface patch and the central coordinate of the camera;
and constructing the seed surface patch according to the central coordinate of the seed surface patch, the normal vector of the seed surface patch and the reference image.
In one embodiment, extracting feature points of a key frame video image and determining feature matching points of two consecutive key frame video images comprises:
acquiring the average energy variation of the local detection window at each position when the local detection window moves along different directions on the key frame video image;
if the average energy variation is larger than a preset energy variation threshold, extracting a central pixel point at the position of the local detection window as a feature point;
acquiring a pixel distance between a first characteristic point on a first key frame video image and a second characteristic point on a second key frame video image; the first key frame video image and the second key frame video image are any two continuous video images;
taking the first feature point and the second feature point whose pixel distance is less than or equal to a preset distance threshold as reference matching points, and determining whether the reference matching points satisfy the epipolar constraint;
and if so, determining the reference matching points as a pair of feature matching points.
In one embodiment, constructing an external digital model of a GIS device from a seed patch includes:
determining adjacent seed patches according to the central coordinates of the seed patches and the normal vectors of the seed patches;
and splicing the adjacent seed patches to obtain an external digital model of the GIS device.
In one embodiment, the internal digital model of the GIS device is reversely constructed according to the industrial CT image, and the method comprises the following steps:
preprocessing an industrial CT image to obtain segmentation images of different areas of GIS equipment;
determining the contour points of each segmented image by using an 8-neighborhood single-pixel tracking method, and forming a closed contour plan of each segmented image from its contour points;
reconstructing a curved surface patch for each segmented image by fitting a cubic spline surface to its closed contour plan;
and splicing the curved surface patches of the segmented images to construct the internal digital model of the GIS device.
In one embodiment, the preprocessing the industrial CT image to obtain segmented images of different regions of the GIS device includes:
carrying out gray scale nonlinear transformation on the industrial CT image to obtain an enhanced industrial CT image;
denoising the enhanced industrial CT image by adopting a filtering algorithm to obtain a denoised industrial CT image;
and according to the image gray information of the industrial CT image subjected to noise reduction, segmenting the image subjected to noise reduction to obtain segmented images of different regions of the GIS equipment.
In one embodiment, constructing a GIS device model according to drawing design parameters, an external digital model of the GIS device, and an internal digital model of the GIS device includes:
and respectively substituting the drawing design parameters into an external digital model of the GIS equipment and an internal digital model of the GIS equipment to obtain a GIS equipment digital twin model as the GIS equipment model.
In a second aspect, the present application provides a modeling apparatus for a GIS device, the apparatus comprising:
the acquisition module is used for acquiring multi-frame video images, industrial CT images and drawing design parameters of the GIS equipment;
the external digital model building module is used for reversely building an external digital model of the GIS equipment according to the video images of the GIS equipment;
the internal digital model building module is used for reversely building an internal digital model of the GIS equipment according to the industrial CT image;
and the GIS equipment model determining module is used for constructing the GIS equipment model according to the drawing design parameters, the external digital model of the GIS equipment and the internal digital model of the GIS equipment.
In a third aspect, the present application provides a computer device comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the method in any one of the above embodiments of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method in any of the embodiments of the first aspect described above.
According to the modeling method and apparatus for GIS equipment, the computer device, and the storage medium described above, multi-frame video images, industrial CT images, and drawing design parameters of the GIS equipment are acquired; an external digital model of the GIS equipment is reversely constructed from the video images; an internal digital model of the GIS equipment is reversely constructed from the industrial CT images; and the GIS equipment model is constructed from the drawing design parameters, the external digital model, and the internal digital model. Panoramic information of the equipment can thus be reversely generated from information such as videos and industrial CT images of the GIS equipment acquired on site. By combining idealized model parameters (the drawing design parameters) with the equipment panoramic information (the video images and industrial CT images), the edge gaps and internal structure of the equipment model better match the on-site reality, the discrepancy between the virtual model and the real equipment is eliminated, and a solid foundation is laid for subsequent simulation calculations based on the GIS digital twin model.
Drawings
FIG. 1 is a diagram of an application environment of a modeling method for GIS devices in one embodiment;
FIG. 2 is a schematic flow chart diagram of a modeling method for GIS devices in one embodiment;
FIG. 3 is a schematic flow chart of a modeling method for GIS equipment in another embodiment;
FIG. 4 is a schematic flow chart of a modeling method for GIS equipment in another embodiment;
FIG. 5 is a schematic flow chart of a modeling method for GIS devices in another embodiment;
FIG. 6 is a schematic flow chart of a modeling method for GIS equipment in another embodiment;
FIG. 7 is a schematic flow chart of a method for modeling GIS equipment in another embodiment;
FIG. 8 is a schematic flow chart diagram illustrating a method for modeling GIS devices in another embodiment;
FIG. 9 is a schematic flow chart diagram illustrating a method for modeling GIS devices in another embodiment;
FIG. 10 is a block diagram showing the structure of a modeling apparatus of a GIS device in one embodiment;
FIG. 11 is a block diagram showing the construction of a modeling apparatus of a GIS device in another embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Gas-insulated metal-enclosed switchgear (GIS) is a main switching device in the power grid and an important high-voltage apparatus in the extra-high-voltage grid. GIS equipment is widely used in power systems because of its flexible configuration, small footprint, reliable operation, and long service life. However, with the large-scale and long-term use of GIS, substation accidents caused by GIS have gradually increased and seriously affect the stable operation of the power grid, so analyzing and monitoring GIS equipment to guide production and operation is very important.
With the rapid development of emerging technologies such as information fusion, intelligent sensing, and network information, the fusion of power equipment with network information technology, intelligent technology, and digital technology has become a trend in the development of the power industry. The digital twin is the best technology for realizing the transformation and upgrading from the traditional power plant to the intelligent power plant. A digital twin is in essence a digital model fusing multiple probabilities, multiple scales, and multiple physical fields: on the basis of the actual physical entity, a virtual model mapped to that entity is constructed, and the whole life-cycle process of the physical entity is reflected and described through a large amount of historical data and real-time data transmitted by sensors. Deeply fusing digital twin technology with GIS equipment therefore enables interactive fusion and mutual mapping between the physical GIS equipment and its digital twin, realizes functions such as real-time simulation, optimization analysis, and supervision and alarm of the GIS equipment, and provides intelligent optimization guidance for operators.
The modeling method for GIS equipment provided by the application can be applied to the application environment shown in FIG. 1. The application environment includes: a terminal 101, a camera 102, an industrial CT machine 103, and GIS equipment 104. The terminal 101 can communicate with the camera 102 and the industrial CT machine 103 through a network. External images of the GIS equipment can be captured from all directions by the camera 102 and transmitted to the terminal. The industrial CT machine 103 scans the internal devices of the GIS equipment and transmits the internal images to the terminal, and the terminal 101 analyzes and processes the external and internal images to construct the internal and external digital models of the GIS equipment. The terminal 101 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device.
In an embodiment, as shown in fig. 2, a modeling method for a GIS device is provided, which is described by taking the method as an example of being applied to the terminal in fig. 1, and includes the following steps:
s202, obtaining multi-frame video images, industrial CT images and drawing design parameters of the GIS device.
The video images are images of the GIS equipment captured by the camera from all directions. The industrial CT images are images of the interior of the GIS equipment captured by an industrial CT machine. The drawing design parameters are the parameters used to design the GIS equipment and may include the length, width, and height of the GIS equipment, which is not limited here.
Specifically, the terminal acquires multi-frame video images of the GIS equipment captured by the camera from all directions. For example, video images of the GIS equipment are captured in eight directions (east, south, west, north, northeast, northwest, southeast, and southwest) and transmitted to the terminal. The GIS equipment is scanned with an industrial CT machine, and the captured industrial CT images are sent to the terminal. The drawing design parameters may be data pre-stored in the terminal.
And S204, reversely constructing an external digital model of the GIS equipment according to the video images of the GIS equipment.
Specifically, after the terminal acquires the multi-frame video images of the GIS equipment, it performs enhancement, filtering, and other processing on them, then analyzes the video images to obtain the characteristic parameters for constructing the external digital model, and constructs the external digital model. The characteristic parameters may be extracted by inputting the video images into a neural network model, for example by extracting the feature points of each frame. Matched feature point pairs of two frames are then determined from the feature points of each frame, a plurality of seed patches are constructed, and the seed patches are spliced to obtain the external digital model of the GIS equipment. Alternatively, gray-scale processing may be performed on each frame to determine its gray-scale parameters, key frame video images are screened according to those parameters, the feature points of the key frames are extracted, the matching feature points of two consecutive key frames are determined by a fast nearest-neighbor method, seed patches are established from the matching feature points, and the seed patches are spliced according to a preset rule to obtain the external digital model, which is not limited here.
And S206, reversely constructing an internal digital model of the GIS equipment according to the industrial CT image.
Specifically, after the industrial CT images are acquired, they may be analyzed and processed, for example by gray-scale processing and segmentation. For example, after the industrial CT image is segmented, contour-point tracking may be performed on each segmented industrial CT image to obtain a contour plan of each segmented image, curved surfaces may be generated from the contour plans, and the curved surfaces may be stitched to obtain the internal digital model. Alternatively, the industrial CT image may be input into a pre-constructed neural network model to obtain contour plans of different regions of the internal devices of the GIS equipment, the contour plans are turned into curved surfaces, and the curved surfaces are spliced to obtain the internal digital model of the GIS equipment, which is not limited here.
And S208, constructing a GIS equipment model according to the drawing design parameters, the external digital model of the GIS equipment and the internal digital model of the GIS equipment.
Specifically, the digital twin model of the GIS device may be obtained as the GIS device model by substituting the drawing design parameters into the external digital model of the GIS device and the internal digital model of the GIS device, respectively. Alternatively, the design parameters of the drawing are respectively substituted into the external digital model of the GIS device and the internal digital model of the GIS device, and then the external digital model of the GIS device and the internal digital model of the GIS device are fused to obtain the GIS device model, which is not limited herein.
According to the modeling method for GIS equipment described above, multi-frame video images, industrial CT images, and drawing design parameters of the GIS equipment are acquired; an external digital model of the GIS equipment is reversely constructed from the video images; an internal digital model of the GIS equipment is reversely constructed from the industrial CT images; and the GIS equipment model is constructed from the drawing design parameters, the external digital model, and the internal digital model. Panoramic information of the equipment can thus be reversely generated from information such as video images and industrial CT images of the GIS equipment acquired on site. By combining idealized model parameters (the drawing design parameters) with the equipment panoramic information (the video images and industrial CT images), the edge gaps and internal structure of the equipment model better match the on-site reality, the discrepancy between the virtual model and the real equipment is eliminated, and a solid foundation is laid for subsequent simulation calculations based on the GIS digital twin model.
In an embodiment, as shown in fig. 3, the reversely constructing the external digital model of the GIS device according to the video images of the GIS devices includes:
s302, determining key frame video images from the video images.
Specifically, the key frame video image can be obtained by inputting each video image into a preset neural network model, or the key frame video image can be determined according to the gray value parameter by processing the gray value of each video image. The gray value parameter may include a gray value change rate of each pixel in the video image.
Further, as shown in fig. 4, determining a key frame video image from the video images includes:
s402, carrying out gray value processing on each video image, and calculating the gray change rate of each video image by adopting an average gradient method.
Specifically, the graying process may be performed by direct grayscale conversion, image inversion, logarithmic conversion, or the like, which is not limited here. After the gray value processing, the average gradient method is used to reflect the rate of change of the gray values of the video image in multiple directions, i.e. the average gradient value over the image is solved according to the formula

S(f, I) = (1 / #(I)) · Σ_{(x, y) ∈ I} √( [ (∂f/∂x)² + (∂f/∂y)² ] / 2 )

and taken as the gray change rate. In the formula, S(f, I) is the average gradient value of the image, I is the entire image domain excluding the image boundary, #(I) is the number of pixels in the image domain, x and y are pixel coordinates, and f is the gray level.
S404, comparing the gray scale change rate with a change rate threshold value, and if the gray scale change rate is larger than the change rate threshold value, determining that the video image is a key frame video image.
Specifically, the gray change rate is compared with the change rate threshold, for example by taking a difference, taking a quotient, or comparing directly, which is not limited here. If the gray change rate is greater than the change rate threshold, the video image is determined to be a key frame video image.
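As a minimal illustrative sketch (assuming OpenCV and NumPy are available and that the change-rate threshold is set empirically), the key frame screening described above might look roughly as follows in Python:

```python
import cv2
import numpy as np

def average_gradient(gray: np.ndarray) -> float:
    """Average gradient S(f, I) over the interior of a grayscale image."""
    f = gray.astype(np.float64)
    # Forward differences; the image boundary is excluded from the domain I.
    gx = f[:-1, 1:] - f[:-1, :-1]
    gy = f[1:, :-1] - f[:-1, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def select_keyframes(frames, rate_threshold: float):
    """Keep frames whose gray-level change rate exceeds the threshold."""
    keyframes = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if average_gradient(gray) > rate_threshold:
            keyframes.append(frame)
    return keyframes
```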
S304, determining a seed patch of the external model according to the key frame video image.
Specifically, after the key frame images are obtained, feature points of the key frame images can be extracted, and a plurality of seed patches of the external model are constructed by using distance relations among the feature points.
Further, as shown in fig. 5, determining a seed patch of the external model according to the key frame video image includes:
and S502, extracting the characteristic points of the key frame video images and determining the characteristic matching points of two continuous frames of key frame video images.
Specifically, the key frame images are input into a preset feature point extraction neural network, feature points of the key frame images are extracted, and after the feature points of each key frame image are determined, matching feature points are extracted from two continuous key frame images. Optionally, the number of feature points is at least one.
Further, as shown in fig. 6, extracting feature points of the key frame video images and determining feature matching points of two consecutive key frame video images includes:
s602, acquiring the average energy variation of the local detection window at each position when the local detection window moves along different directions on the key frame video image.
Specifically, after the key frame video image is obtained, a preset local detection window is moved over the key frame image along different directions, and the average energy variation of the local detection window at each position is obtained. For example, the autocorrelation matrix of the image brightness of the key frame video image is set as

M = [ P  O ; O  Q ]

where P, Q, and O are formed from the products of the image gradients in the x and y directions (I_x², I_y², and I_x·I_y) summed over the local detection window. The Harris feature point response function is

R = det(M) − k · (trace(M))²

where det(M) = λ₁·λ₂ = PQ − O², trace(M) = λ₁ + λ₂ = P + Q, a point is retained when R > T for a given threshold T, and k is an empirical value, typically taken to be 0.1.
S604, if the average energy variation is larger than a preset energy variation threshold, extracting the central pixel point of the position of the local detection window as a feature point.
Specifically, the average energy variation is compared with a preset energy variation threshold; if it is larger, the central pixel point at the position of the local detection window is extracted as a feature point. A feature point corresponds to a local maximum of the response function R: when the R value of a pixel on the image is greater than the given threshold T, the point is considered an image feature point. Each frame may contain a plurality of feature points.
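A minimal sketch of this Harris-type feature point extraction, assuming OpenCV and NumPy and treating the window size, the constant k, and the threshold T as placeholder choices:

```python
import cv2
import numpy as np

def harris_feature_points(gray: np.ndarray, k: float = 0.1, t_ratio: float = 0.01):
    """Return (row, col) coordinates whose Harris response R exceeds a threshold.

    R = det(M) - k * trace(M)^2, with M built from smoothed products of the
    image derivatives (P = sum Ix^2, Q = sum Iy^2, O = sum Ix*Iy).
    """
    f = gray.astype(np.float64)
    ix = cv2.Sobel(f, cv2.CV_64F, 1, 0, ksize=3)
    iy = cv2.Sobel(f, cv2.CV_64F, 0, 1, ksize=3)
    # Sum the derivative products over the local detection window (3x3 Gaussian here).
    p = cv2.GaussianBlur(ix * ix, (3, 3), 1.0)
    q = cv2.GaussianBlur(iy * iy, (3, 3), 1.0)
    o = cv2.GaussianBlur(ix * iy, (3, 3), 1.0)
    r = (p * q - o * o) - k * (p + q) ** 2
    threshold = t_ratio * r.max()          # stands in for the threshold T
    rows, cols = np.where(r > threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```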
S606, acquiring a pixel distance between a first characteristic point on the first key frame video image and a second characteristic point on the second key frame video image; the first key frame video image and the second key frame video image are any two continuous video images.
Specifically, after the feature points of each key frame video image are obtained, the matching feature points of two consecutive key frame video images need to be determined. The pixel distance between a first feature point on the first key frame video image and a second feature point on the second key frame video image is calculated using the formula

D = √( (x_p1 − x_p2)² + (y_p1 − y_p2)² )

where (x_p1, y_p1) are the pixel coordinates of the first feature point p1 in the first key frame video image and (x_p2, y_p2) are the pixel coordinates of the second feature point p2 in the second key frame video image.
S608, taking the first feature point and the second feature point whose pixel distance is less than or equal to the preset distance threshold as reference matching points, and determining whether the reference matching points satisfy the epipolar constraint.
Specifically, a first feature point and a second feature point whose pixel distance D is less than or equal to the preset distance threshold r are taken as reference matching points. It is then determined whether the first feature point p1 and the second feature point p2 satisfy the epipolar constraint

p2ᵀ · F · p1 = 0

where p1 = [x_p1, y_p1, 1]ᵀ and p2 = [x_p2, y_p2, 1]ᵀ are the (homogeneous) pixel coordinate vectors of the points in the corresponding key frames, and F is the fundamental matrix, which is typically estimated using a least squares method.
And S610, if so, determining the reference matching points as a pair of feature matching points.
Specifically, if yes, the reference matching point is determined as a pair of feature matching points of the first key frame video image and the second key frame video image.
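A rough sketch of this matching step, in which OpenCV's fundamental-matrix estimation stands in for the least-squares estimate of F, and where the distance threshold r and the epipolar tolerance are placeholder values:

```python
import numpy as np
import cv2

def match_feature_points(pts1, pts2, r: float = 20.0, epi_tol: float = 1e-2):
    """Pair points from two consecutive keyframes by pixel distance, then keep
    only pairs that (approximately) satisfy the epipolar constraint p2^T F p1 = 0."""
    candidates = []
    for p1 in pts1:
        for p2 in pts2:
            if np.hypot(p1[0] - p2[0], p1[1] - p2[1]) <= r:
                candidates.append((p1, p2))
    if len(candidates) < 8:
        return candidates  # not enough pairs to estimate F reliably
    a = np.float64([c[0] for c in candidates])
    b = np.float64([c[1] for c in candidates])
    f_mat, _ = cv2.findFundamentalMat(a, b, cv2.FM_8POINT)  # least-squares 8-point estimate
    if f_mat is None:
        return candidates
    matches = []
    for (x1, y1), (x2, y2) in candidates:
        hp1 = np.array([x1, y1, 1.0])
        hp2 = np.array([x2, y2, 1.0])
        if abs(hp2 @ f_mat @ hp1) < epi_tol:
            matches.append(((x1, y1), (x2, y2)))
    return matches
```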
And S504, determining the center coordinates of the seed surface patch by adopting a triangulation method according to the feature matching points.
Specifically, after the feature matching points are obtained, the center coordinates c (p) of the seed patch are determined by a triangulation method according to the coordinates of the feature matching points.
And S506, determining the normal vector of the seed patch according to the central coordinate of the seed patch and the central coordinate of the camera.
The center coordinate of the camera is the optical center O(Iᵢ) of the camera.
Specifically, the normal vector n(p) of the seed patch is determined according to the formula n(p) ← c(p)O(Iᵢ) / |c(p)O(Iᵢ)|, i.e. the unit vector pointing from the patch center c(p) towards the optical center O(Iᵢ).
And S508, constructing the seed surface patch according to the center coordinates of the seed surface patch, the normal vector of the seed surface patch and the reference image.
Specifically, the seed patch is constructed with the patch center coordinate as its center and oriented by the patch normal vector, with reference to the reference image captured by the camera.
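One possible sketch of building a seed patch from a single matched pair, assuming calibrated 3 x 4 projection matrices of the two views are available and using OpenCV's triangulation as a stand-in for the triangulation step:

```python
import numpy as np
import cv2

def seed_patch(p1_xy, p2_xy, proj1, proj2, camera_center):
    """Return (center c(p), normal n(p)) of a seed patch from one matched pair.

    proj1/proj2 are 3x4 projection matrices of the two keyframe views and
    camera_center is the optical centre O(I_i) of the reference view.
    """
    a = np.float64(p1_xy).reshape(2, 1)
    b = np.float64(p2_xy).reshape(2, 1)
    x_h = cv2.triangulatePoints(proj1, proj2, a, b)       # homogeneous 4x1 point
    center = x_h[:3, 0] / x_h[3, 0]                       # c(p)
    direction = np.asarray(camera_center, float) - center # vector from patch to camera
    normal = direction / np.linalg.norm(direction)        # n(p) = c(p)O(I_i)/|c(p)O(I_i)|
    return center, normal
```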
And S306, constructing an external digital model of the GIS device according to the seed patches.
Specifically, after a plurality of seed patches are determined, the seed patches need to be spliced by adopting a preset splicing rule, and an external digital model of the GIS device is constructed.
Further, as shown in fig. 7, constructing an external digital model of the GIS device according to the seed patch includes:
s702, determining the adjacent seed patches according to the center coordinates of the seed patches and the normal vectors of the seed patches.
Specifically, based on the characteristic that neighbouring patches have similar normal vectors and positions, the spatial patches around a seed patch are gradually reconstructed. Whether two patches are adjacent can be judged by the condition

|(c(p1) − c(p2)) · n(p1)| + |(c(p1) − c(p2)) · n(p2)| < ρ

where c(p1) is the center of the first seed patch, c(p2) is the center of the second seed patch, n(p1) and n(p2) are their normal vectors, and ρ is a distance threshold. When this condition is satisfied, the seed patches are adjacent. The degree of expansion and diffusion is determined by whether the depth of the pixel points in the video key frames of the GIS equipment is continuous.
And S704, splicing the adjacent seed patches to obtain an external digital model of the GIS device.
Specifically, the external digital model of the GIS device can be obtained by splicing adjacent seed patches.
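The adjacency test and a simplified grouping of adjacent patches might be sketched as follows; the distance threshold ρ is a placeholder, and actual patch stitching would also use the depth-continuity check mentioned above:

```python
import numpy as np

def patches_adjacent(c1, n1, c2, n2, rho: float) -> bool:
    """Adjacency test: |(c1 - c2) . n1| + |(c1 - c2) . n2| < rho."""
    d = np.asarray(c1, float) - np.asarray(c2, float)
    return abs(d @ np.asarray(n1, float)) + abs(d @ np.asarray(n2, float)) < rho

def group_adjacent_patches(patches, rho: float):
    """Greedy grouping of seed patches (each a (center, normal) pair) into
    clusters that a surface-stitching step would merge into one surface."""
    groups = []
    for c, n in patches:
        placed = False
        for g in groups:
            if any(patches_adjacent(c, n, gc, gn, rho) for gc, gn in g):
                g.append((c, n))
                placed = True
                break
        if not placed:
            groups.append([(c, n)])
    return groups
```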
In this embodiment, key frame video images are determined from the video images, seed patches of the external model are determined from the key frame video images, and the external digital model of the GIS equipment is constructed from the seed patches. A model can thus be constructed from external images of the GIS equipment captured at the application site, so that the model closely matches the GIS equipment as it is actually used.
The above embodiment describes how to construct an external digital model of a GIS device, and now describes how to construct an internal digital model of a GIS device with an embodiment, in an embodiment, as shown in fig. 8, the reverse construction of the internal digital model of a GIS device according to an industrial CT image includes:
s802, preprocessing the industrial CT image to obtain the segmentation images of different areas of the GIS device.
The preprocessing may include, but is not limited to, image enhancement, image denoising, image segmentation, and the like.
Specifically, after the industrial CT image is acquired, preprocessing such as image denoising and enhancing may be performed on the image, and the processed image is input into a preset segmentation neural network model, so as to realize segmentation of different regions of each device of the GIS equipment in the industrial CT image, and obtain a segmentation image of each device region.
Further, as shown in fig. 9, the preprocessing is performed on the industrial CT image to obtain segmented images of different regions of the GIS device, including:
and S902, carrying out gray scale nonlinear transformation on the industrial CT image to obtain an enhanced industrial CT image.
Specifically, the gray values of the original industrial CT image can be changed through a gray-scale nonlinear transformation to achieve gray-level enhancement. Optionally, the image is enhanced with a gray-scale logarithmic transformation that maps the original gray value f(x, y) to a new gray value g(x, y), where a is the vertical offset of the logarithmic curve and b is the degree of bending of the logarithmic curve; a and b reflect the nonlinearity of the gray-scale conversion, and their optimal values need to be set manually, over repeated trials, according to the quality of the CT image.
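Since the exact transform appears only as a figure in the filing, the sketch below assumes one common form of the logarithmic mapping, g(x, y) = a + ln(f(x, y) + 1) / b, purely for illustration; a and b remain manually tuned parameters:

```python
import numpy as np

def log_enhance(ct_image: np.ndarray, a: float = 0.0, b: float = 0.022) -> np.ndarray:
    """Gray-scale nonlinear (logarithmic) enhancement in an assumed common form;
    a is the vertical offset and b the curvature of the logarithmic curve."""
    f = ct_image.astype(np.float64)
    g = a + np.log1p(f) / b
    return np.clip(g, 0, 255).astype(np.uint8)
```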
And S904, denoising the enhanced industrial CT image by adopting a filtering algorithm to obtain a denoised industrial CT image.
Specifically, Gaussian filtering may be used, operating with a 3 × 3 Gaussian template A whose values are taken from a Gaussian distribution. The CT image is scanned with the template as the receptive field and convolved with it; the result of this operation is the denoised CT image.
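A sketch of the denoising step, assuming the commonly used normalized 3 x 3 Gaussian template (the template actually used in the filing is reproduced only as a figure):

```python
import numpy as np
import cv2

# A commonly used normalized 3x3 Gaussian template (assumed for illustration).
GAUSSIAN_3X3 = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]], dtype=np.float64) / 16.0

def denoise_ct(ct_image: np.ndarray) -> np.ndarray:
    """Scan the CT image with the 3x3 template and convolve (correlate) with it."""
    return cv2.filter2D(ct_image, -1, GAUSSIAN_3X3)
```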
And S906, segmenting the noise-reduced image according to the image gray scale information of the noise-reduced industrial CT image to obtain segmented images of different areas of the GIS device.
Specifically, after the first two preprocessing steps, the background of the industrial CT image is black, i.e. its gray values are near 0, while the imaged component regions have relatively large gray values and appear as white areas in the image. Based on this, the image is segmented using the gradient change and the similarity of the image gray information. After segmentation, the image is divided into segmented images of the different device regions of the GIS equipment.
And S804, determining the contour points of each segmented image by adopting an 8-neighborhood single-pixel tracking method, and forming a closed contour plan of each segmented image according to its contour points.
Specifically, on the basis of the industrial CT image having been divided into segmented images of different regions, image contour extraction is performed to obtain the cross-sectional contour features. Optionally, the contour points of the GIS equipment industrial CT image can be determined by an 8-neighborhood single-pixel tracking method: as long as one of the 8 pixels surrounding a given pixel has the pixel value of the background region, that pixel is a contour point. The specific tracking steps may include: scanning each segmented image of the industrial CT image line by line and taking the first boundary point at the upper left of each segmented image as the initial tracking point; tracking the contour of the single region; drawing the contour boundary; and filling the tracked region and tracking the closed contour to obtain the closed contour plan of the segmented image.
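A simplified sketch of the 8-neighbourhood contour-point test (the full tracking of a closed contour is omitted here), assuming the segmented image is a binary array with background pixels equal to 0:

```python
import numpy as np

def contour_points(segmented: np.ndarray) -> list:
    """Return contour points of a binary segmented image.

    A foreground pixel is a contour point if at least one of its 8 neighbours
    belongs to the background (value 0), matching the criterion above."""
    pts = []
    rows, cols = segmented.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            if segmented[y, x] == 0:
                continue
            neighbourhood = segmented[y - 1:y + 2, x - 1:x + 2]
            if np.any(neighbourhood == 0):
                pts.append((y, x))
    return pts
```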
And S806, reconstructing a curved surface patch for each segmented image by fitting a cubic spline surface to its closed contour plan.
Specifically, two parameter directions (u, v) can be assigned to each data point of the closed contour plan, and the data points can be parameterized by the cumulative chord-length method:

u_{i,j} = Σ_{k=1..i} |Q_{k,j} − Q_{k−1,j}| / Σ_{k=1..m_u} |Q_{k,j} − Q_{k−1,j}|

where u_{i,j} is the cumulative chord length in the u direction, Q_{i,j} is the column of data points in the u direction, m_u and m_v are the numbers of curves in the u and v directions respectively, and i and j index the curves in the u and v directions. The cumulative chord length v_{i,j} in the v direction is obtained by the same formula. The closed contour plan is then reconstructed into curved surface patches according to the cubic B-spline surface equation

P(u, v) = Σ_i Σ_j d_{i,j} · N_{i,3}(u) · N_{j,3}(v)

which yields the curved surface patch of each segmented image; here d_{i,j} are the control vertices, and N_{i,3} and N_{j,3} are the cubic spline basis functions defined in the u and v directions respectively.
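An illustrative sketch of the chord-length parameterization and a bicubic spline surface fit; SciPy's bisplrep/bisplev are used here merely as a convenient stand-in for the explicit cubic B-spline formulation above:

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

def chord_length_params(points: np.ndarray) -> np.ndarray:
    """Normalized cumulative chord-length parameters for one row of data points."""
    diffs = np.linalg.norm(np.diff(points, axis=0), axis=1)
    u = np.concatenate([[0.0], np.cumsum(diffs)])
    return u / u[-1] if u[-1] > 0 else u

def fit_surface_patch(grid_points: np.ndarray):
    """Fit a bicubic spline surface to an (m x n x 3) grid of contour points.

    Returns a callable (u, v) -> (x, y, z)."""
    m, n, _ = grid_points.shape
    # Chord-length parameters along the two directions (averaged over rows/columns).
    u = np.mean([chord_length_params(grid_points[:, j, :]) for j in range(n)], axis=0)
    v = np.mean([chord_length_params(grid_points[i, :, :]) for i in range(m)], axis=0)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    tcks = [bisplrep(uu.ravel(), vv.ravel(), grid_points[..., k].ravel(),
                     kx=3, ky=3, s=m * n) for k in range(3)]
    def evaluate(pu: float, pv: float):
        return tuple(float(bisplev(pu, pv, t)) for t in tcks)
    return evaluate
```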
And S808, splicing the curved surface slices of the segmented images to construct an internal digital model of the GIS device.
Specifically, for each curved surface patch of the segmented image, each patch is spliced according to the non-segmented industrial CT image to obtain an internal digital model of the GIS device.
In this embodiment, the industrial CT image is preprocessed to obtain segmented images of different regions of the GIS equipment, the contour points of each segmented image are determined by an 8-neighborhood single-pixel tracking method, a closed contour plan of each segmented image is formed from its contour points, a curved surface patch of each segmented image is reconstructed by fitting a cubic spline surface to the closed contour plan, and the curved surface patches are spliced. The internal digital model of the GIS equipment can thus be constructed from industrial CT images captured at the application site, which matches the actual use of the equipment more closely and provides better guidance for operation.
The above embodiments explain how to construct the external and internal digital models of the GIS equipment. After the internal and external digital models are constructed, the drawing design parameters of the GIS equipment can be fused with them so that the finally constructed GIS equipment model is more accurate. In one embodiment, constructing a GIS equipment model according to the drawing design parameters, the external digital model of the GIS equipment, and the internal digital model of the GIS equipment includes:
and respectively substituting the drawing design parameters into an external digital model of the GIS equipment and an internal digital model of the GIS equipment to obtain a GIS equipment digital twin model as the GIS equipment model.
Specifically, the drawing design parameters are respectively substituted into the external digital model of the GIS equipment and the internal digital model of the GIS equipment to obtain a GIS equipment digital twin model, which serves as the GIS equipment model. For example, the design length, width, and height of the GIS equipment are substituted into the external and internal digital models to obtain a digital twin model that combines the numerical design data with the reconstructed geometry.
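The patent does not spell out how the drawing design parameters are substituted into the digital models, so the following is only a speculative sketch in which the design length, width, and height rescale the reconstructed geometry before the external and internal models are combined:

```python
import numpy as np

def apply_design_parameters(vertices: np.ndarray, design_lwh: tuple) -> np.ndarray:
    """Rescale a reconstructed mesh so its bounding box matches the drawing
    design length/width/height (one plausible reading of 'substituting' the
    parameters; the patent leaves the exact mechanism open)."""
    v = np.asarray(vertices, float)
    extent = v.max(axis=0) - v.min(axis=0)
    scale = np.asarray(design_lwh, float) / np.where(extent == 0, 1.0, extent)
    return (v - v.min(axis=0)) * scale

def build_digital_twin(external_vertices, internal_vertices, design_lwh):
    """Combine rescaled external and internal meshes into one twin-model geometry."""
    return {"external": apply_design_parameters(external_vertices, design_lwh),
            "internal": apply_design_parameters(internal_vertices, design_lwh)}
```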
In this embodiment, the GIS equipment digital twin model is obtained as the GIS equipment model by substituting the drawing design parameters into the external digital model of the GIS equipment and the internal digital model of the GIS equipment, respectively. Combining the idealized model parameters (the drawing design parameters) with the equipment panoramic information (the video images and industrial CT images) makes the edge gaps and internal structure of the equipment model better match the on-site reality, eliminates the discrepancy between the virtual model and the real equipment, and lays a solid foundation for subsequent simulation calculations based on the GIS digital twin model.
To facilitate understanding of those skilled in the art, the modeling method of the GIS device is further described in an embodiment, which includes:
and S101, acquiring multi-frame video images, industrial CT images and drawing design parameters of the GIS equipment.
And S102, performing gray value processing on each video image, and calculating the gray change rate of each video image by adopting an average gradient method.
S103, comparing the gray scale change rate with a change rate threshold value, and if the gray scale change rate is greater than the change rate threshold value, determining that the video image is a key frame video image.
And S104, acquiring the average energy variation of the local detection window at each position when the local detection window moves along different directions on the key frame video image.
And S105, if the average energy variation is larger than a preset energy variation threshold, extracting the central pixel point of the position of the local detection window as a feature point.
S106, acquiring a pixel distance between a first characteristic point on the first key frame video image and a second characteristic point on the second key frame video image; the first key frame video image and the second key frame video image are any two continuous video images.
S107, taking the first feature point and the second feature point whose pixel distance is less than or equal to a preset distance threshold as reference matching points, and determining whether the reference matching points satisfy the epipolar constraint.
And S108, if so, determining the reference matching points as a pair of feature matching points.
And S109, determining the center coordinates of the seed surface patch by adopting a triangulation method according to the feature matching points.
And S110, determining the normal vector of the seed patch according to the central coordinate of the seed patch and the central coordinate of the camera.
And S111, constructing the seed surface patch according to the center coordinate of the seed surface patch, the normal vector of the seed surface patch and the reference image.
And S112, determining the adjacent seed patches according to the central coordinates of the seed patches and the normal vectors of the seed patches.
And S113, splicing the adjacent seed patches to obtain an external digital model of the GIS device.
And S114, carrying out gray scale nonlinear transformation on the industrial CT image to obtain an enhanced industrial CT image.
And S115, denoising the enhanced industrial CT image by adopting a filtering algorithm to obtain a denoised industrial CT image.
And S116, segmenting the noise-reduced image according to the image gray scale information of the noise-reduced industrial CT image to obtain segmented images of different areas of the GIS device.
And S117, determining the contour points of each segmented image by adopting an 8-neighborhood single-pixel tracking method, and forming a closed contour plan of each segmented image according to its contour points.
And S118, reconstructing a curved surface patch for each segmented image by fitting a cubic spline surface to its closed contour plan.
And S119, splicing the curved surface slices of the segmented images to construct an internal digital model of the GIS device.
And S120, respectively substituting the design parameters of the drawing into an external digital model of the GIS equipment and an internal digital model of the GIS equipment to obtain a GIS equipment digital twin model as the GIS equipment model.
According to the modeling method for GIS equipment described above, multi-frame video images, industrial CT images, and drawing design parameters of the GIS equipment are acquired; an external digital model of the GIS equipment is reversely constructed from the video images; an internal digital model of the GIS equipment is reversely constructed from the industrial CT images; and the GIS equipment model is constructed from the drawing design parameters, the external digital model, and the internal digital model. Panoramic information of the equipment can thus be reversely generated from information such as videos and industrial CT images of the GIS equipment acquired on site. By combining idealized model parameters (the drawing design parameters) with the equipment panoramic information (the video images and industrial CT images), the edge gaps and internal structure of the equipment model better match the on-site reality, the discrepancy between the virtual model and the real equipment is eliminated, and a solid foundation is laid for subsequent simulation calculations based on the GIS digital twin model.
It should be understood that although the steps in the flowcharts of FIGS. 2-9 are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-9 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
The above embodiment describes a modeling method of a GIS device, and now an embodiment describes a modeling apparatus of a GIS device, and in an embodiment, as shown in fig. 10, there is provided a modeling apparatus of a GIS device, including:
the acquisition module 11 is used for acquiring multi-frame video images, industrial CT images and drawing design parameters of the GIS equipment;
the external digital model building module 12 is used for reversely building an external digital model of the GIS equipment according to the video images of the GIS equipment;
the internal digital model building module 13 is used for reversely building an internal digital model of the GIS equipment according to the industrial CT image;
and the GIS equipment model determining module 14 is used for constructing a GIS equipment model according to the drawing design parameters, the external digital model of the GIS equipment and the internal digital model of the GIS equipment.
In the above modeling apparatus for GIS equipment, the acquisition module acquires multi-frame video images, industrial CT images, and drawing design parameters of the GIS equipment; the external digital model building module reversely constructs an external digital model of the GIS equipment from the video images; the internal digital model building module reversely constructs an internal digital model of the GIS equipment from the industrial CT images; and the GIS equipment model determining module constructs the GIS equipment model from the drawing design parameters, the external digital model, and the internal digital model. Panoramic information of the equipment can thus be reversely generated from information such as videos and industrial CT images of the GIS equipment acquired on site. By combining idealized model parameters (the drawing design parameters) with the equipment panoramic information (the video images and industrial CT images), the edge gaps and internal structure of the equipment model better match the on-site reality, the discrepancy between the virtual model and the real equipment is eliminated, and a solid foundation is laid for subsequent simulation calculations based on the GIS digital twin model.
In one embodiment, as shown in FIG. 11, an external digital model building module 12 includes:
a first determining unit 121 configured to determine a key frame video image from among the video images;
a second determining unit 122, configured to determine a seed patch of the external model according to the key frame video image;
and a first constructing unit 123, configured to construct an external digital model of the GIS device according to the seed patch.
In an embodiment, the first determining unit is specifically configured to perform gray value processing on each video image, and calculate a gray change rate of each video image by using an average gradient method; and comparing the gray scale change rate with a change rate threshold, and if the gray scale change rate is greater than the change rate threshold, determining that the video image is a key frame video image.
In one embodiment, the second determining unit is specifically configured to extract feature points of a key frame video image, and determine feature matching points of two consecutive key frame video images; determining the center coordinates of the seed surface patches by adopting a triangulation method according to the feature matching points; determining a normal vector of the seed surface patch according to the central coordinate of the seed surface patch and the central coordinate of the camera; and constructing the seed surface patch according to the central coordinate of the seed surface patch, the normal vector of the seed surface patch and the reference image.
In an embodiment, the second determining unit is specifically configured to obtain an average energy variation of the local detection window at each position when the local detection window moves in different directions on the key frame video image; if the average energy variation is larger than a preset energy variation threshold, extract the central pixel point at the position of the local detection window as a feature point; acquire the pixel distance between a first feature point on a first key frame video image and a second feature point on a second key frame video image, the first and second key frame video images being any two consecutive video images; take the first feature point and the second feature point whose pixel distance is less than or equal to a preset distance threshold as reference matching points and determine whether the reference matching points satisfy the epipolar constraint; and if so, determine the reference matching points as a pair of feature matching points.
In an embodiment, the first constructing unit is specifically configured to determine adjacent seed patches according to the center coordinates of the seed patches and the normal vectors of the seed patches, and to splice the adjacent seed patches to obtain the external digital model of the GIS device.
In one embodiment, referring to fig. 11, the internal digital model building module 13 includes:
the image processing unit 131 is used for preprocessing the industrial CT image to obtain the segmentation images of different areas of the GIS equipment;
a closed contour plan determining unit 132, configured to determine contour points of each of the segmented images by using an 8-neighborhood single-pixel tracking method, and form a closed contour plan of each of the segmented images according to its contour points;
a surface patch reconstruction unit 133, configured to perform surface patch reconstruction on the closed contour plane graph by fitting a cubic spline surface to obtain a surface patch of each segmented image;
and a second constructing unit 134, configured to splice the curved patches of each segmented image, so as to construct an internal digital model of the GIS device.
In one embodiment, the image processing unit is specifically configured to perform gray scale nonlinear transformation on the industrial CT image to obtain an enhanced industrial CT image; denoising the enhanced industrial CT image by adopting a filtering algorithm to obtain a denoised industrial CT image; and according to the image gray information of the industrial CT image subjected to noise reduction, segmenting the image subjected to noise reduction to obtain segmented images of different regions of the GIS equipment.
In an embodiment, the GIS device model determining module is specifically configured to substitute the drawing design parameters into an external digital model of the GIS device and an internal digital model of the GIS device, respectively, to obtain a GIS device digital twin model as the GIS device model.
For specific definition of the modeling apparatus of the GIS device, reference may be made to the above definition of the modeling method of the GIS device, and details are not repeated here. All or part of each module in the modeling device of the GIS equipment can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of modeling a GIS device. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 12 is merely a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and modifications can be made without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. A modeling method for a GIS device, the method comprising:
acquiring multi-frame video images, industrial CT images, and drawing design parameters of the GIS device;
reversely constructing an external digital model of the GIS device according to the video images of the GIS device;
reversely constructing an internal digital model of the GIS device according to the industrial CT image;
and constructing a GIS device model according to the drawing design parameters, the external digital model of the GIS device, and the internal digital model of the GIS device.
2. The method of claim 1, wherein the reversely constructing the external digital model of the GIS device according to the video images of the GIS device comprises:
determining a key frame video image from each of the video images;
determining a seed patch of an external model according to the key frame video image;
and constructing an external digital model of the GIS device according to the seed patches.
3. The method of claim 2, wherein said determining a key frame video image from each of said video images comprises:
carrying out gray value processing on each video image, and calculating the gray change rate of each video image by adopting an average gradient method;
and comparing the gray change rate with a change rate threshold, and if the gray change rate is greater than the change rate threshold, determining that the video image is a key frame video image.
4. The method of claim 2, wherein determining a seed patch of an external model from the key frame video image comprises:
extracting feature points of the key frame video images and determining feature matching points of two consecutive frames of the key frame video images;
determining center coordinates of the seed patch by a triangulation method according to the feature matching points;
determining a normal vector of the seed patch according to the center coordinates of the seed patch and the center coordinates of the camera;
and constructing the seed patch according to the center coordinates of the seed patch, the normal vector of the seed patch, and the reference image.
5. The method according to claim 3, wherein said extracting feature points of said key frame video images and determining feature matching points of two consecutive frames of said key frame video images comprises:
acquiring the average energy variation of a local detection window at each position when the local detection window moves along different directions on the key frame video image;
if the average energy variation is larger than a preset energy variation threshold, extracting a central pixel point at the position of the local detection window as a feature point;
acquiring a pixel distance between a first feature point on a first key frame video image and a second feature point on a second key frame video image, wherein the first key frame video image and the second key frame video image are any two consecutive video images;
taking a first feature point and a second feature point whose pixel distance is less than or equal to a preset distance threshold as reference matching points, and judging whether the reference matching points satisfy an epipolar constraint rule;
and if so, determining the reference matching points as a pair of feature matching points.
6. The method of claim 3, wherein constructing the external digital model of the GIS device from the seed patches comprises:
determining adjacent seed patches according to the center coordinates of the seed patches and the normal vectors of the seed patches;
and splicing the adjacent seed patches to obtain an external digital model of the GIS device.
7. The method according to any one of claims 1-5, wherein the reversely constructing the internal digital model of the GIS device according to the industrial CT image comprises:
preprocessing the industrial CT image to obtain segmented images of different regions of the GIS device;
determining contour points of each segmented image by using an 8-neighborhood single-pixel tracking method, and forming a closed contour plane graph of each segmented image according to its contour points;
performing surface patch reconstruction on each closed contour plane graph by fitting a cubic spline surface, to obtain a surface patch of each segmented image;
and splicing the surface patches of the segmented images to construct the internal digital model of the GIS device.
8. The method of claim 7, wherein the preprocessing the industrial CT image to obtain segmented images of different regions of the GIS device comprises:
carrying out gray scale nonlinear transformation on the industrial CT image to obtain an enhanced industrial CT image;
denoising the enhanced industrial CT image by adopting a filtering algorithm to obtain a denoised industrial CT image;
and segmenting the denoised image according to the gray-scale information of the denoised industrial CT image to obtain segmented images of different regions of the GIS device.
9. The method of claim 1, wherein the constructing the GIS device model according to the drawing design parameters, the external digital model of the GIS device, and the internal digital model of the GIS device comprises:
substituting the drawing design parameters into the external digital model of the GIS device and the internal digital model of the GIS device, respectively, to obtain a GIS device digital twin model as the GIS device model.
10. An apparatus for modeling a GIS device, the apparatus comprising:
the acquisition module is used for acquiring multi-frame video images, industrial CT images, and drawing design parameters of the GIS device;
the external digital model building module is used for reversely constructing an external digital model of the GIS device according to the video images of the GIS device;
the internal digital model building module is used for reversely constructing an internal digital model of the GIS device according to the industrial CT image;
and the GIS device model determining module is used for constructing a GIS device model according to the drawing design parameters, the external digital model of the GIS device, and the internal digital model of the GIS device.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
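As an illustration of the key-frame selection recited in claim 3, the following is a minimal sketch assuming an OpenCV-based average-gradient measure of the gray change rate; the threshold value, function names, and frame-reading loop are illustrative only and are not part of the claimed method.

```python
import cv2
import numpy as np

def average_gradient(gray: np.ndarray) -> float:
    """Mean gradient magnitude of a grayscale frame (the 'gray change rate')."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return float(np.mean(np.sqrt(gx * gx + gy * gy)))

def select_key_frames(video_path: str, rate_threshold: float = 12.0):
    """Keep only frames whose gray change rate exceeds the change rate threshold."""
    capture = cv2.VideoCapture(video_path)
    key_frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # gray value processing
        if average_gradient(gray) > rate_threshold:      # threshold comparison
            key_frames.append(frame)
    capture.release()
    return key_frames
```

A frame whose average gradient exceeds the change rate threshold is kept as a key frame video image; the remaining frames are discarded before feature extraction.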
CN202111028694.6A 2021-09-02 2021-09-02 Modeling method and device of GIS (geographic information System) equipment, computer equipment and storage medium Pending CN113724373A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111028694.6A CN113724373A (en) 2021-09-02 2021-09-02 Modeling method and device of GIS (geographic information System) equipment, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113724373A true CN113724373A (en) 2021-11-30

Family

ID=78681247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111028694.6A Pending CN113724373A (en) 2021-09-02 2021-09-02 Modeling method and device of GIS (geographic information System) equipment, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113724373A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115395646A (en) * 2022-08-08 2022-11-25 北京中润惠通科技发展有限公司 Intelligent operation and maintenance system of digital twin traction substation
CN115395646B (en) * 2022-08-08 2023-04-07 北京中润惠通科技发展有限公司 Intelligent operation and maintenance system of digital twin traction substation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination