CN115797675A - Artificial intelligence image processing method and system - Google Patents
Artificial intelligence image processing method and system
- Publication number: CN115797675A
- Application number: CN202310086531.6A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Abstract
The invention provides an artificial intelligence image processing method and system in the technical field of image processing. An image sequence of a target three-dimensional object is clustered to obtain a plurality of image subsequences; spatial analysis is then carried out to obtain a plurality of spatial indexes; the images in each image subsequence are classified to obtain a plurality of classification results, based on which point cloud identification parameters are configured; finally, a point cloud identification system is connected to carry out point cloud identification on the image sequence of the target three-dimensional object, yielding a three-dimensional image modeling result.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an artificial intelligence image processing method and system.
Background
Image processing is involved in many fields, and the quality of the image processing technology influences final decisions. Compared with a two-dimensional image, a three-dimensional image has an obvious advantage in information coverage: information is captured based on three-dimensional characteristics, and information variation caused by changes of the mapping mode is avoided. Processing three-dimensional images has therefore become a current research hotspot. A conventional image processing method either directly performs a series of operations on an image or relies on an auxiliary processing device to meet users' image processing requirements, but owing to the limitations of current technology the final image processing result is limited, and further technical innovation is needed.
In the prior art, the image processing mode is conventional and insufficiently intelligent, and two-dimensional images cannot be optimized based on three-dimensional characteristics, so the image processing effect is limited.
Disclosure of Invention
The application provides an artificial intelligence image processing method and system, which are used for solving the technical problems in the prior art that the image processing mode is conventional, the intelligence is insufficient, two-dimensional images cannot be optimized based on three-dimensional characteristics, and the image processing effect is limited.
In view of the foregoing, the present application provides an artificial intelligence image processing method and system.
In a first aspect, the present application provides an artificial intelligence image processing method, including:
acquiring an image sequence of a target three-dimensional object according to the three-dimensional image management system;
clustering the image sequence to obtain a plurality of image subsequences, wherein each image subsequence corresponds to a view angle interval;
acquiring a plurality of spatial indexes by spatially analyzing the images in the plurality of image subsequences;
classifying the images in each image subsequence according to the plurality of spatial indexes to obtain a plurality of classification results;
configuring point cloud identification parameters based on the plurality of classification results;
and connecting the point cloud identification system, and carrying out point cloud identification on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object.
In a second aspect, the present application provides an artificial intelligence image processing system, the system comprising:
the sequence acquisition module is used for acquiring an image sequence of a target three-dimensional object according to the three-dimensional image management system;
the sequence clustering module is used for clustering the image sequences to obtain a plurality of image subsequences, wherein each image subsequence corresponds to a view angle interval;
an index obtaining module, configured to obtain a plurality of spatial indexes by performing spatial analysis on images in the plurality of image subsequences;
the image classification module is used for classifying the images in each image subsequence according to the plurality of spatial indexes to obtain a plurality of classification results;
a parameter configuration module for configuring point cloud identification parameters based on the plurality of classification results;
and the point cloud modeling module is used for connecting the point cloud identification system, and performing point cloud identification on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
according to the artificial intelligence image processing method provided by the embodiments of the application, an image sequence of a target three-dimensional object is obtained from the three-dimensional image management system and clustered into a plurality of image subsequences, each corresponding to a view angle interval. Spatial analysis is performed on the images in the image subsequences to obtain a plurality of spatial indexes; the images in each image subsequence are classified to obtain a plurality of classification results, based on which point cloud identification parameters are configured. The point cloud identification system is then connected, and point cloud identification is carried out on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object. This solves the technical problems in the prior art that the image processing mode is conventional, the intelligence is insufficient, two-dimensional images cannot be optimized based on three-dimensional characteristics, and the image processing effect is limited.
Drawings
FIG. 1 is a schematic flow chart of an artificial intelligence image processing method provided by the present application;
FIG. 2 is a schematic diagram illustrating a flow of acquiring a plurality of spatial indexes in an artificial intelligence image processing method according to the present application;
FIG. 3 is a schematic view illustrating a configuration process of point cloud identification parameters in an artificial intelligence image processing method according to the present application;
fig. 4 is a schematic diagram of an artificial intelligence image processing system according to the present application.
Description of the reference numerals: the system comprises a sequence acquisition module 11, a sequence clustering module 12, an index acquisition module 13, an image classification module 14, a parameter configuration module 15 and a point cloud modeling module 16.
Detailed Description
The application provides an artificial intelligence image processing method and system. The image sequence of a target three-dimensional object is clustered to obtain a plurality of image subsequences; spatial analysis is then carried out to obtain a plurality of spatial indexes; the images in each image subsequence are classified to obtain a plurality of classification results, and point cloud identification parameters are configured; a point cloud identification system is connected, and point cloud identification is carried out on the image sequence of the target three-dimensional object to obtain a three-dimensional image modeling result. The method and system are used for solving the technical problems in the prior art that the image processing mode is conventional, the intelligence is insufficient, two-dimensional images cannot be optimized based on three-dimensional characteristics, and the image processing effect is limited.
The first embodiment is as follows:
as shown in fig. 1, the present application provides an artificial intelligence image processing method, which is applied to a three-dimensional image management system, where the three-dimensional image management system is in communication connection with a point cloud identification system, and the method includes:
step S100: acquiring an image sequence of a target three-dimensional object according to the three-dimensional image management system;
specifically, the three-dimensional image has an obvious information coverage advantage compared with a two-dimensional image, information is customized based on three-dimensional characteristics, information variation under the change of a mapping mode is avoided, and meanwhile, the problem of processing the three-dimensional image is solved for the current hotspot. In order to better identify a three-dimensional object, the artificial intelligent image processing method and the three-dimensional influence management system provided by the application provide a general control system for identifying and optimizing a three-dimensional built image, the system is in communication connection with a point cloud identification system, and the point cloud identification system is a management and control system for identifying and extracting characteristics of point cloud data, such as point cloud coordinates, colors, frameworks and the like, by performing three-dimensional image atomization. Specifically, based on the three-dimensional image management system, a target image to be subjected to image processing is selected, key frame identification and screening are performed on extracted images, a screening result is sequenced based on time sequence advancing, and an image sequence of the target three-dimensional object is generated, wherein the image sequence comprises the target object under multiple angles, and provides a basic basis for an information source to be subjected to identification processing and subsequent image coverage characteristic information identification and extraction.
Step S200: clustering the image sequences to obtain a plurality of image subsequences, wherein each image subsequence corresponds to a view angle interval;
further, clustering the image sequence to obtain a plurality of image subsequences, where step S200 of the present application further includes:
step S210: performing visual angle analysis on the image sequence of the target three-dimensional object to obtain a visual angle acquisition threshold;
step S220: optimizing the view angle acquisition threshold to obtain a first K value, wherein K is a positive integer greater than or equal to 0;
step S230: and K-clustering the image sequence of the target three-dimensional object by using the first K value, and outputting an image clustering result, wherein the image clustering result is K clustering results, and the K clustering results correspond to the plurality of image subsequences.
Further, step S220 of the present application further includes:
step S221: carrying out image quantitative analysis on the image sequence of the target three-dimensional object to obtain a quantitative index;
step S222: generating a first optimizing constraint condition according to the quantization index;
step S223: acquiring a preset subsequence quantization index, wherein the preset subsequence quantization index is used for identifying the minimum clustering number in each image subsequence;
step S224: generating a second optimizing constraint condition according to the preset subsequence quantization index;
step S225: and optimizing and constraining the view angle acquisition threshold according to the first optimizing constraint condition and the second optimizing constraint condition.
Specifically, a view angle acquisition threshold is configured and the image sequence of the target three-dimensional object is clustered to obtain an image clustering result. Image extraction is performed on the clustering results contained therein, the images are ordered based on view angle transition, and a plurality of image subsequences are generated, wherein each image subsequence corresponds to a view angle interval, namely the view angle interval determined by the view angle acquisition threshold.
Specifically, the image sequence is obtained by performing image acquisition on the target three-dimensional object, and in order to ensure the information completeness of the target three-dimensional object, the image sequence includes the target three-dimensional object under multiple acquisition angles. The view angle acquisition threshold is the critical view angle value for clustering the image sequence; for example, with 30° as the view angle acquisition threshold, images falling within the same 30° interval are classified into the same class. The view angle acquisition threshold is then constrained by optimization to improve the fit of the subsequent clustering result.
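As a minimal sketch of the 30° example, the following groups capture angles into threshold-wide view angle intervals; the interval-index rule is an assumed simplification of the clustering described here.

```python
def group_by_view_angle(angles, threshold=30.0):
    """Cluster capture angles (in degrees): angles in the same threshold-wide
    interval, e.g. [0, 30) or [30, 60), fall into the same class."""
    groups = {}
    for angle in sorted(angles):
        interval = int(angle // threshold)  # index of the view angle interval
        groups.setdefault(interval, []).append(angle)
    return [groups[k] for k in sorted(groups)]

clusters = group_by_view_angle([5.0, 12.0, 35.0, 50.0, 75.0])
# three intervals: [0, 30), [30, 60), [60, 90)
```

Shrinking or enlarging `threshold` directly changes the number of resulting classes, which is the lever the optimizing constraints below act on.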
Specifically, image quantization measurement is performed on the image sequence of the target three-dimensional object, and the total number of images in the sequence is determined as the quantization index, that is, data describing the total image count. The first optimizing constraint condition is generated based on this quantization index, for example by configuring an interval for the number of clusters. Clustering of the image subsequences is limited based on the preset subsequence quantization index: the image subsequences correspond to cluster groups, and the minimum cluster size within each subsequence, i.e. the number of included images, is set as the preset subsequence quantization index. The second optimizing constraint condition is derived from this index via the subsequence image magnitude; for example, when the magnitude of a divided subsequence exceeds the preset subsequence quantization index, the second optimizing constraint condition is not satisfied, and the preferred images among them are screened into the same clustering result. The view angle acquisition threshold is then adjusted by optimization based on the first and second optimizing constraint conditions so that it matches the expected clustering framework; for example, when the number of clusters in a clustering result is too small, the view angle acquisition threshold can be reduced based on the real-time state. Constraining the view angle acquisition threshold through the configured optimizing constraint conditions further improves how well the configured threshold fits the requirements.
Further, optimization is carried out on the view angle acquisition threshold to obtain the first K value, namely the required number of clusters, where K is a positive integer. Using the first K value as a clustering constraint and combining the view angle acquisition threshold, cluster analysis is performed on the image sequence of the target three-dimensional object, dividing it into K classes as the image clustering result. The K clustering results contained in the image clustering result correspond one-to-one to the plurality of image subsequences, and each image subsequence corresponds to a view angle interval, namely the view angle interval determined by the view angle acquisition threshold. Clustering the images improves their orderliness, facilitates the subsequent targeted processing of each image subsequence, and improves processing efficiency and fault tolerance.
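Under the two constraints above, the choice of the first K value can be sketched as follows; the feasible-K search and its bounds are illustrative assumptions, not the patent's concrete optimizing procedure.

```python
def optimise_k(total_images, min_per_cluster, k_range=(2, 12)):
    """Pick the largest K in the admissible interval (first constraint) such
    that each of the K clusters can still hold at least min_per_cluster
    images (second constraint)."""
    k_lo, k_hi = k_range
    feasible = [k for k in range(k_lo, k_hi + 1)
                if total_images // k >= min_per_cluster]
    return max(feasible) if feasible else k_lo

# 120 images, at least 15 per subsequence
k = optimise_k(total_images=120, min_per_cluster=15)
```

The chosen K then serves as the cluster-count limit for the K-clustering of the image sequence.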
Step S300: acquiring a plurality of spatial indexes by spatially analyzing the images in the plurality of image subsequences;
further, as shown in fig. 2, a plurality of spatial indexes are obtained by spatially analyzing the images in the plurality of image sub-sequences, and step S300 of the present application further includes:
step S310: performing feature recognition on the images in the image subsequences to obtain an image linear feature set;
step S320: acquiring structural representation intensity and picture occupation ratio according to the image linear feature set;
step S330: and carrying out spatial analysis according to the structural representation strength and the picture occupation ratio to obtain the plurality of spatial indexes.
Specifically, the image sequence is clustered to generate the plurality of image subsequences, feature recognition and extraction are performed on each image subsequence, and associated data are then extracted with the structural representation strength and the picture occupation ratio as analysis dimensions. The extracted multi-dimensional data are labeled with the view angle interval of the corresponding image subsequence to facilitate subsequent recognition and distinction, and on this basis the spatial characteristics are mined to generate the plurality of spatial indexes.
Specifically, the plurality of image subsequences are ordered by view angle progression based on their corresponding view angle intervals. The images covered by the first subsequence are extracted and image feature recognition is performed on them, including the three-dimensional coordinate arrangement, color and spatial layout of the images. Illustratively, a spatial coordinate system is established based on the image space region, a plurality of necessary positioning points of the target three-dimensional object recoverable from the images are determined, the positioning points are located by coordinates, and a point cloud coordinate point set is determined. Gradient distribution analysis is performed on features of the same type extracted from the plurality of images in the subsequence to obtain the image linear feature set, which includes the feature information of the plurality of images in each subsequence. Feature recognition is performed on each image subsequence in turn and the recognition results are added to the image linear feature set, which effectively improves the accuracy and completeness of image feature extraction.
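The gradient distribution analysis can be illustrated with a crude per-image descriptor over a 2-D intensity grid; this stands in for the richer feature recognition (coordinates, color, spatial layout) described above and is only an assumed simplification.

```python
def gradient_features(image):
    """Mean horizontal and vertical absolute gradient magnitudes of a 2-D
    intensity grid, a minimal stand-in for linear (edge-like) features."""
    h, w = len(image), len(image[0])
    gx = sum(abs(image[r][c + 1] - image[r][c])
             for r in range(h) for c in range(w - 1))
    gy = sum(abs(image[r + 1][c] - image[r][c])
             for r in range(h - 1) for c in range(w))
    return {"gx_mean": gx / (h * (w - 1)), "gy_mean": gy / ((h - 1) * w)}

# a 3x3 patch with an L-shaped bright edge
features = gradient_features([[0, 0, 1],
                              [0, 0, 1],
                              [1, 1, 1]])
```

Collecting such descriptors per image, grouped by subsequence, yields a concrete form of the image linear feature set.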
Further, the structural representation strength and the picture occupation ratio are used as spatial analysis dimensions, and associated data are extracted from the image linear feature set along these dimensions; for example, the region area of the target three-dimensional object in the image is determined, and the ratio of this region area to the total area of the image is taken as the picture occupation ratio. The structural representation strength and picture occupation ratio obtained from the feature analysis, each carrying the view angle interval identifier of its image subsequence, are then used as the evaluation directions, and the spatial characteristics of the target three-dimensional object are mined to generate the plurality of spatial indexes.
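The picture occupation ratio defined above (object region area over total image area) reduces to a single division given a binary object mask:

```python
def picture_occupation_ratio(mask):
    """Ratio of the target object's region area to the total image area,
    given a binary mask where 1 marks an object pixel."""
    total_area = len(mask) * len(mask[0])
    object_area = sum(sum(row) for row in mask)
    return object_area / total_area

ratio = picture_occupation_ratio([[0, 1, 1],
                                  [0, 1, 0]])  # 3 object pixels out of 6
```

In practice the mask would come from segmenting the target object; the 2x3 grid here is purely illustrative.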
Further, the spatial analysis is performed according to the structural characterization strength and the frame occupation ratio, and step S330 of the present application further includes:
step S331: building a spatial analysis model, wherein the spatial analysis model is obtained by training a plurality of groups of training data to convergence, and the plurality of groups of training data comprise the structural representation strength, the picture occupation ratio and an index for identifying the spatial degree;
step S332: inputting the structural representation intensity and the picture ratio corresponding to the image subsequences into the spatial analysis model to obtain a first spatial index and a second spatial index \8230, wherein N is the same as the number of the image subsequences;
step S333: and outputting the plurality of spatial indexes according to the first spatial index, the second spatial index \8230andthe Nth spatial index.
Specifically, multiple sets of sample record data are obtained through big data research statistics, including multiple sets of target image sequences with their corresponding structural representation strengths, picture occupation ratios, and indexes identifying the spatial degree, the last being historically identified index data matching the sample records, such as spatial position, distance, orientation and topological relation. The structural representation strength and the picture occupation ratio serve as identification sample data, and the index identifying the spatial degree serves as decision sample data; mapping the two together yields multiple groups of training data. The training data are divided into a training set and a test set based on a preset division ratio, and neural network training is performed on the training set to construct the spatial analysis model. The test set is then fed to the spatial analysis model, the index identifying the spatial degree output by the model is compared against the corresponding data in the test set, and when the deviation between the two is smaller than a preset deviation threshold, i.e. smaller than the critical value limiting data deviation, the current model is considered to operate well and the constructed spatial analysis model is obtained; otherwise, the samples are divided again and model training and testing are repeated until the analysis accuracy of the model is qualified.
Further, the structural representation strength and the picture occupation ratio corresponding to the plurality of image subsequences are determined and input into the spatial analysis model, model identification, analysis and decision are carried out, and the corresponding spatial indexes are output, including the first spatial index, the second spatial index, …, and the Nth spatial index, which correspond one-to-one to the plurality of image subsequences. Performing image spatial index analysis through the constructed model effectively improves the accuracy and objectivity of the analysis result, namely the determined spatial indexes.
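The train/validate loop described above can be sketched as follows. A one-weight linear scorer stands in for the neural network, and the split ratio, deviation threshold and retry budget are illustrative assumptions.

```python
import random

def train_spatial_model(samples, split=0.8, max_deviation=0.1, seed=0):
    """Sketch of the train/validate loop: each sample is a
    (structural_strength, occupation_ratio, spatial_index) triple. Training
    repeats with a fresh sample division until the test-set deviation falls
    below the preset deviation threshold (or the retry budget runs out)."""
    rng = random.Random(seed)
    model = None
    for _ in range(10):  # re-divide the samples and retrain on failure
        data = samples[:]
        rng.shuffle(data)
        cut = int(len(data) * split)
        train, test = data[:cut], data[cut:]
        # "train": fit index ~ w * (strength + ratio) via a ratio of sums
        w = sum(t[2] for t in train) / sum(t[0] + t[1] for t in train)
        model = lambda s, r, w=w: w * (s + r)
        deviation = max((abs(model(s, r) - idx) for s, r, idx in test),
                        default=0.0)
        if deviation < max_deviation:
            return model
    return model

samples = [(1.0, 1.0, 1.0), (2.0, 0.0, 1.0), (0.0, 2.0, 1.0),
           (3.0, 1.0, 2.0), (1.0, 3.0, 2.0)]
model = train_spatial_model(samples)
```

The patent's model is a trained neural network; the scorer here only demonstrates the split-train-validate-retry control flow around it.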
Step S400: classifying the images in each image subsequence according to the plurality of spatial indexes to obtain a plurality of classification results;
step S500: configuring point cloud identification parameters based on the plurality of classification results;
Specifically, the plurality of spatial indexes are acquired from the structural representation strength and the picture occupation ratio by building the spatial analysis model. A spatial division standard is defined as the critical value for dividing images into high and low spatiality. With the plurality of spatial indexes as the analysis reference, the images in each image subsequence are divided into two groups based on the spatial division standard, producing the plurality of classification results, which correspond to the plurality of image subsequences. When the spatiality of an image is low, acquiring a sparse point cloud is unnecessary, and a dense point cloud is acquired directly, which still meets the subsequent modeling requirement.
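The two-group division by a spatial division standard can be sketched as follows; the 0.6 critical value is an arbitrary illustrative choice, not a value given in the patent.

```python
def classify_by_spatiality(spatial_indexes, standard=0.6):
    """Divide subsequences into a high-spatiality group (sparse and dense
    point cloud path) and a low-spatiality group (dense-only path), using
    the spatial division standard as the critical value."""
    high = [i for i, v in enumerate(spatial_indexes) if v >= standard]
    low = [i for i, v in enumerate(spatial_indexes) if v < standard]
    return {"high": high, "low": low}

classification = classify_by_spatiality([0.9, 0.3, 0.7, 0.5])
```

The group labels then steer the point cloud parameter configuration: high-spatiality classes go through sparse reconstruction first, low-spatiality classes go straight to dense reconstruction.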
Further, based on the plurality of classification results, the point cloud identification parameters are configured by a point cloud parameter configuration module, which comprises a first point cloud reconstruction module and a second point cloud reconstruction module used to configure sparse point cloud parameters and dense point cloud parameters, respectively. The configured point cloud parameters carry classification marks; the point cloud parameters are integrated and attributed to generate the point cloud identification parameters, and acquiring them lays a solid foundation for the subsequent three-dimensional modeling.
Step S600: and connecting the point cloud identification system, and carrying out point cloud identification on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object.
Specifically, the point cloud identification system is an auxiliary functional system for identifying and extracting point cloud parameters. The point cloud identification system is connected, the point cloud identification parameters are taken as the identification standard, and point cloud identification is performed on the image sequence of the target three-dimensional object. Monomer modeling is performed based on the point cloud identification parameters of the segmented structural monomers, and the monomer models are then summarized and integrated to generate the three-dimensional image modeling result of the target three-dimensional object. This maximally ensures the consistency of the three-dimensional image modeling result with the initially acquired image sequence and improves the fidelity of the modeling; the constructed three-dimensional image modeling result is then processed in a targeted manner based on the image processing requirement to complete the optimization of the image.
Further, as shown in fig. 3, based on the classification results, a point cloud identification parameter is configured, and step S500 of the present application further includes:
step S510: acquiring a point cloud parameter configuration module, wherein the point cloud parameter configuration module comprises a first point cloud reconstruction module and a second point cloud reconstruction module;
step S520: configuring the plurality of classification results according to the first point cloud reconstruction module to obtain sparse point cloud reconstruction parameters;
step S530: configuring the plurality of classification results according to the second point cloud reconstruction module to obtain dense point cloud reconstruction parameters;
step S540: and generating the point cloud identification parameters based on the sparse point cloud reconstruction parameters and the dense point cloud reconstruction parameters.
Further, after generating the point cloud identification parameters, the method further includes step S550, which includes:
step S551: evaluating the three-dimensional image modeling result to obtain a modeling effect evaluation index;
step S552: and acquiring point cloud parameter adjustment information according to the modeling effect evaluation index, and performing feedback optimization on the point cloud identification parameters by using the point cloud parameter adjustment information.
Specifically, the point cloud parameters are the modeling parameters required for three-dimensional modeling, and the point cloud parameter configuration module is a functional module for analyzing and configuring point cloud parameters, which can be obtained in a conventional manner. The module comprises a first point cloud reconstruction module and a second point cloud reconstruction module. The first point cloud reconstruction module configures the sparse point cloud parameters: the high-spatiality class data are extracted from the plurality of classification results and input into the first point cloud reconstruction module, point cloud parameter identification and conversion are performed, and the sparse point cloud reconstruction parameters are generated; these are the parameters required for modeling from multi-view images and cover less information. The plurality of classification results are then input into the second point cloud reconstruction module, which configures the dense point cloud parameters, and identification and conversion are performed on the classification results in turn to generate the dense point cloud reconstruction parameters. The sparse point cloud reconstruction parameters and the dense point cloud reconstruction parameters are mapped to each other to generate a plurality of parameter sequences, for example sparse point cloud reconstruction parameter-dense point cloud reconstruction parameter pairs, with 0 representing an empty slot in a sequence.
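The sparse-dense parameter pairing with 0 for an empty slot can be sketched as follows; the string-valued parameters are hypothetical placeholders for the real reconstruction parameters.

```python
def build_parameter_sequences(classified):
    """Pair sparse and dense reconstruction parameters per classification
    result: high-spatiality classes get both, low-spatiality classes get a
    dense parameter only, with 0 marking the empty sparse slot."""
    sequences = []
    for label, is_high_spatiality in classified:
        sparse = f"sparse:{label}" if is_high_spatiality else 0
        sequences.append((sparse, f"dense:{label}"))
    return sequences

sequences = build_parameter_sequences([("A", True), ("B", False)])
```

Integrating these pairs with their classification marks yields the point cloud identification parameters handed to the point cloud identification system.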
Further, the three-dimensional image modeling result of the target three-dimensional object is evaluated to obtain the modeling effect evaluation index, i.e. data evaluating the modeling effect against the expected state. Based on the modeling effect evaluation index, the point cloud parameters are optimized toward the expected modeling effect; for example, point cloud segmentation is performed to determine the point cloud parameters of each structural monomer, facilitating subsequent targeted modeling. The point cloud parameter adjustment information is obtained, the point cloud identification parameters are traversed, and the corresponding parameters are adjusted and optimized by feedback based on the adjustment information, which improves how well the finally determined point cloud identification parameters fit the images and further improves the fidelity of the subsequent modeling effect.
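The feedback optimization loop can be sketched as follows; the single scalar parameter, the fixed adjustment step and the toy scoring function are all illustrative assumptions rather than the patent's concrete adjustment rule.

```python
def feedback_optimise(params, evaluate, step=0.1, target=0.9, max_rounds=20):
    """Closed-loop tuning: score the modeling result, then nudge each point
    cloud parameter by a fixed step until the modeling effect evaluation
    index reaches the target (or the round budget runs out)."""
    score = evaluate(params)
    rounds = 0
    while score < target and rounds < max_rounds:
        params = {k: v + step for k, v in params.items()}
        score = evaluate(params)
        rounds += 1
    return params, score

# toy evaluation: the modeling score simply tracks a density parameter
tuned, score = feedback_optimise({"density": 0.5},
                                 evaluate=lambda p: min(1.0, p["density"]))
```

A real evaluator would compare the reconstructed model against the acquired image sequence; the fixed-step update merely demonstrates the evaluate-adjust-re-evaluate cycle.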
Embodiment 2:

Based on the same inventive concept as the artificial intelligence image processing method in the foregoing embodiment, as shown in FIG. 4, the present application provides an artificial intelligence image processing system, the system comprising:
a sequence obtaining module 11, where the sequence obtaining module 11 is configured to obtain an image sequence of a target three-dimensional object according to the three-dimensional image management system;
the sequence clustering module 12 is configured to cluster the image sequences to obtain a plurality of image subsequences, where each image subsequence corresponds to a view angle interval;
an index obtaining module 13, where the index obtaining module 13 is configured to obtain a plurality of spatial indexes by performing spatial analysis on the images in the plurality of image sub-sequences;
the image classification module 14 is configured to classify images in each image subsequence according to the plurality of spatial indexes, and obtain a plurality of classification results;
a parameter configuration module 15, wherein the parameter configuration module 15 is configured to configure a point cloud identification parameter based on the plurality of classification results;
and the point cloud modeling module 16 is used for connecting the point cloud identification system, and performing point cloud identification on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object.
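The six modules above form a pipeline: sequence acquisition, view-angle clustering, spatial analysis, classification, parameter configuration, and point cloud modeling. A minimal, runnable sketch of modules 12-15 is given below; images are stand-in `(view_angle_degrees, detail_score)` tuples, and every helper is a toy placeholder for its module — none of the function names, thresholds, or scoring rules come from the patent.

```python
# Hypothetical end-to-end sketch of the pipeline formed by modules 12-15.

def cluster_by_view_angle(images, k):
    """Module 12: bucket images into k equal view-angle intervals (0-360)."""
    width = 360.0 / k
    buckets = [[] for _ in range(k)]
    for angle, detail in images:
        buckets[min(int(angle // width), k - 1)].append((angle, detail))
    return buckets

def spatiality_index(subsequence):
    """Module 13: placeholder spatiality index (mean detail score)."""
    if not subsequence:
        return 0.0
    return sum(d for _, d in subsequence) / len(subsequence)

def classify(indexes, threshold=0.5):
    """Module 14: label each subsequence high- or low-spatiality."""
    return ["high" if idx >= threshold else "low" for idx in indexes]

def configure_params(labels):
    """Module 15: sparse params from high-spatiality classes, dense from all."""
    sparse = [i for i, label in enumerate(labels) if label == "high"]
    dense = list(range(len(labels)))
    return {"sparse": sparse, "dense": dense}

def process(images, k=3):
    """Modules 12-15 chained; module 16 (modeling itself) is out of scope."""
    subs = cluster_by_view_angle(images, k)
    indexes = [spatiality_index(s) for s in subs]
    labels = classify(indexes)
    return configure_params(labels)
```

Running `process([(10, 0.9), (130, 0.2), (250, 0.8)], k=3)` buckets one image per 120° interval and flags the first and third subsequences as high-spatiality sources for the sparse parameters.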
Further, the system further comprises:
the threshold acquisition module is used for performing view angle analysis on the image sequence of the target three-dimensional object to obtain a view angle acquisition threshold;

the threshold optimizing module is used for optimizing the view angle acquisition threshold to obtain a first K value, wherein K is a positive integer greater than or equal to 1;
and the image clustering module is used for performing K-clustering on the image sequence of the target three-dimensional object by using the first K value and outputting an image clustering result, wherein the image clustering result is K clustering results, and the K clustering results correspond to the image subsequences.
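The patent does not specify the clustering algorithm behind "K-clustering"; a plain 1-D k-means over per-image view angles, with K taken from the optimized view-angle acquisition threshold, is one common reading and is sketched below under that assumption.

```python
# Hypothetical sketch: 1-D k-means over view angles (degrees),
# producing K clustering results that correspond to the image
# subsequences. Initialization and iteration count are arbitrary.

def k_cluster(view_angles, k, iters=20):
    """Cluster view angles into k groups; returns a list of k lists."""
    # Initialize centers evenly across the observed angle range.
    lo, hi = min(view_angles), max(view_angles)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        # Assignment step: each angle joins its nearest center.
        clusters = [[] for _ in range(k)]
        for a in view_angles:
            nearest = min(range(k), key=lambda i: abs(a - centers[i]))
            clusters[nearest].append(a)
        # Update step: move each center to its cluster mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters
```

With two well-separated groups of angles, `k_cluster([10, 12, 200, 205], 2)` recovers the two view-angle intervals.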
Further, the system further comprises:
the quantization index acquisition module is used for carrying out image quantization analysis on the image sequence of the target three-dimensional object to acquire a quantization index;
a first optimizing constraint condition obtaining module, configured to generate a first optimizing constraint condition according to the quantization index;
the image processing device comprises a preset quantization index obtaining module, a preset quantization index obtaining module and a processing module, wherein the preset quantization index obtaining module is used for obtaining preset subsequence quantization indexes, and the preset subsequence quantization indexes are used for identifying the minimum clustering number in each image subsequence;
the second optimizing constraint condition obtaining module is used for generating a second optimizing constraint condition according to the preset subsequence quantization index;
and the optimizing constraint module is used for optimizing and constraining the view angle acquisition threshold according to the first optimizing constraint condition and the second optimizing constraint condition.
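The two constraints above can be read as bounding K from both sides: the quantization index of the whole sequence caps how many clusters are worth forming, while the preset per-subsequence quantization index guarantees each cluster a minimum number of images. A toy search under that assumed reading:

```python
# Hypothetical sketch: choose the largest K (up to a cap derived from
# the first constraint) such that every subsequence can still hold the
# preset minimum number of images (second constraint).

def optimize_k(num_images, max_k, min_per_subsequence):
    """Return the largest feasible K, falling back to 1."""
    for k in range(max_k, 0, -1):
        if num_images // k >= min_per_subsequence:
            return k
    return 1
```

For 20 images, a cap of 8, and a minimum of 3 images per subsequence, the search settles on K = 6 (20 // 6 = 3, while 20 // 7 = 2 would violate the minimum).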
Further, the system further comprises:
the characteristic identification module is used for carrying out characteristic identification on the images in the image subsequences to obtain an image linear characteristic set;
the characteristic parameter acquisition module is used for acquiring structural representation strength and picture occupation ratio according to the image linear feature set;
a spatial index obtaining module, configured to perform spatial analysis according to the structural representation strength and the picture occupation ratio, and obtain the plurality of spatial indexes.
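The three modules above extract linear features, then derive a structural characterization strength and a picture occupation ratio from them. The toy below works on a binary pixel grid and counts only horizontal runs as "linear features"; a real system would use an edge or line detector (e.g. a Hough transform). Both derived quantities here are invented readings of the patent's terms, purely for illustration.

```python
# Hypothetical sketch: linear features as horizontal runs of 1s;
# structural strength = fraction of filled pixels lying on such runs;
# picture occupation ratio = fraction of filled pixels in the frame.

def linear_features(grid, min_run=3):
    """Return lengths of horizontal runs of 1s at least min_run long."""
    runs = []
    for row in grid:
        length = 0
        for cell in row + [0]:          # sentinel closes a trailing run
            if cell:
                length += 1
            else:
                if length >= min_run:
                    runs.append(length)
                length = 0
    return runs

def spatiality_inputs(grid, min_run=3):
    """Return (structural_strength, occupancy_ratio) for one image."""
    total = sum(sum(row) for row in grid)
    pixels = len(grid) * len(grid[0])
    strength = sum(linear_features(grid, min_run)) / total if total else 0.0
    ratio = total / pixels
    return strength, ratio
```

On a 3x4 grid with one 4-pixel line and one stray pixel, 4 of the 5 filled pixels lie on a run, giving a strength of 0.8 and an occupancy ratio of 5/12.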
Further, the system further comprises:
the model building module is used for building a spatial analysis model, wherein the spatial analysis model is obtained by training a plurality of groups of training data to convergence, and the plurality of groups of training data comprise the structural representation strength, the picture occupation ratio and indexes for identifying the spatial degree;
the model analysis module is used for inputting the structural representation intensity and the picture ratio corresponding to the plurality of image subsequences into the spatial analysis model to obtain a first spatial index, a second spatial index, …, and an Nth spatial index, wherein N is the same as the number of the image subsequences;

and the index output module is used for outputting the plurality of spatial indexes according to the first spatial index, the second spatial index, …, and the Nth spatial index.
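The patent trains the spatial analysis model on (structural strength, picture ratio, spatiality label) triples until convergence. As a stand-in for that converged model, the sketch below uses a fixed linear combination; the weights are invented for illustration and carry no meaning from the patent.

```python
# Hypothetical stand-in for the converged spatial analysis model:
# score each subsequence from its two inputs, then map N input pairs
# to the N spatial indexes described above.

def spatiality_model(structural_strength, picture_ratio,
                     w_strength=0.7, w_ratio=0.3):
    """Score one image subsequence; higher means more spatial."""
    return w_strength * structural_strength + w_ratio * picture_ratio

def spatiality_indexes(pairs):
    """Apply the model to N (strength, ratio) pairs -> N spatial indexes."""
    return [spatiality_model(s, r) for s, r in pairs]
```

With these assumed weights, a subsequence with maximal structural strength and no occupancy scores 0.7, while the opposite extreme scores 0.3.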
Further, the system further comprises:
the configuration module is used for acquiring a point cloud parameter configuration module, wherein the point cloud parameter configuration module comprises a first point cloud reconstruction module and a second point cloud reconstruction module;
a sparse point cloud reconstruction parameter acquisition module, configured to configure the plurality of classification results according to the first point cloud reconstruction module, to acquire sparse point cloud reconstruction parameters;
the dense point cloud reconstruction parameter acquisition module is used for configuring the plurality of classification results according to the second point cloud reconstruction module to acquire dense point cloud reconstruction parameters;
a parameter generation module to generate the point cloud identification parameter based on the sparse point cloud reconstruction parameter and the dense point cloud reconstruction parameter.
Further, the system further comprises:
the result evaluation module is used for evaluating the three-dimensional image modeling result to obtain a modeling effect evaluation index;
and the parameter optimization module is used for acquiring point cloud parameter adjustment information according to the modeling effect evaluation index and performing feedback optimization on the point cloud identification parameters by using the point cloud parameter adjustment information.
In the present specification, through the foregoing detailed description of the artificial intelligence image processing method, those skilled in the art can clearly understand the artificial intelligence image processing system of this embodiment. Since the apparatus disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is relatively brief, and relevant details can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. An artificial intelligence image processing method, applied to a three-dimensional image management system communicatively connected to a point cloud identification system, the method comprising:
acquiring an image sequence of a target three-dimensional object according to the three-dimensional image management system;
clustering the image sequences to obtain a plurality of image subsequences, wherein each image subsequence corresponds to a view angle interval;
acquiring a plurality of spatial indexes by spatially analyzing the images in the plurality of image subsequences;
classifying the images in each image subsequence according to the plurality of spatial indexes to obtain a plurality of classification results;
configuring point cloud identification parameters based on the plurality of classification results;
and connecting the point cloud identification system, and carrying out point cloud identification on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object.
2. The method of claim 1, wherein the image sequence is clustered to obtain a plurality of image subsequences, the method further comprising:
performing view angle analysis on the image sequence of the target three-dimensional object to obtain a view angle acquisition threshold;

optimizing the view angle acquisition threshold to obtain a first K value, wherein K is a positive integer greater than or equal to 1;
and performing K-clustering on the image sequence of the target three-dimensional object by using the first K value, and outputting an image clustering result, wherein the image clustering result is K clustering results, and the K clustering results correspond to the plurality of image subsequences.
3. The method of claim 2, wherein the method further comprises:
carrying out image quantitative analysis on the image sequence of the target three-dimensional object to obtain a quantitative index;
generating a first optimizing constraint condition according to the quantization index;
acquiring a preset subsequence quantization index, wherein the preset subsequence quantization index is used for identifying the minimum clustering number in each image subsequence;
generating a second optimization constraint condition according to the preset subsequence quantization index;
and optimizing and constraining the view angle acquisition threshold according to the first optimizing constraint condition and the second optimizing constraint condition.
4. The method of claim 1, wherein the plurality of spatial indexes are obtained by performing spatial analysis on the images in the plurality of image subsequences, the method further comprising:
performing feature recognition on the images in the image subsequences to obtain an image linear feature set;
acquiring structural representation intensity and picture occupation ratio according to the image linear feature set;
and performing spatial analysis according to the structural characterization strength and the picture occupation ratio to acquire the plurality of spatial indexes.
5. The method of claim 4, wherein the spatial analysis is performed according to the structural characterization strength and the picture occupation ratio, the method further comprising:
building a spatial analysis model, wherein the spatial analysis model is obtained by training a plurality of groups of training data to convergence, and the plurality of groups of training data comprise the structural representation strength, the picture occupation ratio and an index for identifying the spatial degree;
inputting the structural representation intensity and the picture ratio corresponding to the plurality of image subsequences into the spatial analysis model to obtain a first spatial index, a second spatial index, …, and an Nth spatial index, wherein N is the same as the number of the image subsequences;

and outputting the plurality of spatial indexes according to the first spatial index, the second spatial index, …, and the Nth spatial index.
6. The method of claim 1, wherein point cloud identification parameters are configured based on the plurality of classification results, the method further comprising:
acquiring a point cloud parameter configuration module, wherein the point cloud parameter configuration module comprises a first point cloud reconstruction module and a second point cloud reconstruction module;
configuring the plurality of classification results according to the first point cloud reconstruction module to obtain sparse point cloud reconstruction parameters;
configuring the plurality of classification results according to the second point cloud reconstruction module to obtain dense point cloud reconstruction parameters;
and generating the point cloud identification parameters based on the sparse point cloud reconstruction parameters and the dense point cloud reconstruction parameters.
7. The method of claim 6, wherein after generating the point cloud identification parameters, the method further comprises:
evaluating the three-dimensional image modeling result to obtain a modeling effect evaluation index;
and acquiring point cloud parameter adjustment information according to the modeling effect evaluation index, and performing feedback optimization on the point cloud identification parameters by using the point cloud parameter adjustment information.
8. An artificial intelligence image processing system, wherein the system is communicatively coupled to a point cloud identification system, the system comprising:
the sequence acquisition module is used for acquiring an image sequence of a target three-dimensional object according to the three-dimensional image management system;
the sequence clustering module is used for clustering the image sequences to obtain a plurality of image subsequences, wherein each image subsequence corresponds to a view angle interval;
an index obtaining module, configured to obtain a plurality of spatial indexes by spatially analyzing images in the plurality of image subsequences;
the image classification module is used for classifying the images in each image subsequence according to the plurality of spatial indexes to obtain a plurality of classification results;
a parameter configuration module for configuring point cloud identification parameters based on the plurality of classification results;
and the point cloud modeling module is used for connecting the point cloud identification system, and performing point cloud identification on the image sequence of the target three-dimensional object according to the point cloud identification parameters to obtain a three-dimensional image modeling result of the target three-dimensional object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310086531.6A CN115797675B (en) | 2023-02-09 | 2023-02-09 | Artificial intelligence image processing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115797675A true CN115797675A (en) | 2023-03-14 |
CN115797675B CN115797675B (en) | 2023-06-09 |
Family
ID=85430639
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310086531.6A Active CN115797675B (en) | 2023-02-09 | 2023-02-09 | Artificial intelligence image processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797675B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200242330A1 (en) * | 2017-10-05 | 2020-07-30 | Applications Mobiles Overview Inc. | Method for object recognition |
CN112037320A (en) * | 2020-09-01 | 2020-12-04 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN113066064A (en) * | 2021-03-29 | 2021-07-02 | 郑州铁路职业技术学院 | Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence |
US20220327773A1 (en) * | 2021-04-09 | 2022-10-13 | Georgetown University | Facial recognition using 3d model |
CN115661376A (en) * | 2022-12-28 | 2023-01-31 | 深圳市安泽拉科技有限公司 | Target reconstruction method and system based on unmanned aerial vehicle image |
2023-02-09: Application CN202310086531.6A filed; granted as patent CN115797675B (status: Active).
Non-Patent Citations (1)
Title |
---|
LONG Yuhang; WU Desheng: "Three-dimensional virtual reconstruction simulation of spatial feature information in high-altitude remote sensing images", no. 12, pages 57-61 |
Also Published As
Publication number | Publication date |
---|---|
CN115797675B (en) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104992223B (en) | Intensive population estimation method based on deep learning | |
CN108573225A (en) | A kind of local discharge signal mode identification method and system | |
CN111028939B (en) | Multigroup intelligent diagnosis system based on deep learning | |
Guo et al. | Automatic crack distress classification from concrete surface images using a novel deep-width network architecture | |
CN114092832A (en) | High-resolution remote sensing image classification method based on parallel hybrid convolutional network | |
RU2771515C1 (en) | Method and system for determining the location of a high-speed train in a navigation blind spot based on meteorological parameters | |
CN111553209B (en) | Driver behavior recognition method based on convolutional neural network and time sequence diagram | |
CN112183643B (en) | Hard rock tension-shear fracture identification method and device based on acoustic emission | |
CN111028319A (en) | Three-dimensional non-photorealistic expression generation method based on facial motion unit | |
CN109658347A (en) | Data enhancement methods that are a kind of while generating plurality of picture style | |
CN115049629A (en) | Multi-mode brain hypergraph attention network classification method based on line graph expansion | |
CN109584203A (en) | Reorientation image quality evaluating method based on deep learning and semantic information | |
CN114360030A (en) | Face recognition method based on convolutional neural network | |
CN110751191A (en) | Image classification method and system | |
CN111242028A (en) | Remote sensing image ground object segmentation method based on U-Net | |
CN117313516A (en) | Fermentation product prediction method based on space-time diagram embedding | |
CN115797675B (en) | Artificial intelligence image processing method and system | |
CN117292750A (en) | Cell type duty ratio prediction method, device, equipment and storage medium | |
CN117788810A (en) | Learning system for unsupervised semantic segmentation | |
CN116108751A (en) | Material stress-strain curve prediction model based on graph neural network, construction method and prediction method thereof | |
CN115170793A (en) | Small sample image segmentation self-calibration method for industrial product quality inspection | |
CN114882580A (en) | Measuring method for motion action consistency based on deep learning | |
Di Carlo et al. | Generating 3D building volumes for a given urban context using Pix2Pix GAN | |
CN113537240A (en) | Deformation region intelligent extraction method and system based on radar sequence image | |
CN110232333A (en) | Behavior recognition system model training method, behavior recognition method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||