CN111681309A - Edge computing platform for generating voxel data and edge image characteristic ID matrix - Google Patents


Publication number
CN111681309A
CN111681309A (application CN202010511907.XA)
Authority
CN
China
Prior art keywords
image
edge
feature
module
matrix
Prior art date
Legal status
Granted
Application number
CN202010511907.XA
Other languages
Chinese (zh)
Other versions
CN111681309B (en)
Inventor
朱立新
白忠可
宿金超
孙驰
Current Assignee
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date
Filing date
Publication date
Application filed by Beijing Normal University filed Critical Beijing Normal University
Priority to CN202010511907.XA priority Critical patent/CN111681309B/en
Publication of CN111681309A publication Critical patent/CN111681309A/en
Application granted granted Critical
Publication of CN111681309B publication Critical patent/CN111681309B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an edge computing platform for generating voxel data and an edge image feature ID matrix, comprising a feature extraction module, an association fusion module, an image ID module, a spatial image voxelization module and a structured storage module. The feature extraction module generates a minimal subset of edge features from the features of each acquired image and combines these minimal subsets to form the feature subset of the image; the association fusion module performs correlation and non-correlation processing on the feature subset of the image to form an edge correlation relationship; the image ID module performs ID processing on the edge correlation relationship to form an edge image ID matrix set; the spatial image voxelization module forms voxels from the edge image ID matrix set for three-dimensional modeling; and the structured storage module stores the ID matrix and the voxels locally or transmits them to the cloud. The voxel data and ID matrix generated by the invention can be used to construct a three-dimensional environment quickly, shorten modeling response time, improve bandwidth availability and greatly improve user experience.

Description

Edge computing platform for generating voxel data and edge image characteristic ID matrix
Technical Field
The invention relates to the technical field of image processing, in particular to an edge computing platform for generating voxel data and an edge image characteristic ID matrix.
Background
With the accelerating innovation of cloud computing and artificial intelligence (AI) applications, more and more data, such as terminal images, must be transmitted to cloud servers for processing. Although 5G technology is gradually emerging, the sheer volume and unprecedented complexity of this data exceed the capabilities of traditional networks and infrastructure, and sending the data generated by diverse terminal devices to a centralized data center or the cloud often runs into bandwidth and latency problems. Edge computing can alleviate both problems to a significant extent: edge computing algorithms for imaging and modeling generally offer a more efficient alternative by processing and analyzing data closer to its source. Because data is not transmitted over a network to a cloud or data center for processing, latency is significantly reduced. Future mobile edge computing on 5G networks is expected to support faster, more comprehensive data analysis, creating opportunities to gain deeper insight, shorten response times and improve customer experience.
Current approaches on the market that use image processing algorithms to model three-dimensional environments generally rely on VR technology: large numbers of images are captured and post-processed and synthesized in the cloud, so massive image data must be handled. The traditional single-surface modeling used to save time and improve performance may not be feasible, because users can observe objects at very close range, which may require high-resolution material. More refined processing can be more accurate and detailed, but sacrifices a great deal of time; even a small project typically requires substantial manpower and time, relying on data-scanning techniques to extract the information in photographs and generate high-precision three-dimensional environment voxels. Conventional three-dimensional environment construction builds a learning environment from triangular meshes, height fields, or geographic mapping structures such as BSP trees, but these are costly, represent only surfaces, are hard to modify or limited in applicability, and involve much wasted work in the process. In addition, many edge computing platforms in the image recognition pipeline merely preprocess image data and cannot control feature relevance well; that is, the problem of how edge computing can remotely assign IDs to image features remains unsolved.
Disclosure of Invention
To overcome the defects of the prior art, an Atlas microprocessor controls the camera and, in combination with the system's edge computing algorithm, extracts image association information. This solves the problem of acquiring voxel data during three-dimensional modeling of the learning environment, quickly obtains the data needed to construct the learning environment, speeds up its three-dimensional modeling, associates and identifies image features, shortens cloud processing response time through local edge computation, and increases bandwidth availability.
In order to achieve the above object, the present invention is achieved by the following technical solutions.
An edge computing platform that generates voxel data and an edge image feature ID matrix, comprising: a feature extraction module, an association fusion module, an image ID module, a spatial image voxelization module and a structured storage module; wherein,
the characteristic extraction module is used for extracting the characteristics of the acquired images, generating the minimum subset of the edge characteristics, and combining the minimum subsets of the edge characteristics of all the images to form the characteristic subset of the images;
the association fusion module is used for performing linear and nonlinear correlation fusion processing on the feature subset of the image to generate an edge correlation relationship;
the image ID module is used for carrying out ID processing on the edge correlation relationship to generate an edge image ID matrix set;
the spatial image voxelization module is used for generating a voxel according to the edge image ID matrix set so as to perform three-dimensional modeling;
and the structured storage module is used for locally storing the ID matrix and the voxel or transmitting the ID matrix or the voxel to the cloud.
Further, the step of generating the minimum subset of edge features in the feature extraction module includes:
s21, extracting image features of the image, wherein the image features comprise color features, shape features, texture features, spatial relation features and graphic significance features;
and S22, expressing the color, shape, texture, spatial-relation and graphic significance features each in feature matrix form, and generating the minimal subset of edge features of the image from these feature matrices.
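Steps S21 and S22 can be sketched as follows. This is a minimal, illustrative Python/NumPy sketch, not the patent's implementation; the five per-pixel extractors below are simple stand-in proxies (an assumption) for the color, shape, texture, spatial-relation and graphic significance features named in S21.

```python
import numpy as np

def minimal_edge_feature_subset(image: np.ndarray) -> np.ndarray:
    """Stack five per-pixel feature maps into one H x W x 5 array,
    a toy 'minimal subset of edge features' for a single image.
    Each extractor is a crude proxy for the feature type it names."""
    h, w = image.shape
    color = image / 255.0                           # normalized intensity as a color proxy
    gy, gx = np.gradient(image.astype(float))       # gradients as a shape/edge proxy
    shape = np.hypot(gx, gy)
    texture = np.abs(image - np.roll(image, 1, 1))  # local variation as a texture proxy
    ys, xs = np.mgrid[0:h, 0:w]
    spatial = (ys * w + xs) / (h * w)               # normalized spatial position
    significance = shape * texture                  # saliency proxy
    return np.stack([color, shape, texture, spatial, significance], axis=-1)
```

Each of the five dimensions corresponds to one feature matrix, matching the 5-dimensional minimal subset described above.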
Further, in the association fusion module, the step of performing linear fusion and nonlinear fusion on the feature subset of the image to generate the edge correlation relationship includes:
s31, processing the minimal subset of the edge characteristics of the image in parallel by using a linear method, a nonlinear method and/or a flow pattern learning method;
s32, performing optimal orthogonal transformation on the processing result of the step S31, performing variance operation on the feature correlation relation, and sorting the features after eliminating the features with the maximum variance;
s33, performing high-dimensional projection on the matrix obtained after feature sorting and obtaining the vector space for optimal image feature judgment to extract classification information and compress feature space dimensions, ensuring the maximum inter-class distance and the minimum intra-class distance of the feature subsets of the image in the transformed subspace after projection, and generating the representation of the feature subsets of the image in the vector space for optimal image feature judgment according to the distance relation or dissimilarity relation between the feature subsets of the image;
and S34, performing principal component analysis on the high-dimensional projection matrix formed by all the images, removing redundancy and errors, and associating the linear characteristic relationship and the nonlinear characteristic relationship of all the images to form the edge correlation relationship of the images.
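Steps S31-S34 can be illustrated with a small NumPy sketch, under the assumption that each frame's fused features form one row of a matrix. The variance ranking and principal component projection below stand in for the patent's optimal orthogonal transformation and PCA steps; the function name is hypothetical.

```python
import numpy as np

def edge_correlation(features: np.ndarray, k: int = 3) -> np.ndarray:
    """Toy version of S31-S34: orthogonally transform the fused frame
    features (rows), rank the resulting components by variance, and keep
    the top-k directions as the 'edge correlation relationship'."""
    centered = features - features.mean(axis=0)  # remove per-feature mean
    cov = np.cov(centered, rowvar=False)         # feature covariance
    vals, vecs = np.linalg.eigh(cov)             # optimal orthogonal transformation
    order = np.argsort(vals)[::-1]               # rank components by variance (S32)
    basis = vecs[:, order[:k]]                   # top-k variance directions
    return centered @ basis                      # projection / principal components (S33-S34)
```

The projected columns come out ordered by decreasing variance, mirroring the feature ranking described in S32.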
Further, in step S31, linear fusion transformation is performed on the color, shape, texture and spatial relation of the minimal feature subset of the image to obtain, respectively, the color transition relationship, gradient transition relationship, texture transition relationship and spatial-position projection relationship; and nonlinear fusion operations are performed on the color, shape, texture and spatial relation of the minimal feature subset of the image to obtain the nonlinear color transition, gradient transition, texture transition and spatial-position projection relationships.
Further, in step S32, according to the result of step S31, the feature subset of the obtained image is subjected to optimal orthogonal transformation, variance feature transformation, feature sorting, high-dimensional projection, and principal component analysis, so as to generate an edge correlation relationship associating all image features.
Further, the linear methods include principal component analysis, linear discriminant analysis and multidimensional scaling.
Further, in the image ID module, the step of applying ID processing to the edge correlation relationship of the image's feature subset to generate an ID matrix comprises:
s41, reducing the dimension of the edge correlation relation of the feature subset of the image through an edge ID algorithm;
and S42, performing ID processing on the image to generate an edge image ID matrix set.
Further, the step of reducing the dimension comprises the following steps:
s411, converting the edge correlation relation data into a new coordinate system, wherein a first coordinate axis of the new coordinate system is the direction with the largest variance in the edge correlation relation data, and a second coordinate axis is the direction which is orthogonal to the first coordinate axis and has the largest variance;
s412, assuming that hidden variables exist in the observation data, if the number of the data of the hidden variables is less than that of the observation data, the dimension reduction of the data is realized through the hidden variables;
s413, assuming that the observed data is a mixed observed result of a plurality of data sources, the data sources are statistically independent from each other, and in PCA, only the data is assumed to be uncorrelated, and if the number of data sources is less than the number of observed data, the dimension is reduced by projection.
Further, the spatial image voxelization module is configured to form voxels according to the edge image ID matrix set, so as to perform three-dimensional modeling, and the method includes:
and carrying out spatial image splicing and series connection on the edge image ID matrix set to form a regular grid in a three-dimensional space and voxels capable of being used for three-dimensional modeling, and using or storing the regular grid and the voxels in the three-dimensional space by using voxel modeling software.
The platform further comprises a structured storage module for providing a local relational database, integrating all image processing results into structured storage in subdirectory form, where the file directory is the data set of image IDs and spatial-image voxelization, and for storing the data acquired by the system locally or synchronizing it with the structured storage.
The method adopts edge computation: an image source captured by the user is processed to obtain an edge image feature ID matrix for three-dimensional modeling, the matrix is voxelized and compressed, and the resulting edge computation result is uploaded over the network to a cloud server or stored locally. Transmitting the compressed voxels and edge image feature ID matrix to three-dimensional modeling software allows the three-dimensional modeling of spatial environments, such as integrated indoor and outdoor learning environments, to be completed more quickly. The voxel and edge image feature ID matrix generated by the invention effectively exploit the combination of image processing algorithms and edge computation, obtaining the three-dimensional data needed to construct the learning environment more quickly, building the three-dimensional environment rapidly, shortening modeling response time and improving bandwidth availability. In addition, a low-latency edge computing platform with image ID support that is easy to extend, operate and maintain is provided for the user, offering shorter response times for AI-related application development, real-time learning environment modeling, AR/VR and other image processing projects, and greatly improving user experience.
The purpose of edge computation is to reduce bandwidth and cloud-server pressure: the image is ID-processed and voxelized locally using edge computation, the semi-finished edge image feature ID matrix is transmitted to the cloud, and the cloud server completes the computation, reducing cloud pressure.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of an edge computing platform according to an embodiment of the invention;
FIG. 2 is a flow chart illustrating the generation of a minimal subset of edge features according to one embodiment of the present invention;
FIG. 3 is a flowchart illustrating the generation of an edge correlation relationship according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of generating an edge image ID matrix according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of generating voxels in accordance with an embodiment of the present invention;
FIG. 6 is a diagram illustrating the hardware and software functions and workflow of an edge computing system according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, a technical solution in an embodiment of the present invention will be described in detail and completely with reference to the accompanying drawings in the embodiment of the present invention, and it is obvious that the described embodiment is a part of embodiments of the present invention, but not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
The invention provides an edge computing platform for constructing a learning environment, as shown in fig. 1, comprising a feature extraction module, an association fusion module, an image ID module, a spatial image voxelization module and a structured storage module. The feature extraction module generates a minimal subset of edge features for each acquired image; the association fusion module performs correlation and non-correlation processing on the minimal subset of edge features to generate an edge correlation relationship; the image ID module performs ID processing on the edge correlation relationship to generate an ID matrix; the spatial image voxelization module generates voxel data from the ID matrix for three-dimensional modeling; and the structured storage module stores the ID matrix and voxels locally or transmits them to the cloud.
In the feature extraction module, an image may be obtained through a camera or other device; then, as shown in fig. 2, image feature extraction is performed to obtain regions of interest (i.e., the color, shape, texture and spatial-relation features of the image). Features are extracted using both non-deep-learning and deep-learning methods. Image feature extraction methods include the LBP algorithm, the HOG feature extraction algorithm, the SIFT operator, SURF, HAAR and others: texture features are extracted by the LBP algorithm; oriented-gradient features of a local target region by the HOG algorithm; scale-space features of a local target region by the SIFT algorithm; feature points by the SURF algorithm; and HAAR-like features in the image by the HAAR algorithm. The extracted features comprise the image's color, shape, texture, spatial-relation and graphic significance features. Each feature is represented as a feature matrix, and the five feature matrices are combined into the minimal subset of edge features of a 5-dimensional image, each dimension representing one feature; this minimal subset carries the edge features of a single image.
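As one concrete example of the extractors listed above, a minimal 8-neighbour local binary pattern (LBP) can be written in a few lines of NumPy. This is a bare-bones sketch of the principle (fixed 3×3 neighbourhood, no rotation invariance), not the patent's implementation; production code would typically use a library such as scikit-image.

```python
import numpy as np

def lbp_texture(image: np.ndarray) -> np.ndarray:
    """Compute an 8-bit LBP code for every interior pixel: each bit
    records whether one of the 8 neighbours is >= the centre pixel."""
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes
```

On a constant image every neighbour ties with the centre, so every bit is set and all codes equal 255 — a quick sanity check for the bit layout.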
The algorithm extracts main features and removes unobvious features at the same time.
Through the steps, the preliminary multi-dimensional feature extraction is extracted from the image, irrelevant redundant features of the image are removed, the correlation of unnecessary features is reduced, useless information is reduced, and a group of minimum subsets of edge features is generated.
A video comprises a plurality of frame images, the minimum subset of the edge features is information of each frame image, and the minimum subsets of the edge features of all the frames are combined to form the feature subset of the image.
In the feature fusion module, the feature subsets of the image undergo the corresponding linear and nonlinear correlation fusion, as shown in fig. 3, and then optimal orthogonal transformation, variance feature transformation, feature ranking, high-dimensional projection, principal component analysis and other operations are performed, from the search strategy through to the evaluation criterion, to generate an edge correlation relationship.
In the feature fusion module, linear and nonlinear methods are processed in parallel. The linear methods comprise principal component analysis (PCA), linear discriminant analysis (LDA) and multidimensional scaling (MDS); the nonlinear methods comprise KPCA and KDA, and manifold learning methods can also be used.
Principal component analysis (PCA) transforms the feature subset of the image, through a linear transformation, into a set of representations that are linearly independent across dimensions; it extracts the main feature components of the data and reduces the dimension of high-dimensional data. Linear discriminant analysis (LDA) projects the feature subset of the image down to a low dimension. Multidimensional scaling (MDS) simplifies the feature subset of the image into a low-dimensional space for positioning, analysis and classification. The nonlinear methods (KPCA, KDA) map the feature subset of the image into a high-dimensional space through a kernel function and then reduce the dimension with the PCA algorithm. Manifold learning re-represents the feature subset of the image in a low-dimensional space. Through these methods, association fusion and dimension reduction are performed, and the feature subsets of the images are fused into a more accurate correlation relationship and a low-dimensional matrix relationship, namely the edge correlation relationship.
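The kernel-based branch can be sketched under the usual KPCA formulation: map samples implicitly into a high-dimensional space via an RBF kernel, double-centre the kernel matrix, and eigendecompose it. The parameter values and function name below are illustrative only, not the patent's settings.

```python
import numpy as np

def kernel_pca(X: np.ndarray, k: int = 2, gamma: float = 0.5) -> np.ndarray:
    """Standard KPCA sketch: RBF kernel, centering in feature space,
    then PCA via eigendecomposition of the centered kernel matrix."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                 # center in the implicit feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:k]               # largest kernel-PCA components
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
```

The kernel replaces the explicit high-dimensional mapping, which is why KPCA can capture the nonlinear transition relationships that linear PCA misses.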
The accuracy of image feature extraction is improved through correlation fusion, overfitting is reduced, and the method is most helpful for improving the training speed of the cloud.
An optimal orthogonal transformation is solved for the feature subset of the image, yielding a set of new feature matrices with the largest mutual variance; their importance is ranked, and the first few principal components are selected to generate the elements of the edge correlation relationship.
And projecting the feature subset (high-dimensional) of the image to the best discriminant vector space to achieve the effects of extracting classification information and compressing the dimension of the feature space, and ensuring the maximum inter-class distance and the minimum intra-class distance of the feature subset of the image in the new subspace after projection, namely the feature subset of the image has the best separability in the space. A matrix representation of the feature subsets of the image is generated in a best discriminant vector space based on distance relationships or dissimilarity relationships between the feature subsets of the image.
And carrying out nonlinear transformation on the feature subset of the image, and carrying out principal component analysis in a transformation space to realize nonlinear principal component analysis in an original space. The nonlinear distance measurement is defined by the local distance, and can be realized under the condition that the feature subsets of the image are distributed more densely, so that the edge correlation relationship is formed.
The complex representation here refers to the relationships of as-yet-undetermined correlation within the minimal subset of edge features; the five feature dimensions can be transformed into one another.
Specifically, after generating the feature subsets of the image, the edge computing platform performs linear and nonlinear fusion on them, mainly by transforming the data of each dimension. After the system obtains the minimal subset of the image's edge features, it performs linear fusion on the color, shape, texture, spatial-relation and graphic significance features: linearly transforming color, shape, texture and spatial relation yields the color transition relationship, gradient transition relationship, texture transition relationship and spatial-position projection relationship. Nonlinear operations on the minimal subset of edge features then yield the nonlinear color transition, gradient transition, texture transition and spatial-position projection relationships. The system integrates these linear and nonlinear relations to obtain the image's optimal orthogonal transformation, variance feature transformation, feature ranking, high-dimensional projection, principal component analysis and so on, which together form the image's edge correlation relationship.
In the image ID module, edge ID algorithm processing is applied to the image's edge correlation relationship. As shown in fig. 4, the edge ID algorithm reduces the spatial dimension of the image features (dimension reduction), and image ID processing is applied to the image; finally, an edge image ID matrix set is generated and transmitted to local storage or cloud storage through an interface.
The function of dimensionality reduction is as follows: a new set of feature transformations is obtained through mathematical operations, effectively reducing the dimension of the image feature space, eliminating correlation among features and reducing useless information in the features. The input image is given an ID, i.e., each frame of image has unique features; the features of all images are associatively fused into an ID matrix that contains both a visual static image description and dynamic prediction. For example, when an indoor environment is filmed and a cat jumps onto a table, the ID of that frame contains the cat's position feature, color feature, action feature, spatial-relation feature, relative-surroundings feature, fall-prediction feature and so on. This improves data-visualization capability and makes reuse by the terminal and the cloud convenient.
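One way to picture the "unique feature per frame" idea is to derive a reproducible short ID from each frame's fused feature row, so the terminal and the cloud can reference the same frame without re-sending raw pixels. The SHA-1-based scheme below is purely an assumption for illustration, not the patent's ID algorithm.

```python
import hashlib
import numpy as np

def frame_ids(fused_features: np.ndarray) -> list:
    """Derive a short, reproducible hex ID per frame from its fused
    (dimension-reduced) feature row; identical rows map to the same
    ID, distinct rows to distinct IDs (illustrative scheme only)."""
    ids = []
    rows = np.ascontiguousarray(np.asarray(fused_features, dtype=np.float64))
    for row in rows:
        ids.append(hashlib.sha1(row.tobytes()).hexdigest()[:12])
    return ids
```

Fixing the dtype before hashing makes the ID independent of the caller's input type, which matters when terminal and cloud must agree on the same IDs.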
Dimensionality reduction of spatial environment features is based on two criteria: nearest reconstructability and maximum separability. The dimensionality reduction methods are as follows:
1. The edge correlation relationship data are converted from the original coordinate system to a new coordinate system: the first new coordinate axis is chosen along the direction of largest variance in the original data, and the second new coordinate axis along the direction orthogonal to the first axis with the largest remaining variance.
2. Assume that some unobserved hidden variables underlie the generation of the observed data, and that the observed data are a linear combination of these hidden variables plus some noise. The hidden variables may be fewer than the observed data; that is, by finding the hidden variables, dimensionality reduction can be achieved.
3. Assume that the data are a mixed observation of multiple data sources that are statistically independent of one another, whereas PCA assumes only that the data are uncorrelated. As with factor analysis, dimensionality reduction can be achieved if the number of data sources is smaller than the number of observations.
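These three approaches correspond, respectively, to principal component analysis, factor analysis, and independent component analysis. The first criterion (orthogonal axes chosen by descending variance) can be sketched in a few lines of numpy; this is an illustrative implementation, not the patent's actual edge ID algorithm:

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto the k orthogonal directions of maximum variance
    (criterion 1 above: new axes chosen by descending variance)."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort descending by variance
    components = eigvecs[:, order[:k]]      # top-k orthogonal axes
    return Xc @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 0] *= 10.0          # make axis 0 carry most of the variance
Z = pca_reduce(X, 2)
```

The first projected column tracks the dominant axis of the data, and the projected columns are mutually uncorrelated, which is exactly the decorrelation property the text attributes to the dimension-reduction step.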
In the spatial image voxelization module, the ID matrix of the edge image is voxelized, as shown in fig. 5. Specifically: the obtained ID matrix of the edge image is spliced and concatenated by spatial position according to image characteristics, generating elevation values of a regular grid in three-dimensional space with spatial image surface and position characteristics (an elevation value is a digital expression of terrain surface form attribute information, a digital description carrying both spatial position and terrain attribute characteristics). Voxel data usable for three-dimensional modeling are output for use by voxel modeling software. The output voxels have no absolute position coordinates in space, only relative positions, which constitute positions within the data structure of a single volume image. The minimum voxel unit of the output can follow user settings, such as cube, polyhedron, sphere and other models. Users can perform rapid three-dimensional construction with the generated voxel data, so that the display is closer to the real object and model interpretability is effectively improved.
For example, the space containing the spatial environment model formed by the voxels is divided into a grid, and a triangular-patch distance method is used to determine directly whether each grid cell is covered by the model. The specific method is: traverse all triangles, compute the distance between each triangle and each voxelized grid cell, and use a threshold to judge whether the cell is covered. In addition, for image stitching in complex environment design, the platform adopts the Atlas microprocessor, whose GPU can perform a rendering-based voxelization method to acquire space and object voxels, rasterizing triangular patches in the rendering pipeline. Finally, voxelized image information of the spatial environment is formed, whose surface information contains the learning environment model and whose internal attributes can describe the model; voxels of the spatial environment usable for three-dimensional modeling are output for use or storage by voxel modeling software.
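The triangle-distance coverage test above can be sketched as follows. For simplicity this uses the distance from the voxel center to each triangle's centroid rather than the exact point-to-triangle distance, so it is an approximation of the method described, not a faithful implementation:

```python
def voxel_covered(voxel_center, triangles, threshold):
    """Mark a voxel as covered if any triangle lies within `threshold` of the
    voxel center. Simplification: uses the triangle centroid as a stand-in
    for the exact triangle-to-cell distance."""
    cx, cy, cz = voxel_center
    for tri in triangles:
        gx = sum(p[0] for p in tri) / 3.0   # triangle centroid
        gy = sum(p[1] for p in tri) / 3.0
        gz = sum(p[2] for p in tri) / 3.0
        d2 = (gx - cx) ** 2 + (gy - cy) ** 2 + (gz - cz) ** 2
        if d2 <= threshold ** 2:            # threshold test from the text
            return True
    return False

# One triangle of a hypothetical model, as three (x, y, z) vertices.
tri = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
```

In practice this test would run once per grid cell, and an exact point-to-triangle distance (or the GPU rasterization path mentioned in the text) would replace the centroid shortcut.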
The edge computing platform further comprises a structured storage module for providing a local relational database. All image processing results are integrated into structured storage in the form of subdirectories, the file directories being the data sets of image IDs and voxelized spatial images. Data acquired by the edge computing platform can be stored locally, or synchronously connected to structured storage systems such as Cassandra, Bigtable, HadoopDB, Megastore and Dynamo.
The voxel data can likewise be integrated into structured storage in subdirectory form, including cloud upload and local backup to a Micro SD card or local hard disk. In addition, the output format of a specific protocol can be customized in the system according to edge computing requirements: with a storage device connected to the USB port, the user presses the report output key and the system outputs the edge computing result, allowing the user to check the image processing progress, results and the like.
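A minimal sketch of the subdirectory-based structured storage, assuming one subdirectory per image holding `image_id` and `voxels` data sets (the directory and file names are illustrative; the patent only specifies "structured storage in the form of subdirectories"):

```python
import json
import tempfile
from pathlib import Path

def store_results(root, image_name, id_matrix, voxels):
    """Write image-ID and voxel results into per-image subdirectories.
    Directory layout is an assumption for illustration."""
    base = Path(root) / image_name
    (base / "image_id").mkdir(parents=True, exist_ok=True)
    (base / "voxels").mkdir(parents=True, exist_ok=True)
    (base / "image_id" / "id_matrix.json").write_text(json.dumps(id_matrix))
    (base / "voxels" / "voxels.json").write_text(json.dumps(voxels))
    return base

root = tempfile.mkdtemp()   # stand-in for the local hard disk or Micro SD
out = store_results(root, "frame_0001", [[1, 2], [3, 4]], [[0, 0, 0, 1]])
```

The same directory tree could then be mirrored to a cloud store (Cassandra, HadoopDB, etc.) by a synchronization process.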
The invention can be used for the construction of the space environment of an AR/VR scene, such as the construction of a learning environment (classroom) and the like.
Referring to FIG. 6, the processing flow of the artificial-intelligence edge computing platform of the present invention comprises the following steps:
Step one: after the Atlas microprocessor is powered on, the system starts and enters an initialization or setting mode, then enters a self-check mode, performing software and hardware self-checks on peripherals such as the camera, USB interface, local storage hard disk, devices connected to the 40-pin IO interface, and battery power; the system enters the working mode once no error is detected;
Step two: after entering the working mode, the system detects whether the image input end has an image input instruction; if not, the system enters standby mode and waits to be woken. At this point the system shuts down the cloud and local storage software functions, all parallel algorithm modules of the system go dormant, and the system keeps only the wake-up detection function;
Step three: after the system is woken, the core processor first segments the incoming image and acquires the region of interest for feature extraction. After the features are successfully extracted, they are stored locally;
Step four: the system performs association fusion on the output of the feature extraction algorithm, mainly using linear and nonlinear methods.
Step five: after feature extraction and association fusion, ID processing is performed on each frame of image, forming a specific image ID matrix that is transmitted through an interface to the user application or the cloud;
Step six: after the image is ID-processed, the Atlas microprocessor voxelizes the basic spatial image and outputs voxels usable for three-dimensional modeling, for use by voxel modeling software.
Step seven: the Atlas microprocessor integrates the data processed by the system algorithms, performs structured processing, and uploads the data to the cloud.
Step eight: after image processing is complete, the system generates an image processing report, mainly comprising the image ID and voxelization results, presented in directory form.
Step nine: when the system runs low on power or loses power unexpectedly, the algorithm system stops the current operation and records a work log, which is automatically stored locally; on the next start, it recovers to the last interruption node according to the work log and continues executing the unfinished part.
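The nine steps above, including the step-nine checkpoint-and-resume behaviour, can be sketched as a minimal resumable pipeline. Stage names are paraphrased from the text and the checkpoint mechanism is an assumption:

```python
# Minimal sketch of the workflow as a resumable stage pipeline: each completed
# stage is appended to a work log, and a restart can resume from the last
# interruption node. Stage names paraphrase the nine steps; the log format
# is illustrative, not the patent's actual implementation.

STAGES = ["self_check", "wait_input", "extract", "fuse",
          "image_id", "voxelize", "structure", "report"]

def run_pipeline(log, start_stage=None):
    """Run stages in order, recording each in `log`.
    If `start_stage` is given, resume from it, skipping earlier stages."""
    start = STAGES.index(start_stage) if start_stage else 0
    for stage in STAGES[start:]:
        log.append(stage)   # stand-in for the real work plus work-log entry
    return log

log = run_pipeline([])                                # normal full run
resumed = run_pipeline([], start_stage="image_id")    # restart after power loss
```

A real implementation would persist the log (step nine says it is stored locally) and read the last entry at boot to pick the resume stage.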
The invention effectively exploits the combination of image processing algorithms and edge computing to obtain the three-dimensional data required for learning environment construction more quickly, build the three-dimensional environment rapidly, shorten modeling response time and improve bandwidth availability. In addition, it provides users with a low-latency, image-ID-capable, easily extensible and easily maintained edge computing platform, offering shorter response times for artificial intelligence application development, real-time learning environment modeling, AR/VR and other image processing projects, greatly improving user experience. Compared with conventional edge platforms, the invention has the following advantages and highlights:
(1) Feature extraction: multiple algorithms are fused and computed locally, and the resulting edge features are uploaded to the cloud for further extraction, which effectively improves bandwidth efficiency and feature extraction accuracy;
(2) Association fusion: linear and nonlinear association processing is performed locally on the edge features to form an edge correlation relationship, which effectively reduces the risk of overfitting and speeds up cloud training;
(3) Image ID conversion: after feature extraction and data association, the edge computing platform performs ID modeling on the image; whether transmitted to the cloud or used locally, this greatly improves data visualization capability and forms a unique edge image ID;
(4) Spatial image voxelization: after ID-coding, the three-dimensional spatial image forms a four-dimensional descriptive voxel convenient for modeling, improving model interpretability and adapting to various modeling engines;
(5) Structured storage: edge storage and systematized cloud storage facilitate use by both the terminal and cloud computing, changing the previous drawback of full reliance on either side, improving real-time modeling efficiency and enhancing the experience.
The above examples are only for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. An edge computing platform that generates voxel data and an edge image feature ID matrix, comprising: the system comprises a feature extraction module, an association fusion module, an image ID module, a spatial image voxelization module and a structural storage module; wherein,
the characteristic extraction module is used for extracting the characteristics of the acquired images, generating the minimum subset of the edge characteristics, and combining the minimum subsets of the edge characteristics of all the images to form the characteristic subset of the images;
the association fusion module is used for performing linear fusion and nonlinear fusion processing on the feature subset of the image to generate an edge correlation relationship;
the image ID module is used for carrying out ID processing on the edge correlation relationship to generate an edge image ID matrix set;
the spatial image voxelization module is used for generating a voxel according to the edge image ID matrix set so as to perform three-dimensional modeling;
and the structured storage module is used for locally storing the ID matrix and the voxel or transmitting the ID matrix or the voxel to the cloud.
2. The edge computing platform of claim 1, wherein the step of generating a minimal subset of edge features in the feature extraction module comprises:
s21, extracting image features of the image, wherein the image features comprise color features, shape features, texture features, spatial relation features and graphic significance features;
and S22, expressing the color feature, the shape feature, the texture feature, the spatial relation feature and the graphic significance feature in a feature matrix form, and generating the minimum subset of the edge feature of the image according to the image feature.
3. The edge computing platform of claim 1, wherein the step of performing linear fusion and nonlinear fusion on the feature subsets of the image in the associative fusion module to generate the edge correlation relationship comprises:
s31, processing the minimal subset of the edge features of the image in parallel by using a linear method, a nonlinear method and/or a manifold learning method;
s32, performing optimal orthogonal transformation on the processing result of the step S31, performing variance operation on the feature correlation relation, and sorting the features after eliminating the features with the maximum variance;
s33, performing high-dimensional projection on the matrix obtained after feature sorting and obtaining the vector space for optimal image feature judgment to extract classification information and compress feature space dimensions, ensuring the maximum inter-class distance and the minimum intra-class distance of the feature subsets of the image in the transformed subspace after projection, and generating the representation of the feature subsets of the image in the vector space for optimal image feature judgment according to the distance relation or dissimilarity relation between the feature subsets of the image;
and S34, performing principal component analysis on the high-dimensional projection matrix formed by all the images, removing redundancy and errors, and associating the linear characteristic relationship and the nonlinear characteristic relationship of all the images to form the edge correlation relationship of the images.
4. The edge computing platform of claim 3, wherein in step S31, a linear fusion transformation is performed on the color, shape, texture, and spatial relationship of the minimum feature subset of the image to obtain a color transition relationship, a gradient transition relationship, a texture transition relationship, and a spatial position projection relationship, respectively; and a nonlinear fusion operation is performed on the color, shape, texture and spatial relationship of the minimum feature subset of the image to obtain a nonlinear color transition relationship, gradient transition relationship, texture transition relationship and spatial position projection relationship.
5. The edge computing platform according to claim 3, wherein in step S32, according to the result of step S31, the obtained image feature subset is processed by optimal orthogonal transformation, variance feature transformation, feature sorting, high-dimensional projection and principal component analysis to generate an edge correlation relationship associating all image features.
6. The edge computing platform of claim 3, wherein the linear methods include principal component analysis, linear discriminant analysis, and multidimensional scaling.
7. The edge computing platform of claim 1, wherein in the image ID module, the step of generating the ID matrix comprises performing ID processing on the edge correlation relationships of the feature subsets of the image:
s41, reducing the dimension of the edge correlation relation of the feature subset of the image through an edge ID algorithm;
and S42, performing ID processing on the image to generate an edge image ID matrix set.
8. The edge computing platform of claim 7, wherein the step of reducing dimensions comprises:
s411, converting the edge correlation relation data into a new coordinate system, wherein a first coordinate axis of the new coordinate system is the direction with the largest variance in the edge correlation relation data, and a second coordinate axis is the direction which is orthogonal to the first coordinate axis and has the largest variance;
s412, assuming that hidden variables exist in the observation data, if the number of the data of the hidden variables is less than that of the observation data, the dimension reduction of the data is realized through the hidden variables;
s413, assuming that the observed data are a mixed observation of multiple data sources that are statistically independent of one another, whereas PCA assumes only that the data are uncorrelated; if the number of data sources is smaller than the number of observations, the dimensionality is reduced by projection.
9. The edge computing platform of claim 1, wherein the spatial image voxelization module to form voxels from the set of edge image ID matrices for three-dimensional modeling comprises:
and carrying out spatial image splicing and series connection on the edge image ID matrix set to form a regular grid in a three-dimensional space and voxels capable of being used for three-dimensional modeling, and using or storing the regular grid and the voxels in the three-dimensional space by using voxel modeling software.
10. The edge computing platform of claim 1, further comprising a structured storage module configured to provide a local relational database, integrate all image processing results into a structured storage in the form of subdirectories, file directories are image-ID and spatial image-voxelized data sets, store data acquired by the system locally, or synchronously connect with the structured storage system.
CN202010511907.XA 2020-06-08 2020-06-08 Edge computing platform for generating voxel data and edge image feature ID matrix Active CN111681309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010511907.XA CN111681309B (en) 2020-06-08 2020-06-08 Edge computing platform for generating voxel data and edge image feature ID matrix


Publications (2)

Publication Number Publication Date
CN111681309A true CN111681309A (en) 2020-09-18
CN111681309B CN111681309B (en) 2023-07-25

Family

ID=72435697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010511907.XA Active CN111681309B (en) 2020-06-08 2020-06-08 Edge computing platform for generating voxel data and edge image feature ID matrix

Country Status (1)

Country Link
CN (1) CN111681309B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113117344A (en) * 2021-04-01 2021-07-16 广州虎牙科技有限公司 Voxel building generation method and device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101388020A (en) * 2008-07-07 2009-03-18 华南师范大学 Composite image search method based on content
JP2009140513A (en) * 2002-07-16 2009-06-25 Nec Corp Pattern characteristic extraction method and device for the same
WO2018040099A1 (en) * 2016-08-31 2018-03-08 深圳市唯特视科技有限公司 Three-dimensional face reconstruction method based on grayscale and depth information
CN109949349A (en) * 2019-01-24 2019-06-28 北京大学第三医院(北京大学第三临床医学院) A kind of registration and fusion display methods of multi-modal 3-D image
US20200043186A1 (en) * 2017-01-27 2020-02-06 Ucl Business Plc Apparatus, method, and system for alignment of 3d datasets


Non-Patent Citations (1)

Title
宋瑞霞 等: "NSCT与边缘检测相结合的多聚焦图像融合算法", 计算机辅助设计与图形学学报 *


Also Published As

Publication number Publication date
CN111681309B (en) 2023-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant