CN111681309B - Edge computing platform for generating voxel data and edge image feature ID matrix - Google Patents


Info

Publication number
CN111681309B
CN111681309B (application CN202010511907.XA)
Authority
CN
China
Prior art keywords
image
edge
feature
features
module
Prior art date
Legal status (assumption, not a legal conclusion): Active
Application number
CN202010511907.XA
Other languages
Chinese (zh)
Other versions
CN111681309A (en)
Inventor
朱立新
白忠可
宿金超
孙驰
Current Assignee (listed assignee may be inaccurate): Beijing Normal University
Original Assignee
Beijing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Normal University filed Critical Beijing Normal University
Priority to CN202010511907.XA priority Critical patent/CN111681309B/en
Publication of CN111681309A publication Critical patent/CN111681309A/en
Application granted granted Critical
Publication of CN111681309B publication Critical patent/CN111681309B/en


Classifications

    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F16/284 Relational databases
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/253 Fusion techniques of extracted features
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention is an edge computing platform for generating voxel data and an edge image feature ID matrix, comprising a feature extraction module, an association fusion module, an image ID module, a space image voxelization module and a structured storage module. The feature extraction module generates an edge feature minimum subset from the features of each acquired image and combines these minimum subsets to form a feature subset of the image; the association fusion module performs correlation and non-correlation processing on the feature subset to form an edge correlation relationship; the image ID module performs ID processing on the edge correlation relationship to form an edge image ID matrix set; the space image voxelization module forms voxels from the edge image ID matrix set for three-dimensional modeling; and the structured storage module stores the ID matrix and the voxels locally or transmits them to the cloud. The voxel data and ID matrix generated by the method can be used to construct a three-dimensional environment quickly, shortening modeling response time, improving bandwidth availability and greatly improving user experience.

Description

Edge computing platform for generating voxel data and edge image feature ID matrix
Technical Field
The invention relates to the technical field of image processing, in particular to an edge computing platform for generating voxel data and an edge image feature ID matrix.
Background
With the accelerating innovation of cloud computing and artificial intelligence (AI) applications, more and more data, such as terminal images, must be transmitted to cloud servers for processing. Although 5G technology is gradually maturing, the volume and unprecedented complexity of massive data exceed the capability of traditional networks and infrastructure, and sending the data generated by diverse terminal devices to a centralized data center or cloud for processing often causes bandwidth and delay problems. The emergence of edge computing can alleviate both problems to a certain extent. Edge computing algorithms for image processing and modeling can provide a more efficient alternative: processing and analyzing data at locations closer to the data source. Because the data is not transmitted over the network to the cloud or data center for processing, delay is significantly reduced. Future mobile edge computing over 5G networks is expected to support faster, more comprehensive data analysis, creating opportunities to gain deeper insight, shorten response times, and improve customer experience.
At present, VR technology is generally adopted on the market for modeling a three-dimensional environment with image processing algorithms: a large amount of image acquisition and post-processing synthesis is carried out in the cloud, involving massive image data processing. Although such fine manipulation is more accurate and precise, it sacrifices a great deal of time; even a small project often requires substantial manpower and time to take shape, using data scanning techniques to extract information from photographs and thereby generate highly accurate three-dimensional environment voxels. Existing three-dimensional environment construction, including learning-environment construction, relies on triangle meshes, height fields, BSP trees and other geometric mapping approaches; however, the cost is too high, these are only surface representations, they are difficult to modify, the applicable occasions are limited, and much of the work in the process is wasted. In addition, many edge computing platforms used in image recognition merely preprocess the image data and do not control feature relevance well; that is, how to ID image features remotely through edge computing remains an open problem.
Disclosure of Invention
Aiming at the defects of the prior art, the invention uses an Atlas microprocessor to control a camera and, combined with the system's edge computing algorithm, extracts image association information, thereby solving the problem of voxel data acquisition in the three-dimensional modeling of a learning environment: the data required for learning-environment construction is acquired rapidly, the three-dimensional modeling speed of the learning environment is improved, image feature associations are identified, cloud processing response time is shortened by local edge computing, and bandwidth availability is improved.
The invention is realized by the following technical scheme.
An edge computing platform that generates voxel data and an edge image feature ID matrix, comprising: a feature extraction module, an association fusion module, an image ID module, a space image voxelization module and a structured storage module; wherein,
the feature extraction module is used for extracting features of the acquired images, generating edge feature minimum subsets, and combining the edge feature minimum subsets of all the images to form feature subsets of the images;
the association fusion module is used for carrying out linear and nonlinear correlation fusion processing on the feature subsets of the image to generate an edge correlation relationship;
the image ID module is used for carrying out ID processing on the edge correlation relationship to generate an edge image ID matrix set;
the space image voxelization module is used for generating voxels according to the edge image ID matrix set so as to perform three-dimensional modeling;
and the structured storage module is used for locally storing the ID matrix and the voxels or transmitting the ID matrix or the voxels to the cloud.
Further, the step of generating the edge feature minimum subset in the feature extraction module includes:
s21, extracting image features of the image, wherein the image features comprise color features, shape features, texture features, spatial relationship features and graphic meaning features;
s22, representing the color features, the shape features, the texture features, the spatial relationship features and the graphic meaning features by using a feature matrix form, and generating a minimum subset of the edge features of the image according to the image features.
Further, in the association fusion module, the step of performing linear fusion and nonlinear fusion on the feature subsets of the image to generate an edge correlation relationship includes:
s31, processing the edge feature minimum subset of the image in parallel by using a linear method, a nonlinear method and/or a flow pattern learning method;
s32, performing optimal orthogonal transformation on the processing result in the step S31, performing variance operation on the characteristic correlation relationship, removing the characteristic with the largest variance, and then performing characteristic sorting;
s33, performing high-dimensional projection on the matrix obtained after feature sequencing and obtaining a vector space for judging the features of the optimal image so as to extract classification information and compressed feature space dimensions, ensuring that the distance between the feature subsets of the image in the transformed subspace is maximum and the distance between the feature subsets is minimum after projection, and generating a representation of the feature subsets of the image in the vector space for judging the features of the optimal image according to the distance relation or the dissimilarity relation between the feature subsets of the image;
s34, performing principal component analysis on the high-dimensional projection matrix formed by all the images, removing redundancy and errors, and correlating the linear characteristic relation and the nonlinear characteristic relation of all the images so as to form the edge correlation relation of the images.
Further, in step S31, linear fusion transformation is performed on the color, shape, texture and spatial relationship of the minimum feature subset of the image to obtain a color transition relationship, a gradient transition relationship, a texture transition relationship and a spatial position projection relationship, respectively; and nonlinear fusion operations are performed on the color, shape, texture and spatial relationship of the minimum feature subset of the image to obtain the corresponding nonlinear color transition, gradient transition, texture transition and spatial position projection relationships.
Further, in step S32, according to the result of step S31, the obtained feature subsets of the image are subjected to optimal orthogonal transformation, variance feature transformation, feature ranking, high-dimensional projection and principal component analysis, so as to generate an edge correlation relationship associating all the image features.
Further, the linear methods include principal component analysis, linear discriminant analysis, and multidimensional scaling.
Further, in the image ID module, the steps for performing ID processing on the edge correlation relationship of the feature subset of the image to generate an ID matrix include:
s41, reducing the dimension of the edge correlation relationship of the feature subset of the image through an edge ID algorithm;
s42, performing ID processing on the image to generate an edge image ID matrix set.
Further, the step of dimension reduction includes:
s411, converting the edge correlation relationship data into a new coordinate system, wherein a first coordinate axis of the new coordinate system is a direction with maximum variance in the edge correlation relationship data, and a second coordinate axis is a direction orthogonal to the first coordinate axis and with maximum variance;
s412, assuming that hidden variables exist in the observed data, if the number of the hidden variables is smaller than that of the observed data, realizing data dimension reduction through the hidden variables;
s413, assuming that the observed data is a mixed observation of a plurality of data sources, the data sources are statistically independent from each other, and in PCA, only the data is assumed to be uncorrelated, if the number of data sources is less than the number of observed data, the dimension is reduced by projection.
Further, the step in which the space image voxelization module forms voxels according to the edge image ID matrix set for three-dimensional modeling includes:
Space image stitching and concatenation are performed on the edge image ID matrix set to form a regular grid and voxels in three-dimensional space that can be used for three-dimensional modeling, and the voxels are used by voxel modeling software or stored.
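The stitching step above can be sketched as follows (an illustration of ours, not the patented method): a set of per-image ID matrices is stacked along a depth axis into a regular grid, and cells above a threshold are marked as occupied voxels with only relative positions.

```python
import numpy as np

def voxelize(id_matrices, threshold=0.5):
    """Hypothetical sketch: stitch a set of edge-image ID matrices along the
    depth axis into a regular 3-D grid, then mark voxels whose value exceeds
    `threshold` as occupied. Positions are relative, as in the platform."""
    grid = np.stack([np.asarray(m, dtype=float) for m in id_matrices], axis=0)
    return grid > threshold  # boolean occupancy voxels, shape (D, H, W)

layers = [np.full((3, 3), v) for v in (0.2, 0.6, 0.9)]
vox = voxelize(layers)
print(vox.shape, int(vox.sum()))
```

A boolean occupancy grid like this is the simplest voxel structure a downstream modeling tool could consume; richer per-voxel attributes would replace the boolean.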
Further, the system also comprises a structured storage module providing a local relational database, which integrates the image processing results into a structured store in the form of subdirectories, where the file directory is a data set of image IDs and space image voxels; the data acquired by the system is stored locally or synchronized to the structured storage system.
According to the invention, an edge computing approach is adopted: the image source shot by a user is processed to obtain an edge image feature ID matrix for three-dimensional modeling, voxel compression is then performed on the edge image feature ID matrix, and the resulting edge computation result is uploaded to a cloud server over the network or stored locally. By transmitting the compressed voxels and the edge image feature ID matrix to three-dimensional modeling software, three-dimensional modeling of spatial environments, such as indoor and outdoor integrated learning environments, can be completed more quickly. The voxel and edge image feature ID matrix generated by the invention effectively exploit the combination of image processing algorithms with edge computing, obtaining the three-dimensional data required for learning-environment construction more quickly, constructing the three-dimensional environment rapidly, shortening modeling response time and improving bandwidth availability. In addition, an edge computing platform with low delay, image ID capability, easy expansion and easy operation and maintenance is provided for users; related applications can be developed for artificial intelligence, the learning environment can be modeled in real time, and image processing applications such as AR/VR gain shorter response times, greatly improving user experience.
The purpose of edge computing here is to reduce bandwidth usage and the pressure on the cloud server: image ID processing and local image voxelization are performed at the edge, the semi-finished edge image feature ID matrix is transmitted to the cloud, and the cloud server performs the remaining computation, thereby reducing cloud load.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an edge computing platform according to an embodiment of the present invention;
FIG. 2 is a flow diagram of generating a minimal subset of edge features according to an embodiment of the invention;
FIG. 3 is a flow chart of generating edge correlation relationships according to an embodiment of the present invention;
FIG. 4 is a flow chart of generating an edge image ID matrix according to an embodiment of the present invention;
FIG. 5 is a flow chart of generating voxels according to an embodiment of the invention;
FIG. 6 is a schematic diagram of the software and hardware functions and workflow of an edge computing system according to one embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments will be clearly and fully described below with reference to the accompanying drawings; it is apparent that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to be within the scope of the invention.
The invention provides an edge computing platform for constructing a learning environment, as shown in fig. 1, comprising: the device comprises a feature extraction module, an association fusion module, an image ID module, a space image voxelization module and a structured storage module; the feature extraction module is used for generating an edge feature minimum subset for the acquired image; the correlation fusion module is used for carrying out correlation and non-correlation processing on the minimum subset of the edge features to generate an edge correlation relation; the image ID module is used for carrying out ID processing on the edge correlation relationship to generate an ID matrix; the space image voxelization module is used for generating voxel data according to the ID matrix so as to perform three-dimensional modeling; and the structured storage module is used for locally storing the ID matrix and the voxels or transmitting the ID matrix or the voxels to the cloud.
In the feature extraction module, an image may be acquired by a camera or other device; then, as shown in fig. 2, image feature extraction is performed to obtain the regions of interest (i.e., the color, shape, texture and spatial relationship features of the image). Features are extracted by both non-deep-learning and deep-learning methods. Methods for extracting image features include the LBP algorithm, the HOG feature extraction algorithm, the SIFT operator, SURF, HAAR and so on: for example, texture features are extracted by the LBP algorithm; directional gradient features of a local target area are extracted by the HOG algorithm; scale-space features of a local target area are extracted by the SIFT algorithm; feature points are extracted by the SURF algorithm; and HAAR-like features are extracted by the HAAR algorithm. The extracted features comprise the color features, shape features, texture features, spatial relationship features and graphic meanings of the image. Each feature is represented as a feature matrix, and the 5 feature matrices are integrated into a 5-dimensional edge feature minimum subset, in which each dimension represents one feature; the minimum subset carries the edge features of a single image.
The above algorithm removes insignificant features while extracting the principal features.
Through the step, preliminary multidimensional feature extraction is performed on the image, irrelevant redundant features of the image are removed, correlation of unnecessary features is reduced, useless information is reduced, and a group of minimum subsets of edge features are generated.
A video contains a plurality of frame images; the minimum subset of edge features is the information of each frame image, and the minimum subsets of edge features of all frames are combined to form the feature subset of the image.
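One of the texture extractors named above, LBP, is simple enough to sketch in a few lines of numpy. This is a generic 3x3 local-binary-pattern illustration of ours, not the patent's implementation; each interior pixel receives an 8-bit code from comparing its eight neighbours with the centre value.

```python
import numpy as np

def lbp_codes(img):
    """Minimal 3x3 local-binary-pattern sketch: one 8-bit texture code
    per interior pixel of a grayscale image."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                     # centre pixels
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) * np.uint8(1 << bit)
    return code

flat = np.full((4, 4), 7.0)
print(lbp_codes(flat))  # every neighbour equals the centre, so all bits set
```

A real pipeline would histogram these codes over image patches to obtain the texture feature matrix mentioned above.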
In the feature fusion module, corresponding linear and nonlinear association fusion is carried out on the feature subsets of the image, as shown in fig. 3; then, proceeding from the search strategy to the evaluation criterion, operations such as optimal orthogonal transformation, feature comparison, feature ranking, high-dimensional projection and principal component analysis are performed to generate the edge correlation relationship.
In the feature fusion module, linear and nonlinear methods are processed in parallel. The linear methods comprise principal component analysis (PCA), linear discriminant analysis (LDA) and multidimensional scaling (MDS); the nonlinear methods comprise KPCA and KDA, and manifold learning methods can also be used.
Principal component analysis (PCA) transforms the feature subsets of an image, through a linear transformation, into a set of representations that are linearly independent across dimensions, extracting the principal feature components of the data for dimension reduction of high-dimensional data. Linear discriminant analysis (LDA) performs projection-based dimension reduction of the feature subsets into a low-dimensional space. Multidimensional scaling (MDS) simplifies the feature subsets to a low-dimensional space for positioning, analysis and classification. The nonlinear methods (KPCA, KDA) map the feature subsets to a high-dimensional space through a kernel function and then reduce the dimension with the PCA algorithm. Manifold learning re-represents the feature subsets in a low-dimensional space. These methods perform association fusion and dimension reduction, fusing the feature subsets of the image into a more accurate correlation and a low-dimensional matrix relationship, namely the edge correlation relationship.
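The nonlinear branch can be sketched with a textbook kernel PCA (our illustration; the RBF kernel, `gamma` and `k` values are assumptions, not the patent's choices): the data is implicitly mapped to a high-dimensional space through a kernel, and the centred kernel matrix is eigendecomposed to reduce the dimension.

```python
import numpy as np

def kernel_pca(x, k=2, gamma=0.5):
    """Sketch of the nonlinear branch (KPCA): map the feature subset to a
    high-dimensional space through an RBF kernel, then reduce dimension by
    eigen-decomposing the centred kernel matrix."""
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # RBF kernel matrix
    n = len(x)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:k]               # leading components
    return Kc @ vecs[:, idx]                       # projected coordinates

rng = np.random.default_rng(2)
out = kernel_pca(rng.normal(size=(30, 4)))
print(out.shape)
```

(For simplicity the projection is left unnormalized; a production version would scale each component by the square root of its eigenvalue.)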
Association fusion improves the accuracy of image feature extraction, reduces overfitting, and helps improve training speed in the cloud.
By solving the optimal orthogonal transformation of the feature subsets of the image, a group of new feature matrices with maximal mutual variance is obtained; the new feature matrices are ranked by importance, and the first few principal components are selected to generate the elements of the edge correlation relationship.
The (high-dimensional) feature subset of the image is projected into the optimal discriminant vector space to extract classification information and compress the feature space dimension; after projection, the feature subsets have the largest between-class distance and the smallest within-class distance in the new subspace, i.e., the best separability in that space. A matrix representation of the feature subsets is then generated in the optimal discriminant vector space based on the distance or dissimilarity relationships between them.
The feature subsets of the image are subjected to a nonlinear transformation, and nonlinear principal component analysis in the original space is realized by performing principal component analysis in the transformed space. The nonlinear distance measure is defined by local distances and can be realized when the feature subsets are densely distributed, forming the edge correlation relationship.
Complexity here refers to the fact that the minimum subset of edge features contains 5 feature types whose mutual transformations have not yet had their correlations confirmed.
Specifically, the edge computing platform generates a feature subset of the image and then performs linear and nonlinear fusion on it, mainly by transforming the data of the various dimensions. After the system acquires the minimum feature subset of the image, linear fusion is performed on the color features, shape features, texture features, spatial relationships and graphic meanings; linear transformation of color, shape, texture and spatial relationship yields a color transition relationship, a gradient transition relationship, a texture transition relationship and a spatial position projection relationship. Nonlinear operations on the minimum subset of edge features then yield the corresponding nonlinear color transition, gradient transition, texture transition and spatial position projection relationships. The system integrates the linear and nonlinear relationships through optimal orthogonal transformation, variance feature transformation, feature ranking, high-dimensional projection, principal component analysis and the like, which jointly form the edge correlation relationship of the image.
In the image ID processing module, edge ID algorithm processing is performed on the edge correlation relationship of the image, as shown in fig. 4: the spatial dimension of the image features is reduced (dimension reduction) through the edge ID algorithm, ID processing is performed on the image, an edge image ID matrix set is finally generated, and the set is transmitted through an interface to local or cloud storage.
The effect of dimension reduction is as follows: a group of new feature transformations is obtained through mathematical operations, which effectively reduces the feature space dimension of the image, eliminates correlations among features and reduces useless information. The input image is then given an ID, i.e., each frame has unique features, and the features of all images are associated and fused to form an ID matrix. The ID matrix comprises a visual static image description and a dynamic prediction. For example, when an indoor environment containing a cat jumping onto a table is shot, the ID of that frame includes the cat's position features, color features, action features, spatial relationship features, features relative to the surrounding environment, and a prediction feature that it is about to land. This improves data visualization capability and facilitates reuse by the terminal and the cloud.
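The requirement that each frame receive a unique, stable identifier can be illustrated with a content hash (an assumption of ours purely for illustration; the patent's edge ID algorithm is a dimension-reducing transform, not a hash):

```python
import hashlib
import numpy as np

def frame_id(feature_matrix):
    """Illustrative only (not the patented edge ID algorithm): derive a
    stable ID for one frame by hashing its fused feature matrix, so every
    frame in the ID matrix set can be addressed and reused by the cloud."""
    digest = hashlib.sha256(np.ascontiguousarray(feature_matrix).tobytes())
    return digest.hexdigest()[:16]

f = np.arange(12.0).reshape(3, 4)
print(frame_id(f) == frame_id(f.copy()))   # identical features -> same ID
print(frame_id(f) != frame_id(f + 1.0))    # different features -> different ID
```

The key property shown is determinism: equal feature content always maps to the same ID, which is what makes terminal/cloud reuse possible.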
the dimension reduction of the spatial environment features is carried out based on two criteria of recent reconfigurability and maximum separability. The method for reducing the dimension comprises the following steps:
1. for the conversion of the edge correlation relation data from the original coordinate system to the new coordinate system, the first new coordinate axis selects the direction with the largest variance in the original data, and the second new coordinate axis selects the direction orthogonal to the first coordinate axis and with the largest variance.
2. It is assumed that some hidden variables, not directly observed, participate in generating the observation data, which is taken to be a linear combination of these hidden variables plus some noise. The number of hidden variables may be smaller than the number of observations, i.e., dimension reduction can be achieved by finding the hidden variables.
3. The data is assumed to be a mixed observation of multiple statistically independent data sources, whereas PCA only assumes that the data is uncorrelated. As with factor analysis, dimension reduction can be achieved if the number of data sources is less than the number of observations.
In the spatial image voxelization module, the ID matrix of the edge image is voxelized. As shown in fig. 5, specifically: the ID matrices of the acquired edge images are spliced and concatenated by spatial position according to the image features, generating, on a regular grid in three-dimensional space, elevation values carrying the surface and position features of the spatial image (an elevation value is a digital expression of the terrain-surface morphology attribute information, i.e., a digital description carrying both spatial-position and terrain-attribute features). The module outputs voxel data that can be used for three-dimensional modeling by voxel modeling software. The output voxels have no absolute position coordinates in space, only relative positions, which form positions within the data structure of a single volume image. The smallest output voxel element can be configured by the user, e.g., as a cube, polyhedron, or sphere model. The user can use the generated voxel data for rapid three-dimensional construction that is closer to the appearance of the real object, effectively improving the interpretability of the model.
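A minimal sketch of the regular-grid idea, turning a grid of elevation values into relative-position voxels, assuming unit cubic cells (the function name and cell model are illustrative):

```python
import numpy as np

def grid_to_voxels(elevation, cell=1.0):
    """Turn a regular grid of elevation values into cubic-voxel centre
    positions. Positions are relative, not absolute, matching the
    platform's relative-position voxel output."""
    voxels = []
    rows, cols = elevation.shape
    for i in range(rows):
        for j in range(cols):
            # Stack cubes from the ground plane up to the elevation value.
            for k in range(int(np.ceil(elevation[i, j] / cell))):
                voxels.append((i * cell, j * cell, (k + 0.5) * cell))
    return np.asarray(voxels)

elev = np.array([[1.0, 2.0],
                 [0.0, 1.0]])   # toy elevation grid
vox = grid_to_voxels(elev)
print(vox.shape)  # (4, 3): 1 + 2 + 0 + 1 cubes
```

The coordinates are grid-relative, so downstream modeling software is free to place the volume anywhere in its own scene, consistent with the relative-position property described above.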
For example, the space occupied by the spatial environment model formed by the voxels is divided into a grid, and a triangle-patch distance method is applied directly to the grid cells to determine whether each cell is covered by the model. Specifically: traverse all triangles, compute the distance between each triangle and the voxel grid cell, and compare it against a set threshold to judge whether the cell is covered. In addition, for image stitching in complex environments, the platform adopts an Atlas microprocessor; the GPU in the processor can acquire space and object voxels with a rendering-based voxelization method, rasterizing the triangle patches in the rendering pipeline. Finally, voxelized spatial-environment image information is formed that contains both the surface information of the learning-environment model and internal attributes describing the model; spatial-environment voxels usable for three-dimensional modeling are output and used or stored by voxel modeling software.
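The triangle-to-grid distance test above can be sketched as follows: `point_triangle_dist` is the standard exact point-to-triangle distance (a closest-feature region test over the vertices, edges, and interior), and a cell is marked covered when any triangle lies within the threshold. Function names are illustrative, not the platform's API:

```python
import numpy as np

def point_triangle_dist(p, a, b, c):
    """Exact distance from point p to triangle (a, b, c), via the
    closest-feature (vertex / edge / interior) region test."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return np.linalg.norm(p - a)                      # vertex a closest
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return np.linalg.norm(p - b)                      # vertex b closest
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return np.linalg.norm(p - c)                      # vertex c closest
    if d1 * d4 - d3 * d2 <= 0 and d1 >= 0 and d3 <= 0:
        t = d1 / (d1 - d3)
        return np.linalg.norm(p - (a + t * ab))           # edge ab closest
    if d5 * d2 - d1 * d6 <= 0 and d2 >= 0 and d6 <= 0:
        t = d2 / (d2 - d6)
        return np.linalg.norm(p - (a + t * ac))           # edge ac closest
    if d3 * d6 - d5 * d4 <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return np.linalg.norm(p - (b + t * (c - b)))      # edge bc closest
    n = np.cross(ab, ac)                                  # interior: plane distance
    return abs(ap @ n) / np.linalg.norm(n)

def cell_covered(center, triangles, threshold):
    """A grid cell counts as covered if any triangle lies within threshold."""
    return any(point_triangle_dist(center, *t) <= threshold for t in triangles)

tri = (np.array([0.0, 0.0, 0.0]),
       np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0]))
print(cell_covered(np.array([0.25, 0.25, 0.4]), [tri], 0.5))  # True
print(cell_covered(np.array([5.0, 5.0, 5.0]), [tri], 0.5))    # False
```

Taking the cell centre as the query point is the simplest choice; a conservative variant would test against the cell's corners or inflate the threshold by half the cell diagonal.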
The edge computing platform also comprises a structured storage module that provides a local relational database; all image processing results are integrated into structured storage in the form of subdirectories, the file directory being a data set of image IDs and spatial-image voxels. The data acquired by the edge computing platform can be stored locally and can be synchronized with structured storage systems such as Cassandra, Bigtable, HadoopDB, Megastore, and Dynamo.
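A minimal local sketch of the subdirectory-plus-relational-database layout, using SQLite as a stand-in for the platform's local relational database (the paths, table name, and schema are assumptions for illustration):

```python
import sqlite3
import tempfile
from pathlib import Path

def store_result(root, frame_uid, id_matrix_blob, voxel_blob):
    """Write one frame's results into the subdirectory layout and index
    them in a local relational database."""
    root = Path(root)
    # One subdirectory per result type, as in the file catalog described above.
    for sub in ("image_id", "voxel"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    (root / "image_id" / f"{frame_uid}.bin").write_bytes(id_matrix_blob)
    (root / "voxel" / f"{frame_uid}.bin").write_bytes(voxel_blob)
    # Index both files under the frame's unique ID.
    con = sqlite3.connect(root / "index.db")
    con.execute("CREATE TABLE IF NOT EXISTS frames "
                "(uid TEXT PRIMARY KEY, id_path TEXT, voxel_path TEXT)")
    con.execute("INSERT OR REPLACE INTO frames VALUES (?, ?, ?)",
                (frame_uid, f"image_id/{frame_uid}.bin", f"voxel/{frame_uid}.bin"))
    con.commit()
    con.close()

root = tempfile.mkdtemp()
store_result(root, "ab12cd34", b"\x00" * 16, b"\x01" * 16)
con = sqlite3.connect(Path(root) / "index.db")
n = con.execute("SELECT COUNT(*) FROM frames").fetchone()[0]
con.close()
print(n)  # 1
```

Keeping the blobs on disk and only the paths in the database keeps the local index small, and the same layout can later be mirrored to a cloud store such as the systems named above.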
The voxel data can likewise be integrated into structured storage in the form of subdirectories, including upload to the cloud and backup from local storage to a Micro SD card or local hard disk. In addition, the system can customize the output format of a specific protocol according to the edge computing requirements: provided a storage device is connected to the USB port, the user presses the report-output key and the system outputs the edge computing result, allowing the user to check the image processing progress, results, and so on.
The invention can be used for constructing the spatial environment of AR/VR scenes, such as constructing a learning environment (classroom).
Referring to FIG. 6, the processing flow of the functional components and the artificial-intelligence and edge computing platform of the present invention includes the following steps:
Step one: after the Atlas microprocessor is powered on, the system starts and enters an initialization or setting mode, then enters a self-check mode, performing software and hardware self-checks on peripherals such as the camera, USB interface, local storage hard disk, devices connected to the 40-pin IO port, and battery level; once the self-check passes, the system enters the working mode;
Step two: after entering the working mode, the system detects whether the image input end has an image input instruction; if not, the system enters standby mode and waits to be woken up. In standby, the system shuts down the cloud and local-storage software functions, all parallel algorithm modules are dormant, and only the wake-up detection function remains active;
Step three: after the system is woken up, for each transmitted image, the core processor first segments the image and acquires the region of interest for feature extraction. After extraction is finished, the results are stored locally;
Step four: the system performs association fusion on the output of the feature extraction algorithm, mainly using linear and nonlinear methods.
Step five: after the images have undergone feature extraction and association fusion, each frame is given an ID, forming a specific image ID matrix that is transmitted to the user application or the cloud through an interface;
step six: after the image is subjected to ID, the Atlas microprocessor can carry out voxel processing on the basic space image, and output voxels which can be used for three-dimensional modeling, and donor element modeling software is used.
Step seven: the Atlas microprocessor integrates the data processed by the system's algorithms, structures it, and uploads it to the cloud.
Step eight: after image processing is finished, the system generates an image processing report, consisting mainly of the image IDs and voxel results, presented in catalog form.
Step nine: if battery power is insufficient or power is lost unexpectedly, the algorithm system terminates the current operation and records a work log, which is automatically stored locally; at the next start-up, the unfinished portion is restored to the last interrupt node according to the work log.
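The interrupt-and-resume behavior of step nine can be sketched as a simple work log (the field names and stage names are illustrative, not the platform's actual format):

```python
import json
import tempfile
from pathlib import Path

def save_log(path, stage, frame_index):
    """Record the last completed pipeline node so an interrupted run can
    resume there after a power failure."""
    Path(path).write_text(json.dumps({"stage": stage, "frame": frame_index}))

def resume_point(path, default=("feature_extraction", 0)):
    """Return the node to resume from, or the default on a fresh start."""
    p = Path(path)
    if not p.exists():
        return default
    log = json.loads(p.read_text())
    return log["stage"], log["frame"]

log = Path(tempfile.mkdtemp()) / "work.log"
print(resume_point(log))        # ('feature_extraction', 0) on first boot
save_log(log, "voxelization", 42)
print(resume_point(log))        # ('voxelization', 42) after an interrupt
```

Writing the log after each completed stage bounds the amount of rework to one stage, at the cost of one small local write per stage.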
The invention effectively exploits the combination of image processing algorithms and edge computing to obtain the three-dimensional data required for learning-environment construction more quickly, build the three-dimensional environment rapidly, shorten the modeling response time, and improve bandwidth availability. In addition, it provides users with an edge computing platform that is low-latency, produces image IDs, and is easy to scale, operate, and maintain; supports related artificial-intelligence application development; models the learning environment in real time; and provides shorter response times for image processing tasks such as AR/VR, greatly improving the user experience. Compared with conventional edge platforms, the invention has the following advantages and highlights:
(1) Feature extraction: multiple algorithms are fused and computed locally; edge features are formed and then uploaded to the cloud for further extraction, which effectively improves bandwidth utilization efficiency and feature extraction accuracy;
(2) Association fusion: correlated and uncorrelated processing is performed locally on the edge features to form the edge correlation relationship, which effectively reduces the risk of overfitting and improves cloud training speed;
(3) Image ID conversion: after feature extraction and data association, the edge computing platform performs ID modeling on the image; whether the result is transmitted to the cloud or used locally, the data visualization capability is greatly improved, the data are more visual, and unique edge image IDs are formed;
(4) Spatial-image voxelization: the three-dimensional spatial image is given an ID to form four-dimensional descriptive voxels convenient for modeling, improving the interpretability of the model and adapting to various modeling engines;
(5) Structured storage: edge storage and systematic cloud storage are convenient for both the terminal and cloud computing to use, changing the past disadvantage of relying entirely on cloud storage and computing, improving real-time modeling efficiency and enhancing the experience.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. An edge computing platform for generating voxel data and an edge image feature ID matrix, comprising: a feature extraction module, an association fusion module, an image ID module, a spatial image voxelization module, and a structured storage module; wherein:
the feature extraction module is used for extracting features of the acquired images, generating edge feature minimum subsets, and combining the edge feature minimum subsets of all the images to form feature subsets of the images;
the association fusion module is used for carrying out linear correlation fusion and non-correlation fusion processing on the feature subsets of the image to generate an edge correlation relationship;
the image ID module is used for carrying out ID processing on the edge correlation relationship to generate an edge image ID matrix set;
the space image voxelization module is used for generating voxels according to the edge image ID matrix set so as to perform three-dimensional modeling;
the structured storage module is used for locally storing the ID matrix and the voxels, or transmitting the ID matrix or the voxels to the cloud;
in the association fusion module, performing linear fusion and nonlinear fusion on the feature subsets of the image to generate the edge correlation relationship comprises the following steps:
S31, processing the minimal subsets of edge features of the image in parallel using a linear method, a nonlinear method, and/or a manifold learning method;
S32, performing an optimal orthogonal transformation on the processing result of step S31, performing a variance operation on the feature correlation relationship, removing the feature with the largest variance, and then performing feature sorting;
S33, performing a high-dimensional projection on the matrix obtained after feature sorting and obtaining a vector space for discriminating the optimal image features, so as to extract classification information and compress the feature-space dimension, ensuring that after projection the between-class distance of the image feature subsets in the transformed subspace is maximal and the within-class distance is minimal, and generating a representation of the image feature subsets in this vector space according to the distance or dissimilarity relation between them;
S34, performing principal component analysis on the high-dimensional projection matrix formed from all the images, removing redundancy and errors, and associating the linear and nonlinear feature relationships of all the images, so as to form the edge correlation relationship of the images.
2. The edge computing platform of claim 1, wherein the step of generating a minimal subset of edge features in the feature extraction module comprises:
s21, extracting image features of the image, wherein the image features comprise color features, shape features, texture features, spatial relationship features and graphic meaning features;
s22, representing the color features, the shape features, the texture features, the spatial relationship features and the graphic meaning features by using a feature matrix form, and generating a minimum subset of the edge features of the image according to the image features.
3. The edge computing platform according to claim 1, wherein in step S31, the color, shape, texture, and spatial relationship of the minimal feature subset of the image are subjected to linear fusion transformation to obtain a color transition relationship, a gradient transition relationship, a texture transition relationship, and a spatial-position projection relationship, respectively; and nonlinear fusion operations are performed on the color, shape, texture, and spatial relationship of the minimal feature subset of the image to obtain a nonlinear color transition relationship, gradient transition relationship, texture transition relationship, and spatial-position projection relationship.
4. The edge computing platform according to claim 1, wherein in step S32, according to the result of step S31, the obtained subsets of image features are subjected to optimal orthogonal transformation, variance feature transformation, feature sorting, high-dimensional projection, and principal component analysis, generating an edge correlation relationship associating all the image features.
5. The edge computing platform of claim 1, wherein the linear method comprises a component analysis method, a linear discriminant analysis method, and a multidimensional scaling method.
6. The edge computing platform of claim 1, wherein in the image ID module, the step of generating an ID matrix comprises:
s41, reducing the dimension of the edge correlation relationship of the feature subset of the image through an edge ID algorithm;
s42, performing ID processing on the image to generate an edge image ID matrix set.
7. The edge computing platform of claim 6, wherein the step of dimension reduction comprises:
S411, converting the edge-correlation data into a new coordinate system, wherein the first coordinate axis of the new coordinate system is the direction of maximum variance in the edge-correlation data, and the second coordinate axis is the direction orthogonal to the first axis with the maximum remaining variance;
S412, assuming that hidden variables underlie the observed data; if the number of hidden variables is smaller than the number of observed variables, data dimension reduction is achieved through the hidden variables;
S413, assuming that the observed data are a mixed observation of multiple statistically independent data sources (whereas PCA only assumes the data are uncorrelated); if the number of data sources is smaller than the number of observed variables, the dimension is reduced by projection.
8. The edge computing platform of claim 1, wherein the step of spatially image voxelization module for forming voxels from the set of edge image ID matrices for three-dimensional modeling comprises:
performing spatial-image stitching and concatenation on the edge image ID matrix set to form, in three-dimensional space, a regular grid and voxels usable for three-dimensional modeling, the voxels being used or stored by voxel modeling software.
9. The edge computing platform of claim 1, further comprising a structured storage module for providing a local relational database, integrating image processing results into structured storage in the form of subdirectories, the file directories being data sets of image IDs and spatial-image voxels; the data acquired by the system are stored locally or synchronized with a structured storage system.
CN202010511907.XA 2020-06-08 2020-06-08 Edge computing platform for generating voxel data and edge image feature ID matrix Active CN111681309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010511907.XA CN111681309B (en) 2020-06-08 2020-06-08 Edge computing platform for generating voxel data and edge image feature ID matrix


Publications (2)

Publication Number Publication Date
CN111681309A CN111681309A (en) 2020-09-18
CN111681309B true CN111681309B (en) 2023-07-25

Family

ID=72435697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010511907.XA Active CN111681309B (en) 2020-06-08 2020-06-08 Edge computing platform for generating voxel data and edge image feature ID matrix

Country Status (1)

Country Link
CN (1) CN111681309B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113117344B (en) * 2021-04-01 2023-07-18 广州虎牙科技有限公司 Voxel building generation method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101388020A (en) * 2008-07-07 2009-03-18 华南师范大学 Composite image search method based on content
JP2009140513A (en) * 2002-07-16 2009-06-25 Nec Corp Pattern characteristic extraction method and device for the same
WO2018040099A1 (en) * 2016-08-31 2018-03-08 深圳市唯特视科技有限公司 Three-dimensional face reconstruction method based on grayscale and depth information
CN109949349A (en) * 2019-01-24 2019-06-28 北京大学第三医院(北京大学第三临床医学院) A kind of registration and fusion display methods of multi-modal 3-D image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
GB2559157A (en) * 2017-01-27 2018-08-01 Ucl Business Plc Apparatus, method and system for alignment of 3D datasets


Non-Patent Citations (1)

Title
Multi-focus image fusion algorithm combining NSCT and edge detection; Song Ruixia et al.; Journal of Computer-Aided Design & Computer Graphics; 28(12); 2134-2141 *

Also Published As

Publication number Publication date
CN111681309A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
US20230098115A1 (en) Generating light-source-specific parameters for digital images using a neural network
CN108509848B (en) The real-time detection method and system of three-dimension object
WO2021093453A1 (en) Method for generating 3d expression base, voice interactive method, apparatus and medium
CN107229757B (en) Video retrieval method based on deep learning and Hash coding
Lian et al. Visual similarity based 3D shape retrieval using bag-of-features
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN108171133B (en) Dynamic gesture recognition method based on characteristic covariance matrix
CN111127631B (en) Three-dimensional shape and texture reconstruction method, system and storage medium based on single image
CN108121950B (en) Large-pose face alignment method and system based on 3D model
EP1579378A2 (en) Clustering appearances of objects under varying illumination conditions
US20200057778A1 (en) Depth image pose search with a bootstrapped-created database
US20230326173A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN113808277B (en) Image processing method and related device
CN111507357A (en) Defect detection semantic segmentation model modeling method, device, medium and equipment
CN111862278B (en) Animation obtaining method and device, electronic equipment and storage medium
Feng et al. 3D shape retrieval using a single depth image from low-cost sensors
JP2022095591A (en) Machine-learning for 3d object detection
CN111681309B (en) Edge computing platform for generating voxel data and edge image feature ID matrix
CN112330825A (en) Three-dimensional model retrieval method based on two-dimensional image information
Lei et al. Mesh convolution with continuous filters for 3-d surface parsing
Yanmin et al. Research on ear recognition based on SSD_MobileNet_v1 network
Yin et al. Virtual reconstruction method of regional 3D image based on visual transmission effect
CN110910463B (en) Full-view-point cloud data fixed-length ordered encoding method and equipment and storage medium
CN116881886A (en) Identity recognition method, identity recognition device, computer equipment and storage medium
CN111597367A (en) Three-dimensional model retrieval method based on view and Hash algorithm

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant