CN113011584B - Coding model training method, coding device and storage medium - Google Patents
Coding model training method, coding device and storage medium
- Publication number
- CN113011584B CN113011584B CN202110293408.2A CN202110293408A CN113011584B CN 113011584 B CN113011584 B CN 113011584B CN 202110293408 A CN202110293408 A CN 202110293408A CN 113011584 B CN113011584 B CN 113011584B
- Authority
- CN
- China
- Prior art keywords
- entity
- neural network
- trained
- data
- training
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The embodiments of the present application provide a coding model training method, a coding method, a coding device, and a storage medium. The training method comprises: acquiring training data, where the training data comprises geographic entity vector data labeled with coding information; preprocessing the training data to obtain a preprocessing result, where the preprocessing result comprises the corresponding feature attribute values, the number of feature attributes, and the total number of entity classes; generating a neural network model to be trained according to the preprocessing result, where the number of input features of the input layer of the model equals the number of feature attributes and the output layer outputs as many recognition results as there are entity classes; and training the model to obtain an entity-class coding model. The neural network model can thus be flexibly configured to perform entity coding.
Description
Technical Field
The embodiments of the present application relate to the field of geographic information systems, and in particular to a coding model training method, a coding method, a coding device, and a storage medium.
Background
In the related art, coding assignment for geographic vector data relies mainly on manual identification: after geographic elements are drawn from remote-sensing images or three-dimensional geographic data, coding attributes are assigned to the drawn entities by hand. In deep-learning-based approaches, the input data of a fixed-architecture network must have a uniform number of feature attributes; that is, the related art uses a single, uniformly constructed neural network for identification and coding. When the entity to be coded actually requires few feature attributes, a network that strictly fixes the number of input attributes carries a large number of useless parameters; when the entity requires more feature attributes than the network's input width, the data cannot be input at all. The related-art approach to constructing and training the deep-learning model therefore places strict requirements on the input data and cannot flexibly configure the neural network model that is actually needed.

Therefore, how to flexibly configure a neural network model to perform coding for multiple kinds of entities is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a coding model training method, a coding method, a coding device, and a storage medium. At least some embodiments flexibly configure a neural network model to identify entity data, so that coding can be performed according to the identification result.
In a first aspect, a coding model training method comprises: acquiring training data, where the training data comprises geographic vector data labeled with coding information; preprocessing the training data to obtain a preprocessing result, where the preprocessing result comprises the corresponding feature attribute values, the number of feature attributes, and the total number of entity classes; generating a neural network model to be trained according to the preprocessing result, where the number of input features of the input layer equals the number of feature attributes and the output layer outputs as many recognition results as there are entity classes; and training the neural network model to obtain an entity-class coding model.
In this way, the input layer and output layer of the neural network model to be trained are adaptively constructed according to the number of feature attributes in the training data and the total number of entity classes to be coded, yielding an entity-class coding model. The model accepts inputs with varying numbers of feature attributes and category labels, improves the degree of automatic identification of various entities, effectively prevents omissions, and greatly saves manual effort; the advantage is especially clear when processing large amounts of entity data, so data-processing efficiency is greatly improved.
With reference to the first aspect, in one implementation, preprocessing the training data to obtain a preprocessing result comprises: extracting at least one feature attribute from the training data, where each feature attribute describes a basic element composing the entity; calculating the feature attribute value corresponding to each extracted feature attribute; and taking the calculated feature attribute values as the preprocessing result.

With reference to the first aspect, in one implementation, before extracting the at least one feature attribute from the training data, the method further comprises: acquiring at least one self-selected feature attribute chosen by the user in advance according to actual requirements, where the at least one feature attribute includes the self-selected attributes. Extracting the at least one feature attribute then means extracting the self-selected feature attributes from the training data; calculating the feature attribute values means calculating the value corresponding to each self-selected feature attribute; and the self-selected feature attribute values are taken as the preprocessing result.
In this way, the user customizes the training data by pre-selecting feature attributes, and the training data is converted into the corresponding self-selected feature attribute values. The model automatically adapts to data generated from different feature attributes, so different neural network models can be trained on whatever feature attributes the user chooses.
With reference to the first aspect, in one implementation, generating the neural network model to be trained according to the preprocessing result comprises: inputting the self-selected feature attribute values into the neural network model to be trained, and training it.

In this way, training the model with the self-selected feature attribute values as input yields a neural network model matched to the feature attributes the user selected, so entity data is automatically identified and coded according to the identification result.
With reference to the first aspect, in one implementation manner, the geographic vector data is composed of vertices, line segments or faces.
In a second aspect, a coding method comprises: receiving input entity data to be coded, where the data comprises all constituent elements of at least one entity, obtained by parsing the entity to be coded; and coding the entity data using the entity-class coding model obtained by the method of the first aspect or any of its implementations.

In this way, entity data to be coded can be coded with the entity-class coding model.
In a third aspect, a coding model training apparatus comprises: an acquisition unit configured to acquire training data, where the training data comprises geographic vector data labeled with coding information; a preprocessing unit configured to preprocess the training data to obtain a preprocessing result, where the preprocessing result comprises the feature attribute values corresponding to the training data, the number of feature attributes, and the total number of entity classes; a generation unit configured to generate a neural network model to be trained at least according to the preprocessing result, where the number of input nodes of the input layer equals the number of feature attributes and the output layer outputs as many recognition results as there are entity classes; and a training unit configured to train the model to obtain an entity-class coding model.
With reference to the third aspect, in one embodiment, the preprocessing unit is specifically configured to: extract at least one feature attribute from the training data; calculate the feature attribute value corresponding to each extracted attribute; and take the feature attribute values as the preprocessing result.

With reference to the third aspect, in one embodiment, the preprocessing unit is further configured to: acquire at least one self-selected feature attribute chosen by the user in advance according to actual requirements, where the at least one feature attribute includes the self-selected attributes; extract the self-selected feature attributes from the training data; calculate the value corresponding to each self-selected feature attribute; and take the self-selected feature attribute values as the preprocessing result.

With reference to the third aspect, in one embodiment, the generation unit is specifically configured to: input the self-selected feature attribute values into the neural network model to be trained, and train it.

With reference to the third aspect, in one embodiment, the geographic vector data is composed of vertices, line segments, or faces.
In a fourth aspect, a coding apparatus comprises: a receiving unit configured to receive input entity data to be coded, where the data comprises all constituent elements of at least one entity, obtained by parsing the entity to be coded; and a coding unit configured to code the entity data using the entity-class coding model obtained by the method of any implementation of the first aspect.
In a fifth aspect, an electronic device comprises a processor, a memory, and a bus. The processor is connected to the memory via the bus, and the memory stores computer-readable instructions which, when executed by the processor, implement the method of any of the first aspect, any implementation of the first aspect, the second aspect, or any implementation of the second aspect.

In a sixth aspect, a computer-readable storage medium stores a computer program which, when executed, implements the method of any of the first aspect, any implementation of the first aspect, the second aspect, or any implementation of the second aspect.
Drawings
FIG. 1 is a flow chart of a coding model training method according to an embodiment of the present application;

FIG. 2 is a flow chart of a coding method according to an embodiment of the present application;

FIG. 3 is a diagram of the internal units of a coding model training apparatus according to an embodiment of the present application;

FIG. 4 is a diagram of the internal units of a coding apparatus according to an embodiment of the present application;

FIG. 5 is a block diagram of the internal units of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions are described below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the application. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the claimed scope, but merely represents selected embodiments; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of the present application.
The method steps in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiments of the present application can be applied to various entity identification scenarios, for example, scenarios in which geographic entities are identified and coded after being drawn from remote-sensing images or three-dimensional geographic data, such as identifying and coding geographic entities like mountains and rivers on a map. Taking the coding of geographic entities as an example, the problem with the related-art method is as follows. In the related art, after a geographic entity is drawn into geographic entity vector data from remote-sensing images or three-dimensional geographic data, coding attributes are assigned to the entity; during deep learning, the input data must have a uniform number of feature attributes, and a single neural network is used for identification and coding. As a result, when few feature attributes are actually required there are many useless parameters, and when many are required the data cannot be input at all, so the input data is heavily restricted and the neural network model cannot be flexibly configured.
To address at least this problem, some embodiments of the present application preprocess the training data, construct the neural network model to be trained according to the preprocessing result (for example, setting the number of input-layer nodes and output-layer nodes), and perform entity-class identification and coding of the entity to be coded with the constructed network. For example, in some embodiments, at least one self-selected feature attribute chosen by the user according to actual needs is input into the model to be trained, and after training a model for the corresponding recognizable entity classes is obtained, so the neural network model can be flexibly configured to code entity data. It can be appreciated that the application scenarios of the embodiments are not limited to these.
A coding model training method and a coding method, both performed by an electronic device, are described below. As shown in fig. 1, the coding model training method comprises the following steps.
S110, training data is acquired.
In one embodiment, training data is obtained, where the training data includes geographic vector data labeled with coding information.

Each item of geographic vector data represents an element constituting an entity, for example: a vertex of a mountain, a face of a house, or a line segment of a road.

Before the training data is acquired, the entity data needs to be labeled, and the labeled entity data serves as the training data. In one embodiment, the constituent elements (points, lines, faces, or the like) of each entity class in the entity data are labeled with the corresponding code. For example, each line segment composing a road entity is labeled with code 001, and each face composing a house is labeled with code 002. As another example, a house has 6 faces, and the class of those 6 faces is labeled 110; a mountain is composed of 100 vertices, and the class of those 100 vertices is labeled 120.

In one embodiment, the geographic vector data is formed of vertices, line segments, or faces. For example, if the labeled entity is a house, the vector data corresponding to the house entity may be the faces, the vertices, and the line segments composing the house, or, alternatively, only the faces, only the vertices, or only the line segments.
The embodiment of S120 is exemplarily set forth below.
S120, preprocessing the training data to obtain a preprocessing result.
In one embodiment, the training data is preprocessed to obtain a preprocessing result, where the preprocessing result comprises the feature attributes present in the training data, the number of those feature attributes, and the number of entity classes that need to be coded.
The feature attributes corresponding to the training data may include the perimeter, area, number of vertices, and so on of the constituent elements of each entity, and the entity classes include, for example, mountain, street, and river, which may be denoted by codes such as 001, 002, and 003; the embodiments of the present application are not limited thereto.
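As an illustration only (not part of the patent), the two counts carried by the preprocessing result can be derived from labeled samples as follows; the field names `features` and `code` are hypothetical placeholders:

```python
def preprocess_counts(samples):
    """Derive, from labeled training samples, the two quantities the model
    construction step needs: the number of feature attributes and the total
    number of entity classes (distinct codes)."""
    n_features = len(samples[0]["features"])
    n_classes = len({s["code"] for s in samples})
    return n_features, n_classes

# Two labeled samples with three feature attributes and two class codes
samples = [
    {"features": {"perimeter": 4.0, "area": 1.0, "vertices": 4}, "code": "002"},
    {"features": {"perimeter": 9.3, "area": 0.0, "vertices": 2}, "code": "001"},
]
print(preprocess_counts(samples))  # (3, 2)
```

The attribute count later fixes the number of input-layer nodes, and the distinct-code count fixes the number of output-layer nodes.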
Preprocessing formulates a feature-extraction scheme for the geographic entity vector data in a visual manner; executing the scheme yields the preprocessing result. The feature-extraction scheme is as follows.
in one embodiment, preprocessing the training data to obtain a preprocessing result includes: extracting at least one characteristic attribute in the training data; calculating at least one characteristic attribute value corresponding to the at least one characteristic attribute respectively; and taking the at least one characteristic attribute value as the preprocessing result.
It should be noted that the at least one feature attribute includes all feature attributes and part of feature attributes, which are related to the entity type to be marked.
The preprocessing of the training data includes extracting at least one of the characteristic attributes in the training data, wherein the characteristic attributes may include, but are not limited to: the method comprises the steps of calculating at least one characteristic attribute value corresponding to at least one extracted characteristic attribute, and taking the calculated at least one characteristic attribute value as a preprocessing result, wherein the at least one characteristic attribute value corresponds to the geometric type, closing state, line width, perimeter, area, top point number and the like of an entity.
For example, three feature attributes are extracted from the training data: the perimeter of the edges composing the entity, the area of the faces composing the entity, and the number of vertices composing the entity. The corresponding feature attribute values are then computed, namely the perimeter value, the area value, and the vertex count. The input layer of a neural network model constructed for these three feature attributes has three input nodes, receiving the perimeter feature, the area feature, and the vertex-count feature, respectively.
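A minimal Python sketch of computing these three feature attribute values for a closed polygonal face; using the shoelace formula for the area is an assumption, since the patent does not specify computation formulas:

```python
import math

def polygon_features(vertices):
    """Compute the three feature attribute values named above for a closed
    polygon given as (x, y) vertices: perimeter, area (shoelace formula),
    and vertex count."""
    n = len(vertices)
    perimeter = 0.0
    signed_area = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]          # wrap to close the polygon
        perimeter += math.hypot(x1 - x0, y1 - y0)
        signed_area += x0 * y1 - x1 * y0
    return [perimeter, abs(signed_area) / 2.0, float(n)]

# Unit square: perimeter 4, area 1, 4 vertices
print(polygon_features([(0, 0), (1, 0), (1, 1), (0, 1)]))  # [4.0, 1.0, 4.0]
```

The returned list is exactly the shape of vector the three-node input layer described above would receive.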
In one embodiment, before the at least one feature attribute is extracted from the training data, the method further comprises: acquiring at least one self-selected feature attribute chosen by the user in advance according to actual requirements, where the at least one feature attribute includes the self-selected attributes. Correspondingly, S120 includes: extracting the self-selected feature attributes from the training data; calculating the value corresponding to each self-selected feature attribute; and taking the self-selected feature attribute values as the preprocessing result.

It can be understood that in some embodiments, before feature extraction, visualization software is provided that lets the user select feature attributes. The user may combine attributes according to actual requirements, and the set of attribute types can also be extended; extended attributes are likewise displayed by the visualization software for selection. After the user pre-selects at least one self-selected feature attribute, those attributes are obtained and extracted, the corresponding self-selected feature attribute values are calculated, and the values are taken as the preprocessing result.
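A sketch of the self-selection step, assuming a fixed catalogue of attribute names exposed by the visualization software; the names and the record layout here are hypothetical:

```python
# Hypothetical catalogue of attributes the visualization software could offer
ALL_FEATURES = {"geometry_type", "closed", "line_width",
                "perimeter", "area", "vertex_count"}

def select_features(record, chosen):
    """Keep only the user's self-selected attributes from one data record.
    Unknown attribute names are rejected so the later model construction
    always sees a consistent attribute count."""
    unknown = set(chosen) - ALL_FEATURES
    if unknown:
        raise ValueError(f"unsupported attributes: {sorted(unknown)}")
    return {k: record[k] for k in chosen}

record = {"geometry_type": 2, "closed": 1, "line_width": 0.5,
          "perimeter": 4.0, "area": 1.0, "vertex_count": 4}
print(select_features(record, ["perimeter", "area", "vertex_count"]))
# {'perimeter': 4.0, 'area': 1.0, 'vertex_count': 4}
```

Extending the attribute catalogue then amounts to adding names to `ALL_FEATURES` and computing the corresponding values during preprocessing.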
In this way, the user customizes the training data by pre-selecting feature attributes; the training data is converted into the corresponding self-selected feature attribute values, the model automatically adapts to data generated from different feature attributes, and different neural network models can be trained on the feature attributes the user chooses.
The embodiment of S130 is exemplarily set forth below.
S130, generating a neural network model to be trained according to the preprocessing result.
In one embodiment, a neural network model to be trained is generated according to the preprocessing result, where the number of input features of the input layer of the model equals the number of feature attributes, and the output layer outputs as many recognition results as there are entity classes.

It should be noted that the electronic device does not output multiple entity identification results on every execution of the method; rather, the output layer has the capacity to output as many recognition results as there are entity classes.

The number of feature attributes in the preprocessing result is used as the number of input-layer nodes of the neural network model to be trained, and the number of labeled entity classes is used as the number of output-layer nodes. In one embodiment, the number of hidden layers may be reduced to increase the response speed of the model, or increased to improve its accuracy. In another embodiment, the C++ neural network framework LibTorch may be introduced, applying machine-learning techniques to complex problems in the surveying and mapping field; the data can then be trained and analyzed on a single machine at the user side, without networked training through a server, which avoids the possibility of leaking confidential data.
For example, if the total number of feature attributes in the preprocessing result is n and the total number of labeled entity classes is m, the generated network has n input-layer nodes and m output nodes. As one example, the generated network comprises 5 fully connected layers: the first (input) layer takes n attributes and outputs n values; the second layer takes n values and outputs 2n values; the third layer takes 2n values and outputs 3n values; the fourth layer takes 3n values and outputs 2n values; and the fifth (output) layer takes 2n values and outputs m values. The embodiments of the present application are not limited thereto; n and m are natural numbers greater than or equal to 1.
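The five-layer fully connected architecture above can be sketched in plain Python; weights are randomly initialized, and backpropagation and the LibTorch implementation are omitted, so this only illustrates how the layer shapes follow from the preprocessing result:

```python
import math
import random

def layer_sizes(n, m):
    """The five fully connected layers described above: n -> n -> 2n -> 3n -> 2n -> m."""
    dims = [n, n, 2 * n, 3 * n, 2 * n, m]
    return list(zip(dims, dims[1:]))

def build_model(n, m, seed=0):
    """Randomly initialise a weight matrix and bias vector per layer."""
    rng = random.Random(seed)
    return [([[rng.gauss(0, 0.1) for _ in range(b)] for _ in range(a)],
             [0.0] * b)
            for a, b in layer_sizes(n, m)]

def forward(model, x):
    """ReLU on hidden layers; softmax on the output layer, so the m outputs
    can be read as per-class recognition scores."""
    for i, (w, b) in enumerate(model):
        x = [sum(xi * w[r][c] for r, xi in enumerate(x)) + b[c]
             for c in range(len(b))]
        if i < len(model) - 1:
            x = [max(v, 0.0) for v in x]
    mx = max(x)
    e = [math.exp(v - mx) for v in x]
    s = sum(e)
    return [v / s for v in e]

model = build_model(n=3, m=4)             # 3 feature attributes, 4 entity classes
scores = forward(model, [4.0, 1.0, 4.0])  # e.g. perimeter, area, vertex count
print(len(scores))                        # 4, one score per entity class
```

Swapping in different n and m changes only the layer shapes, which is exactly the flexible configuration the embodiments describe.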
In one embodiment, generating the neural network model to be trained according to the preprocessing result includes: inputting the self-selected feature attribute values into the neural network model to be trained, and training it.

The number of self-selected feature attribute values chosen by the user according to requirements is used as the number of input-layer nodes of the neural network model to be trained, and the model is then trained.

In this way, training the model with the self-selected feature attribute values as input yields a neural network model matched to the feature attributes the user selected, so entity data is automatically identified and coded according to the identification result.
The embodiment of S140 is exemplarily set forth below.
S140, training the neural network model to be trained to obtain an entity-class coding model.
After the neural network model to be trained is obtained, it is trained to obtain the entity-class coding model.

It should be noted that an entity class is a category of geographic entity objects; for example, entity classes include mountain, river, and so on. Each entity class is characterized by a predefined identifier, i.e., the entity's code. The application does not limit the identifier to a specific type; as one example, digital identifiers are used, e.g., 001 denotes mountain and 002 denotes river. The embodiments of the present application are not limited thereto.

In this way, the input layer and output layer of the neural network model to be trained are adaptively generated according to the number of feature attributes in the training data and the number of entity classes, yielding an entity-class identification model. Entity data can thus be identified automatically, omissions are effectively prevented, and manual effort is greatly saved; the advantage is clear when processing large amounts of entity data, greatly improving data-processing efficiency.
The method by which the electronic device trains the coding model has been described above; a specific embodiment of an encoding method is described below.
As shown in fig. 2, an encoding method in the present application includes:
S210, receiving input entity data to be encoded.
In one embodiment, input entity data to be encoded is received, wherein the entity data to be encoded includes all constituent elements of at least one entity, the constituent elements being obtained by parsing a map to be encoded.
The entity data to be encoded is parsed from the map to be encoded and input to the entity class encoding model; after the electronic device loads the entity class encoding model and its parameters, it receives the input entity data to be encoded.
It should be noted that the entity data to be encoded may be formed of vertices, line segments, or planes, and the constituent elements are the points, lines, planes, etc. of the geographic entity vector data; the embodiments of the present application are not limited thereto.
S220, encoding the entity data to be encoded by using an entity class encoding model.
After the electronic device receives the entity data to be encoded, it classifies and identifies the data using the trained entity class encoding model and encodes the entity data according to the classification result.
As an example of the encoding method, the entity data to be encoded received by the entity class encoding model includes vertices and line segments; using the model, the vertices representing houses are encoded as 001 and the line segments representing roads are encoded as 002.
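That classify-then-encode step can be sketched as follows. The stand-in predictor and the house/road codes mirror the example in the text; the function names and feature representation are invented for illustration, not the patent's:

```python
def encode_entity_data(predict_class, entities, class_codes):
    """Run each entity through the trained entity class coding model
    (represented here by `predict_class`) and attach the digital code
    of the predicted class."""
    return [(features, class_codes[predict_class(features)])
            for features in entities]

# Illustrative stand-in for the trained model: classify by geometric type.
codes = {"house": "001", "road": "002"}
entities = [{"geometry": "vertex"}, {"geometry": "line_segment"}]
predict = lambda f: "house" if f["geometry"] == "vertex" else "road"

encoded = encode_entity_data(predict, entities, codes)
# vertices representing houses get "001", road line segments get "002"
```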
Having described a specific embodiment of the encoding method, an apparatus for training a model for encoding geographic vector data is described below.
As shown in fig. 3, an apparatus 300 for coding model training includes: an acquisition unit 310, a preprocessing unit 320, a generation unit 330 and a training unit 340.
In one embodiment, an apparatus for coding model training comprises: an acquisition unit configured to acquire training data, wherein the training data includes entity data in which each piece of geographic vector data on a map is marked with a code; a preprocessing unit configured to preprocess the training data to obtain a preprocessing result, wherein the preprocessing result includes the characteristic attributes corresponding to the training data, the number of characteristic attributes, and the total number of entity categories; a generation unit configured to generate a neural network model to be trained according to the preprocessing result, wherein the number of input nodes of the input layer of the neural network model to be trained equals the number of characteristic attributes, and the output layer of the neural network model to be trained outputs recognition results equal in number to the total number of entity categories; and a training unit configured to train the neural network model to be trained to obtain an entity class coding model.
In one embodiment, the preprocessing unit is specifically configured to: extract at least one characteristic attribute from the training data; calculate at least one characteristic attribute value respectively corresponding to the at least one characteristic attribute; and take the at least one characteristic attribute value as the preprocessing result.
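The characteristic attribute values named elsewhere in the text (vertices, closedness, perimeter, area) can be computed directly from the vector geometry. A minimal sketch follows, assuming entities are given as lists of 2-D coordinates; the function and attribute names are illustrative, not taken from the patent:

```python
import math

def characteristic_attribute_values(vertices, closed):
    """Compute example characteristic attribute values for one entity:
    vertex count, closedness, perimeter, and (for closed shapes)
    area via the shoelace formula."""
    pts = list(vertices) + ([vertices[0]] if closed else [])
    perimeter = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    area = 0.0
    if closed:
        area = abs(sum(x1 * y2 - x2 * y1
                       for (x1, y1), (x2, y2) in zip(pts, pts[1:]))) / 2.0
    return {"num_vertices": len(vertices), "closed": float(closed),
            "perimeter": perimeter, "area": area}
```

For a closed unit square this yields a perimeter of 4 and an area of 1; for an open polyline the area attribute stays 0.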
In one embodiment, the preprocessing unit is specifically configured to: acquire at least one self-selected characteristic attribute chosen by the user in advance according to actual requirements, wherein the at least one characteristic attribute includes the at least one self-selected characteristic attribute. The extracting at least one characteristic attribute from the training data then comprises extracting the self-selected characteristic attributes; the calculating at least one characteristic attribute value comprises calculating the self-selected characteristic attribute values respectively corresponding to the self-selected characteristic attributes; and the taking the at least one characteristic attribute value as the preprocessing result comprises taking the self-selected characteristic attribute values as the preprocessing result.
In one embodiment, the generation unit is specifically configured to: input the self-selected characteristic attribute values into the neural network model to be trained and train the model.
In the embodiment of the present application, the modules shown in fig. 3 can implement each process of the method embodiment of fig. 1; the operations and/or functions of the individual modules realize the corresponding flows of that embodiment. For details, refer to the description of the method embodiments above; detailed descriptions are omitted here to avoid repetition.
The foregoing describes an apparatus for coding model training; an apparatus for encoding is described below.
As shown in fig. 4, an encoding apparatus 400 includes: a receiving unit 410 and an encoding unit 420.
In one embodiment, an apparatus for encoding comprises: a receiving unit configured to receive input entity data to be encoded, wherein the entity data to be encoded includes all constituent elements of at least one entity, the constituent elements being obtained by parsing the geographic entity to be encoded; and an encoding unit configured to encode the entity data to be encoded using the entity class encoding model obtained by the method described in the first aspect and any one of its embodiments.
In the embodiment of the present application, the modules shown in fig. 4 can implement each process of the method embodiment of fig. 2; the operations and/or functions of the individual modules realize the corresponding flows of that embodiment. For details, refer to the description of the method embodiments above; detailed descriptions are omitted here to avoid repetition.
As shown in fig. 5, an embodiment of the present application provides an electronic device 500, including: a processor 510, a memory 520 and a bus 530. The processor is connected to the memory via the bus; the memory stores computer readable instructions which, when executed by the processor, implement the method of any one of the above embodiments. For details, refer to the description of the method embodiments above; detailed descriptions are omitted here to avoid redundancy.
The bus is used to enable direct connection and communication between these components. The processor in the embodiments of the application may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc. The memory stores computer readable instructions which, when executed by the processor, perform the method described in the above embodiments.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative; the electronic device may include more or fewer components than shown in fig. 5, or have a different configuration. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof.
An embodiment of the present application further provides a computer readable storage medium on which a computer program is stored; when executed by a server, the program implements the method of any one of the foregoing embodiments. For details, refer to the description of the method embodiments above; detailed descriptions are omitted here to avoid redundancy.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope. It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be defined or explained again in subsequent figures.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions readily conceived by a person skilled in the art within the technical scope disclosed herein are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method of coding model training, the method comprising:
obtaining training data, wherein the training data comprises geographic vector data marked with coding information, and the geographic vector data comprises at least one of vertexes, line segments and planes;
preprocessing the training data to obtain a preprocessing result, wherein the preprocessing result comprises corresponding characteristic attribute values, the number of characteristic attributes and the total number of entity categories, and the characteristic attributes comprise at least one or more of: geometric type, whether the elements of the entity are closed, line width, perimeter, area and vertices;
generating a neural network model to be trained according to the preprocessing result, wherein the number of input features of an input layer of the neural network model to be trained is equal to the number of the characteristic attributes, and an output layer of the neural network model to be trained is used for outputting recognition results equal in number to the total number of the entity categories;
training the neural network model to be trained to obtain an entity class coding model;
wherein the entity class coding model is used for marking the coding of the constituent elements included by each entity of different classes.
2. The method of claim 1, wherein preprocessing the training data to obtain a preprocessing result comprises:
extracting at least one characteristic attribute in the training data;
calculating at least one characteristic attribute value corresponding to the at least one characteristic attribute respectively;
and taking the at least one characteristic attribute value as the preprocessing result.
3. The method of claim 2, wherein prior to said extracting at least one characteristic attribute in the training data, the method further comprises:
acquiring at least one self-selected characteristic attribute chosen by a user in advance according to actual requirements, wherein the at least one characteristic attribute comprises the at least one self-selected characteristic attribute;
the extracting at least one characteristic attribute in the training data comprises:
extracting the self-selected characteristic attributes from the training data;
the calculating at least one characteristic attribute value corresponding to the at least one characteristic attribute respectively comprises:
calculating self-selected characteristic attribute values respectively corresponding to the self-selected characteristic attributes;
the taking the at least one characteristic attribute value as the preprocessing result comprises:
taking the self-selected characteristic attribute values as the preprocessing result.
4. A method according to claim 3, wherein generating a neural network model to be trained from the preprocessing results comprises:
inputting the self-selected characteristic attribute values into the neural network model to be trained, and training the neural network model to be trained.
5. The method of claim 1, wherein the geographic vector data is comprised of vertices, line segments, or faces.
6. A method of encoding, the method comprising:
receiving input entity data to be encoded, wherein the entity data to be encoded comprises all constituent elements of at least one entity, and all the constituent elements are obtained by analyzing the entity to be encoded;
an entity class coding model obtained according to the method of any of claims 1-5, for coding the entity data to be coded.
7. An apparatus for coding model training, the apparatus comprising:
an acquisition unit configured to acquire training data, wherein the training data includes geographic vector data labeled with coding information, and the geographic vector data is composed of at least one of a vertex, a line segment and a plane;
a preprocessing unit configured to preprocess the training data to obtain a preprocessing result, wherein the preprocessing result comprises the characteristic attribute values corresponding to the training data, the number of characteristic attributes and the total number of entity categories, and the characteristic attributes comprise at least one or more of: geometric type, whether the elements of the entity are closed, line width, perimeter, area and vertices;
a generation unit configured to generate a neural network model to be trained according to the preprocessing result, wherein the number of input features of an input layer of the neural network model to be trained is equal to the number of the characteristic attributes, and an output layer of the neural network model to be trained is used for outputting recognition results equal in number to the total number of the entity categories;
the training unit is configured to train the neural network model to be trained to obtain an entity class coding model;
wherein the entity class coding model is used for marking the coding of the constituent elements included by each entity of different classes.
8. An apparatus for encoding, the apparatus comprising:
the receiving unit is configured to receive input entity data to be encoded, wherein the entity data to be encoded comprises all constituent elements of at least one entity, and the all constituent elements are obtained through analysis of the entity to be encoded;
an encoding unit configured to encode the entity data to be encoded using an entity class encoding model obtained by the method according to any one of claims 1-5.
9. An electronic device, comprising: a processor, a memory, and a bus;
the processor is connected to the memory via the bus, the memory storing computer readable instructions which, when executed by the processor, are adapted to carry out the method of any one of claims 1-6.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a server, implements the method according to any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110293408.2A CN113011584B (en) | 2021-03-18 | 2021-03-18 | Coding model training method, coding device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113011584A CN113011584A (en) | 2021-06-22 |
CN113011584B true CN113011584B (en) | 2024-04-16 |
Family
ID=76402741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110293408.2A Active CN113011584B (en) | 2021-03-18 | 2021-03-18 | Coding model training method, coding device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113011584B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103823845A (en) * | 2014-01-28 | 2014-05-28 | 浙江大学 | Method for automatically annotating remote sensing images on basis of deep learning |
CN107273502A (en) * | 2017-06-19 | 2017-10-20 | 重庆邮电大学 | A kind of image geographical marking method learnt based on spatial cognition |
CN108764263A (en) * | 2018-02-12 | 2018-11-06 | 北京佳格天地科技有限公司 | The atural object annotation equipment and method of remote sensing image |
CN110309856A (en) * | 2019-05-30 | 2019-10-08 | 华为技术有限公司 | Image classification method, the training method of neural network and device |
CN110390340A (en) * | 2019-07-18 | 2019-10-29 | 暗物智能科技(广州)有限公司 | The training method and detection method of feature coding model, vision relationship detection model |
CN110909768A (en) * | 2019-11-04 | 2020-03-24 | 北京地平线机器人技术研发有限公司 | Method and device for acquiring marked data |
Also Published As
Publication number | Publication date |
---|---|
CN113011584A (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110825968A (en) | Information pushing method and device, storage medium and computer equipment | |
CN116978011B (en) | Image semantic communication method and system for intelligent target recognition | |
CN115170575B (en) | Method and equipment for remote sensing image change detection and model training | |
CN115630771B (en) | Big data analysis method and system applied to intelligent construction site | |
CN111652181A (en) | Target tracking method and device and electronic equipment | |
CN110991298A (en) | Image processing method and device, storage medium and electronic device | |
CN111639700A (en) | Target similarity recognition method and device, computer equipment and readable storage medium | |
CN111241298A (en) | Information processing method, apparatus and computer readable storage medium | |
CN111291695B (en) | Training method and recognition method for recognition model of personnel illegal behaviors and computer equipment | |
CN114612902A (en) | Image semantic segmentation method, device, equipment, storage medium and program product | |
CN112016617A (en) | Fine-grained classification method and device and computer-readable storage medium | |
CN115049919B (en) | Remote sensing image semantic segmentation method and system based on attention regulation | |
CN117036060A (en) | Vehicle insurance fraud recognition method, device and storage medium | |
CN111753729A (en) | False face detection method and device, electronic equipment and storage medium | |
CN111741329A (en) | Video processing method, device, equipment and storage medium | |
CN114998583A (en) | Image processing method, image processing apparatus, device, and storage medium | |
CN113011584B (en) | Coding model training method, coding device and storage medium | |
CN112183303A (en) | Transformer equipment image classification method and device, computer equipment and medium | |
CN115272667B (en) | Farmland image segmentation model training method and device, electronic equipment and medium | |
CN112686996B (en) | Game mountain terrain creation method, model training method and related devices | |
CN113869431A (en) | False information detection method, system, computer device and readable storage medium | |
CN112785275A (en) | Industrial APP identification method and device | |
CN117271819B (en) | Image data processing method and device, storage medium and electronic device | |
CN117611877B (en) | LS-YOLO network-based remote sensing image landslide detection method | |
CN117541883B (en) | Image generation model training, image generation method, system and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||