CN116796203A - Spatial scene similarity comparison method, device and storage medium

Spatial scene similarity comparison method, device and storage medium

Info

Publication number
CN116796203A
CN116796203A CN202311056789.8A
Authority
CN
China
Prior art keywords
space
similarity
spatial
scene
objects
Prior art date
Legal status
Granted
Application number
CN202311056789.8A
Other languages
Chinese (zh)
Other versions
CN116796203B (en)
Inventor
郭旦怀
于萦雪
于珊珊
Current Assignee
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Chemical Technology
Priority to CN202311056789.8A
Publication of CN116796203A
Application granted
Publication of CN116796203B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/27 Regression, e.g. linear or logistic regression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/12 Use of codes for handling textual entities
    • G06F40/151 Transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a spatial scene similarity comparison method, a device, and a storage medium. The method, applied to an electronic device, comprises the following steps: given spatial scene map data, extracting spatial objects, together with their morphology and features, from the spatial scene map data; extracting the spatial relationships between the extracted spatial objects; constructing a spatial scene description triplet set from the spatial objects, their morphology and features, and the spatial relationships between them; generating corresponding description text data from the triplet set by a supervised deep learning method; constructing, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes and determine a similarity value between the description texts; and establishing a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and computing the similarity values between the spatial scenes.

Description

Spatial scene similarity comparison method, device and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular to a spatial scene similarity comparison method, a device, and a storage medium.
Background
A spatial scene is a collection of spatial objects and the relationships among them. In practice, a spatial scene typically contains multiple spatial objects, each with its own shape, perimeter, area, and so on, and each carrying specific semantics; at the same time, various spatial relationships hold between the objects, so evaluating a spatial scene is a complex problem. Spatial scene similarity is an important indicator that quantifies how far two spatial scenes deviate from complete equivalence. Comparing the similarity between spatial scenes enables applications such as similar-scene search, building-layout reference, and reconstruction of remembered scenes.
At present, research on spatial scene similarity is largely limited to directly computing and comparing the morphology, features, and spatial relationships of the spatial objects in each scene. Because a spatial scene contains many complex elements, similarities computed this way tend to ignore the information carried among multiple spatial objects, require complicated computation steps, and yield poor results.
Disclosure of Invention
The disclosure provides a spatial scene similarity comparison method, a device and a storage medium.
A first aspect of the embodiments of the present disclosure provides a spatial scene similarity comparison method, applied to an electronic device, the method comprising:
given spatial scene map data, extracting spatial objects, together with their morphology and features, from the spatial scene map data;
extracting the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
constructing a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
generating corresponding description text data from the triplet set by a supervised deep learning method;
constructing, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes, and determining a similarity value between the description texts;
and establishing a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and computing the similarity values between the spatial scenes.
In the above scheme, the self-attention-based model refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
In the above scheme, the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
A second aspect of the embodiments of the present disclosure provides a spatial scene similarity comparison apparatus, applied to an electronic device, the apparatus comprising:
a first extraction module, configured to, given spatial scene map data, extract spatial objects, together with their morphology and features, from the spatial scene map data;
a second extraction module, configured to extract the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
a construction module, configured to construct a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
a generation module, configured to generate corresponding description text data from the triplet set by a supervised deep learning method;
a determining module, configured to construct, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes and determine a similarity value between the description texts;
and a calculation module, configured to establish a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and to compute the similarity values between the spatial scenes.
In the above scheme, the self-attention-based model refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
In the above scheme, the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
A third aspect of the embodiments of the present disclosure provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a computer, enable the computer to perform the spatial scene similarity comparison method of any of the foregoing aspects.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects:
First, the embodiments of the present disclosure provide, for the first time, a method that converts the problem of comparing spatial scene similarity into a comparison of description-text similarity, evaluating the similarity of the corresponding spatial scenes by comparing the similarity of their texts. This has two benefits:
a. It avoids directly computing similarities over the many factors in a spatial scene that are difficult to measure.
b. It reuses mature text-similarity models, greatly reducing computational complexity.
Second, the method extracts the spatial objects, spatial relationships, and other information in a spatial scene to directly construct the triplet set, and thus exploits the spatial scene information more fully than the prior art.
Third, the method broadens the application field of spatial scene similarity, helping experts in spatial planning, architectural design, archaeology and museology, and related fields complete their work more conveniently and quickly.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of spatial scene similarity comparison, according to an exemplary embodiment;
FIG. 2 is a partial flow diagram illustrating a method of spatial scene similarity comparison according to an exemplary embodiment;
FIG. 3 is a partial flow diagram illustrating a method of spatial scene similarity comparison according to an exemplary embodiment;
FIG. 4 is a partial flow diagram illustrating a method of spatial scene similarity comparison according to an exemplary embodiment;
FIG. 5 is a schematic structural diagram of a spatial scene similarity comparison apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
A spatial scene is a collection of spatial objects and the relationships among them. In practice, a spatial scene typically contains multiple spatial objects, each with its own shape, perimeter, area, and so on, and each carrying specific semantics; at the same time, various spatial relationships hold between the objects, so evaluating a spatial scene is a complex problem. Spatial scene similarity is an important indicator that quantifies how far two spatial scenes deviate from complete equivalence. Comparing the similarity between spatial scenes enables applications such as similar-scene search, building-layout reference, and reconstruction of remembered scenes.
At present, research on spatial scene similarity is largely limited to directly computing and comparing the morphology, features, and spatial relationships of the spatial objects in each scene. Because a spatial scene contains many complex elements, similarities computed this way tend to ignore the information carried among multiple spatial objects, require complicated computation steps, and yield poor results.
Methods for computing text similarity are by now mature. The embodiments of the present disclosure therefore convert the similarity between spatial scenes into a computation of description-text similarity and build a text similarity comparison model based on a self-attention mechanism, creating a novel spatial scene similarity comparison method.
As shown in FIG. 1, an embodiment of the present disclosure provides a spatial scene similarity comparison method, applied to an electronic device, comprising:
S110: given spatial scene map data, extracting spatial objects, together with their morphology and features, from the spatial scene map data;
S120: extracting the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
S130: constructing a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
S140: generating corresponding description text data from the triplet set by a supervised deep learning method;
S150: constructing, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes, and determining a similarity value between the description texts;
S160: establishing a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and computing the similarity values between the spatial scenes.
The electronic device here may be any of various electronic devices, including mobile terminals and fixed terminals. By way of example, mobile terminals may include smart home devices, wearable devices, tablet computers, smart office devices, or cell phones; fixed terminals include, but are not limited to, desktop computers.
The embodiment of the present disclosure provides a spatial scene similarity comparison method: first, the morphology and features of the spatial objects, and the spatial relationships between them, are extracted from the spatial scene map data to construct a spatial scene description triplet set; a corresponding description text is then generated from the extracted triplet set by a supervised deep learning method; a model is then constructed to compare the similarity between the description texts, and a similarity evaluation mechanism for the corresponding spatial scenes is established, finally yielding the similarity value between the spatial scenes.
FIG. 2 is a flow diagram of the conversion from a spatial scene similarity comparison to the corresponding text similarity comparison, where $s_{text}$ denotes the text similarity, $s_{scene}$ denotes the spatial scene similarity, $T_1$ denotes the triplet set constructed from map $M_1$, and $T_2$ denotes the triplet set constructed from map $M_2$.
FIG. 3 is a schematic diagram of the spatial-object information extracted from a spatial scene, comprising the morphology $F$ of each spatial object, the features $A$ of each spatial object, and the spatial relationships $R$ between spatial objects; the spatial relationships in turn comprise the topological relation $R_{top}$, the direction relation $R_{dir}$, and the distance relation $R_{dist}$.
FIG. 4 is a schematic diagram of the corresponding text-similarity-based spatial scene similarity evaluation mechanism, where $s_{form}$ denotes the similarity of the morphology descriptions of the spatial objects in the two texts, $s_{top}$ the similarity of the topological-relation descriptions, $s_{dir}$ the similarity of the direction-relation descriptions, $s_{dist}$ the similarity of the distance-relation descriptions, and $s_{feat}$ the similarity of the feature descriptions; $w_{form}$, $w_{top}$, $w_{dir}$, $w_{dist}$, and $w_{feat}$ denote the weights assigned to these five similarities, respectively.
The flow of comparing spatial scene similarity in this embodiment is shown in FIG. 2; the steps are as follows:
1) Given spatial scene maps $M_1$ and $M_2$, extract from map $M_1$ all the spatial objects it contains, $O_1 = \{o_1, o_2, \ldots, o_m\}$, and extract from map $M_2$ all the spatial objects it contains, $O_2 = \{o'_1, o'_2, \ldots, o'_n\}$.
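The patent does not fix a format for the scene map data. The following is a minimal sketch of this extraction step, assuming the scene maps are GeoJSON FeatureCollections read with the shapely library; the file names, identifier scheme, and property layout are illustrative assumptions.

```python
import json
from shapely.geometry import shape

def load_spatial_objects(path):
    """Read a GeoJSON scene map and return (object_id, geometry, properties)
    tuples, one per spatial object."""
    with open(path, encoding="utf-8") as f:
        collection = json.load(f)
    objects = []
    for i, feature in enumerate(collection["features"]):
        geom = shape(feature["geometry"])      # build a shapely geometry
        props = feature.get("properties", {})  # carries type, semantics, ...
        objects.append((f"o{i}", geom, props))
    return objects

# O1 = load_spatial_objects("scene_M1.geojson")  # hypothetical map M1 file
# O2 = load_spatial_objects("scene_M2.geojson")  # hypothetical map M2 file
```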
2) For each spatial object extracted from maps $M_1$ and $M_2$, extract its associated information; as shown in FIG. 3, this information comprises the morphology of the object, the features of the object, and the spatial relationships between objects.
(1) For each spatial object $o_i \in O_1$ extracted from map $M_1$, extract its morphological information $F_i$; likewise, for each spatial object $o'_j \in O_2$ extracted from map $M_2$, extract its morphological information $F'_j$. The morphological information of a spatial object comprises its spatial shape, size, perimeter, and area.
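A minimal sketch of morphology extraction, reusing the shapely geometries loaded above; reading "size" as the bounding-box extent is an assumption, since the patent does not define it further.

```python
def extract_morphology(geom):
    """Morphological information F of one spatial object: shape, size,
    perimeter, and area, as listed in step 2(1)."""
    minx, miny, maxx, maxy = geom.bounds
    return {
        "shape": geom.geom_type,             # e.g. "Polygon"
        "size": (maxx - minx, maxy - miny),  # bounding-box width and height
        "perimeter": geom.length,            # length of the boundary
        "area": geom.area,
    }
```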
(2) For each spatial object $o_i \in O_1$, extract its feature information $A_i$; likewise, for each spatial object $o'_j \in O_2$, extract its feature information $A'_j$. The feature information of a spatial object comprises its type and semantics.
(3) For every pair of spatial objects extracted from map $M_1$, extract the spatial relationship $R_{ij}$ between them, comprising the topological relation $R_{top}$, the direction relation $R_{dir}$, and the distance relation $R_{dist}$; do the same for every pair of spatial objects extracted from map $M_2$.
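A minimal sketch of pairwise relation extraction. The topological predicates are shapely's DE-9IM helpers; modelling the direction relation as an eight-sector compass bearing between centroids is an assumption, since the patent does not fix a direction model.

```python
import math
from itertools import combinations

def topological_relation(a, b):
    if a.equals(b):   return "equals"
    if a.contains(b): return "contains"
    if a.within(b):   return "within"
    if a.overlaps(b): return "overlaps"
    if a.touches(b):  return "touches"
    return "disjoint"

def direction_relation(a, b):
    """Compass direction of b as seen from a, from the centroid bearing."""
    dx = b.centroid.x - a.centroid.x
    dy = b.centroid.y - a.centroid.y
    angle = math.degrees(math.atan2(dy, dx)) % 360
    sectors = ["east", "northeast", "north", "northwest",
               "west", "southwest", "south", "southeast"]
    return sectors[int((angle + 22.5) // 45) % 8]

def extract_relations(objects):
    """R_ij for every unordered pair of spatial objects."""
    relations = []
    for (id_a, a, _), (id_b, b, _) in combinations(objects, 2):
        relations.append((id_a, id_b, {
            "topology": topological_relation(a, b),
            "direction": direction_relation(a, b),
            "distance": a.distance(b),  # minimum Euclidean distance
        }))
    return relations
```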
3) From the spatial-object information extracted from map $M_1$, construct the spatial scene description triplet set $T_1 = \{(e_i, r_{ij}, e_j)\}$, where $e_i$ denotes a spatial object extracted from map $M_1$ and comprises the object's unique identifier $id_i$, its morphological information $F_i$, and its feature information $A_i$ ($e_j$ is defined analogously), and where $r_{ij}$ comprises the topological relation $R_{top}$, the direction relation $R_{dir}$, and the distance relation $R_{dist}$ between spatial objects $e_i$ and $e_j$. From the spatial-object information extracted from map $M_2$, construct the spatial scene description triplet set $T_2$ by the same procedure used for $T_1$.
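A minimal sketch of triplet-set construction, reusing the helpers sketched above; the entity and relation encodings are illustrative, not the patent's exact schema.

```python
def build_triplets(objects, relations):
    """T = {(e_i, r_ij, e_j)}: each entity bundles its identifier,
    morphology F, and features A; each relation bundles R_top, R_dir, R_dist."""
    entity = {
        oid: {"id": oid,
              "morphology": extract_morphology(geom),
              "features": props}
        for oid, geom, props in objects
    }
    return [(entity[a], rel, entity[b]) for a, b, rel in relations]

# T1 = build_triplets(O1, extract_relations(O1))
# T2 = build_triplets(O2, extract_relations(O2))
```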
4) For the triplet sets $T_1$ and $T_2$, given paired samples of triplets and their corresponding texts, train a model by a supervised deep learning method, using a loss function to measure the gap between the model's output and the ground truth, so that the model learns the mapping from triplets to text. The trained model is then used to generate the corresponding description texts $D_1$ and $D_2$, forming the pairs "spatial scene $M_1$ / description text $D_1$" and "spatial scene $M_2$ / description text $D_2$".
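The patent specifies only "a supervised deep learning method" for triplet-to-text generation. The sketch below assumes a T5-style sequence-to-sequence model from the Hugging Face transformers library, with the triplet linearization format being an illustrative choice.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # assumed base model
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

def linearize(triplets):
    """Flatten (e_i, r_ij, e_j) triplets into a plain input string."""
    return " ; ".join(
        f"{h['id']} | {r['topology']} {r['direction']} {r['distance']:.1f} | {t['id']}"
        for h, r, t in triplets)

def train_step(triplets, reference_text):
    """One supervised update: cross-entropy loss against the paired text."""
    inputs = tokenizer(linearize(triplets), return_tensors="pt", truncation=True)
    labels = tokenizer(reference_text, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# After training, generate a description text D from a triplet set:
# D = tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True)
```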
5) Construct, train, and fine-tune a model based on a self-attention mechanism, and use it to compute the similarity between texts. First, preprocess the description texts $D_1$ and $D_2$ and feed them into the embedding layer of the model to obtain vector representations of the texts; pass these through the Transformer encoder, and obtain the final representation vector of each text by max pooling. Compute the similarity of the two representation vectors by the cosine similarity method. Based on semantic information, split each text description into five parts and record the computed similarity value of each part, so that the text similarity comprises the similarity of the morphology descriptions of the spatial objects $s_{form}$, the similarity of the topological-relation descriptions $s_{top}$, the similarity of the direction-relation descriptions $s_{dir}$, the similarity of the distance-relation descriptions $s_{dist}$, and the similarity of the feature descriptions $s_{feat}$.
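A minimal sketch of this step, assuming a BERT-style encoder (a Transformer language model pretrained with bidirectional masking, matching the patent's description) and the max-pooling plus cosine-similarity pipeline described above; the specific checkpoint is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
enc = AutoModel.from_pretrained("bert-base-chinese")

def embed(text):
    """Embedding layer -> Transformer encoder -> max pooling."""
    batch = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state  # (1, seq_len, dim)
    return hidden.max(dim=1).values              # (1, dim) text vector

def text_similarity(text_a, text_b):
    return torch.cosine_similarity(embed(text_a), embed(text_b)).item()

# Applied to each of the five semantic parts of D1 and D2 in turn,
# yielding s_form, s_top, s_dir, s_dist, and s_feat.
```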
6) The corresponding text-similarity-based spatial scene similarity evaluation mechanism is shown in FIG. 4. The five similarities $s_{form}$, $s_{top}$, $s_{dir}$, $s_{dist}$, and $s_{feat}$ computed between description text $D_1$ and description text $D_2$ are each given a different weight, finally yielding the spatial scene similarity between scene $M_1$ and scene $M_2$: $s_{scene} = w_{form} s_{form} + w_{top} s_{top} + w_{dir} s_{dir} + w_{dist} s_{dist} + w_{feat} s_{feat}$, where $w_{form} + w_{top} + w_{dir} + w_{dist} + w_{feat} = 1$.
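The evaluation mechanism reduces to a weighted sum; the sketch below transcribes the formula directly, with the weight and similarity values themselves being illustrative.

```python
def scene_similarity(sims, weights):
    """s_scene = sum_k w_k * s_k, with the weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * sims[k] for k in sims)

sims = {"form": 0.82, "top": 0.75, "dir": 0.90, "dist": 0.68, "feat": 0.71}
weights = {"form": 0.25, "top": 0.20, "dir": 0.20, "dist": 0.15, "feat": 0.20}
print(scene_similarity(sims, weights))  # weighted spatial scene similarity
```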
In some embodiments, the self-attention-based model refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
In some embodiments, the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
The embodiment of the disclosure provides a spatial scene similarity comparison method, which comprises the following steps:
step one: giving space scene map data, and extracting space objects, forms and features thereof;
step two: according to the space object information extracted in the first step, further extracting a space relation among the space objects, wherein the space relation comprises a topological relation, a direction relation and a distance relation;
step three: constructing a space scene description triplet set according to the space objects extracted in the first step and the second step, the forms of the space objects, the characteristics of the space objects and the space relation data among the space objects;
step four: generating a corresponding description text by adopting a supervised deep learning method according to the space scene description triplet set constructed in the step three;
step five: according to the descriptive text data generated in the step four, a model based on a self-attention mechanism is constructed to compare the similarity between descriptive texts of different spatial scenes;
step six: and D, establishing a corresponding spatial scene similarity evaluation mechanism based on the text similarity according to the similarity value between the texts obtained in the step five, and calculating the similarity value between the spatial scenes.
Further, the model based on the self-attention mechanism in step five refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
Further, the evaluation mechanism in step six assigns different weights to the "morphology of the spatial objects", the "features of the spatial objects", and the "spatial relationships" in the text descriptions, thereby obtaining the similarity of the corresponding spatial scenes.
As shown in FIG. 5, an embodiment of the present disclosure provides a spatial scene similarity comparison apparatus 200, applied to an electronic device, the apparatus comprising:
a first extraction module 210, configured to, given spatial scene map data, extract spatial objects, together with their morphology and features, from the spatial scene map data;
a second extraction module 220, configured to extract the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
a construction module 230, configured to construct a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
a generation module 240, configured to generate corresponding description text data from the triplet set by a supervised deep learning method;
a determining module 250, configured to construct, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes and determine a similarity value between the description texts;
and a calculation module 260, configured to establish a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and to compute the similarity values between the spatial scenes.
In some embodiments, the self-attention-based model refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
In some embodiments, the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
An embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a computer, enable the computer to perform the spatial scene similarity comparison method provided by any of the foregoing embodiments, for example at least one of the methods shown in FIGS. 1 to 4.
The spatial scene similarity comparison method applied to an electronic device may comprise the following steps: given spatial scene map data, extracting spatial objects, together with their morphology and features, from the spatial scene map data;
extracting the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
constructing a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
generating corresponding description text data from the triplet set by a supervised deep learning method;
constructing, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes, and determining a similarity value between the description texts;
and establishing a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and computing the similarity values between the spatial scenes.
As can be appreciated, the model based on the self-attention mechanism refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
As can be appreciated, the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (7)

1. A spatial scene similarity comparison method, applied to an electronic device, the method comprising:
given spatial scene map data, extracting spatial objects, together with their morphology and features, from the spatial scene map data;
extracting the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
constructing a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
generating corresponding description text data from the triplet set by a supervised deep learning method;
constructing, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes, and determining a similarity value between the description texts;
and establishing a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and computing the similarity values between the spatial scenes.
2. The method of claim 1, wherein the model based on the self-attention mechanism refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
3. The method of claim 1, wherein the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
4. A spatial scene similarity comparison apparatus, applied to an electronic device, comprising:
a first extraction module, configured to, given spatial scene map data, extract spatial objects, together with their morphology and features, from the spatial scene map data;
a second extraction module, configured to extract the spatial relationships between the extracted spatial objects, wherein the spatial relationships comprise: topological relations, direction relations, and distance relations;
a construction module, configured to construct a spatial scene description triplet set from the spatial objects, the morphology of the spatial objects, the features of the spatial objects, and the spatial relationship data between the spatial objects;
a generation module, configured to generate corresponding description text data from the triplet set by a supervised deep learning method;
a determining module, configured to construct, from the description text data, a self-attention-based model to compare the similarity between the description texts of different spatial scenes and determine a similarity value between the description texts;
and a calculation module, configured to establish a corresponding text-similarity-based spatial scene similarity evaluation mechanism from the similarity values between the description texts, and to compute the similarity values between the spatial scenes.
5. The apparatus of claim 4, wherein the model based on the self-attention mechanism refers to a language representation model built on the Transformer architecture and trained with a bidirectional masking mechanism, used to score text similarity.
6. The apparatus of claim 4, wherein the evaluation mechanism assigns different weights to the descriptions of the morphology of the spatial objects, the features of the spatial objects, and the spatial relationships in the text, thereby obtaining the similarity value of the corresponding spatial scenes.
7. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a computer, enable the computer to perform the spatial scene similarity comparison method of any one of claims 1 to 3.
CN202311056789.8A 2023-08-22 2023-08-22 Spatial scene similarity comparison method, device and storage medium Active CN116796203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311056789.8A CN116796203B (en) 2023-08-22 2023-08-22 Spatial scene similarity comparison method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311056789.8A CN116796203B (en) 2023-08-22 2023-08-22 Spatial scene similarity comparison method, device and storage medium

Publications (2)

Publication Number Publication Date
CN116796203A 2023-09-22
CN116796203B CN116796203B (en) 2023-11-17

Family

ID=88044097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311056789.8A Active CN116796203B (en) 2023-08-22 2023-08-22 Spatial scene similarity comparison method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116796203B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106202379A (en) * 2016-07-09 2016-12-07 兰州交通大学 A kind of matching inquiry method based on spatial scene similarity
US20170097966A1 (en) * 2015-10-05 2017-04-06 Yahoo! Inc. Method and system for updating an intent space and estimating intent based on an intent space
CN110599592A (en) * 2019-09-12 2019-12-20 北京工商大学 Three-dimensional indoor scene reconstruction method based on text
CN112465144A (en) * 2020-12-11 2021-03-09 北京航空航天大学 Multi-modal demonstration intention generation method and device based on limited knowledge
CN116028635A (en) * 2022-11-03 2023-04-28 阿里巴巴(中国)有限公司 Entity relationship prediction and knowledge graph construction method, device, equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170097966A1 (en) * 2015-10-05 2017-04-06 Yahoo! Inc. Method and system for updating an intent space and estimating intent based on an intent space
CN106202379A (en) * 2016-07-09 2016-12-07 兰州交通大学 A kind of matching inquiry method based on spatial scene similarity
CN110599592A (en) * 2019-09-12 2019-12-20 北京工商大学 Three-dimensional indoor scene reconstruction method based on text
CN112465144A (en) * 2020-12-11 2021-03-09 北京航空航天大学 Multi-modal demonstration intention generation method and device based on limited knowledge
CN116028635A (en) * 2022-11-03 2023-04-28 阿里巴巴(中国)有限公司 Entity relationship prediction and knowledge graph construction method, device, equipment and medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DANHUAI GUO et al.: "DeepSSN: A deep convolutional neural network to assess spatial scene similarity", Transactions in GIS, vol. 26, no. 4, pp. 1914-1938
JIFEI SONG et al.: "Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval", Proceedings of the IEEE International Conference on Computer Vision, pp. 5551-5560
YE LIU et al.: "KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning", Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 7, pp. 6418-6425
王云阁 et al.: "A Spatial Scene Similarity Measurement Method Based on an Improved TDD Model", Journal of Geomatics Science and Technology, vol. 38, no. 3, pp. 309-315
田泽宇 et al.: "Scene Similarity Retrieval Based on Shape and Spatial Relationships", Acta Electronica Sinica, vol. 44, no. 8, pp. 1892-1898
陈晖萱 et al.: "M2T: An Automatic Spatial Scene Description Text Generation Framework Fusing Multi-Source Knowledge Graphs", Journal of Geo-information Science, vol. 25, no. 6, pp. 1176-1185

Also Published As

Publication number Publication date
CN116796203B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
JP7540127B2 (en) Artificial intelligence-based image processing method, image processing device, computer program, and computer device
CN110659582A (en) Image conversion model training method, heterogeneous face recognition method, device and equipment
WO2016023264A1 (en) Fingerprint identification method and fingerprint identification device
CN108319957A (en) A kind of large-scale point cloud semantic segmentation method based on overtrick figure
CN110781413B (en) Method and device for determining interest points, storage medium and electronic equipment
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
CN112102294B (en) Training method and device for generating countermeasure network, and image registration method and device
CN112949740B (en) Small sample image classification method based on multilevel measurement
CN114022900A (en) Training method, detection method, device, equipment and medium for detection model
CN107301643A (en) Well-marked target detection method based on robust rarefaction representation Yu Laplce&#39;s regular terms
CN114463805B (en) Deep forgery detection method, device, storage medium and computer equipment
CN114298997B (en) Fake picture detection method, fake picture detection device and storage medium
CN114880484B (en) Satellite communication frequency track resource map construction method based on vector mapping
CN111460223A (en) Short video single-label classification method based on multi-mode feature fusion of deep network
CN108122202A (en) A kind of map Linear element multi-scale information derived method based on Shape context
CN113673465B (en) Image detection method, device, equipment and readable storage medium
CN114782503A (en) Point cloud registration method and system based on multi-scale feature similarity constraint
CN110503148A (en) A kind of point cloud object identifying method with scale invariability
CN116796203B (en) Spatial scene similarity comparison method, device and storage medium
CN113222167A (en) Image processing method and device
Backes et al. Texture classification using fractal dimension improved by local binary patterns
CN110826726B (en) Target processing method, target processing device, target processing apparatus, and medium
CN111194004B (en) Base station fingerprint positioning method, device and system and computer readable storage medium
CN113628217A (en) Three-dimensional point cloud segmentation method based on image convolution and integrating direction and distance
CN114283289A (en) Image classification method based on multi-model fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant