CN117542007A - Building identification method, device, equipment and storage medium based on vehicle - Google Patents
- Publication number
- CN117542007A (application number CN202311316273.2A)
- Authority
- CN
- China
- Prior art keywords
- feature
- building
- information
- target
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
Abstract
The embodiment of the invention provides a vehicle-based building identification method, a building identification device, computer equipment and a computer-readable storage medium. The method comprises the following steps: collecting an environment image through an image collecting unit of the vehicle, wherein the environment image at least comprises a target building; performing a feature extraction operation on the environment image and on building information model data corresponding to the target building, respectively, to obtain first feature information and second feature information corresponding to the target building; performing a feature matching operation on the first feature information and the second feature information to obtain a matching result for the target building; and identifying the target building according to the matching result to obtain identification information of the target building. The embodiment of the application combines the environment image with the building information model data to realize identification of the building.
Description
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a vehicle-based building identification method, a building identification device, a computer device, and a computer-readable storage medium.
Background
When a vehicle implements functions such as navigation assistance, traffic flow analysis and safety pre-warning, building information for the current environment usually needs to be acquired. The traditional method mainly relies on manual collection of multimedia information (television news broadcasts, radio, newspapers and the like) to obtain information on newly added buildings, which is then supplied to the vehicle's navigation system.
However, for remote or newly developed areas, building information cannot be obtained in a timely and effective manner with the traditional method; furthermore, acquiring newly added building information through manual collection is not sufficiently intelligent.
Disclosure of Invention
The application provides a building identification method, a building identification device, computer equipment and a computer readable storage medium based on a vehicle, which aim to combine environment images and building information model data so as to realize the identification of a building.
To achieve the above object, the present application provides a vehicle-based building identification method, the method including:
collecting an environment image through an image collecting unit of the vehicle, wherein the environment image at least comprises a target building;
performing feature extraction operation on the environment image and building information model data corresponding to the target building respectively to obtain first feature information and second feature information corresponding to the target building;
performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building;
and identifying the target building according to the matching result to obtain the identification information of the target building.
To achieve the above object, the present application further provides a building identification device, including:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring an environment image through an image acquisition unit of a vehicle, and the environment image at least comprises a target building;
the feature extraction module is used for performing feature extraction operation on the environment image and building information model data corresponding to the target building respectively to obtain first feature information and second feature information corresponding to the target building;
the feature matching module is used for carrying out feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building;
and the building identification module is used for identifying the target building according to the matching result to obtain the identification information of the target building.
In addition, to achieve the above object, the present application further provides a computer apparatus including a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and implement the steps of the vehicle-based building identification method according to any one of the embodiments of the present application when the computer program is executed.
In addition, to achieve the above object, the present application further provides a computer-readable storage medium storing a computer program, which when executed by a processor, causes the processor to implement the steps of the vehicle-based building identification method according to any one of the embodiments provided herein.
The building identification method, the building identification device, the computer equipment and the computer readable storage medium based on the vehicle, disclosed by the embodiment of the application, can acquire an environment image through an image acquisition unit of the vehicle, wherein the environment image at least comprises a target building; performing feature extraction operation on the environment image and building information model data corresponding to the target building to obtain first feature information and second feature information corresponding to the target building; performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building; thus, the target building can be identified based on the matching result, and the identification information of the target building can be obtained. The method aims at combining the environment image and the building information model data, matching the characteristic information obtained based on the environment image and the characteristic information obtained based on the building information model to obtain a matching result of a target building, and further realizing the identification of the building based on the matching result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a vehicle-based building identification method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of obtaining a matching result of a target building according to an embodiment of the present application;
fig. 3 is a schematic flow chart of obtaining identification information of a target building according to an embodiment of the present application;
FIG. 4 is a schematic block diagram of a building identification device provided in an embodiment of the present application;
fig. 5 is a schematic block diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations. In addition, although the division of the functional modules is performed in the apparatus schematic, in some cases, the division of the modules may be different from that in the apparatus schematic.
The term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flow chart of a vehicle-based building identification method according to an embodiment of the present application. As shown in fig. 1, the vehicle-based building identification method includes steps S11 to S14.
Step S11: an environmental image is acquired by an image acquisition unit of the vehicle, wherein the environmental image includes at least the target building.
The image capturing unit is a device capable of capturing an image or a video, such as a vehicle-mounted camera, a monitoring camera, and the like, which is not limited in this application.
Specifically, an image of the environment in which the vehicle is currently located can be acquired by an image acquisition unit of the vehicle to obtain an image including the target building for subsequent operations such as feature extraction.
Optionally, a preprocessing operation may be performed on the acquired environmental image, so as to improve accuracy of subsequent operations such as feature extraction.
It should be noted that an image preprocessing operation optimizes the image for subsequent analysis, recognition and processing. Image preprocessing operations include image denoising, image correction, image smoothing and the like, which are not limited in this application.
Furthermore, the image denoising can remove noise points and interference in the image through median filtering, gaussian filtering and other methods so as to improve the image quality; image enhancement can enhance the contrast, brightness or color of the image by histogram equalization, contrast stretching and other methods so as to make the image clearer and easy to analyze; image smoothing can reduce high frequency noise in the image using a smoothing filter to facilitate detection and identification of the image.
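The median filtering mentioned above can be sketched as follows. This is a minimal, pure-Python illustration of the denoising step on a grayscale image represented as a list of lists; a production pipeline would normally use an optimized library such as OpenCV (`cv2.medianBlur`) instead.

```python
def median_filter3(img):
    """Apply a 3x3 median filter to a grayscale image (list of lists of
    pixel values). Replaces each interior pixel with the median of its
    3x3 neighbourhood, which suppresses isolated noise spikes; border
    pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # median of the 9 neighbourhood values
    return out
```

For example, a single salt-noise pixel of value 255 in a flat region of value 10 is restored to 10, because the median of its neighbourhood is 10.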
In the embodiment of the application, the environment image including the target building can be acquired for subsequent operations such as feature extraction.
Step S12: and respectively carrying out feature extraction operation on the environment image and building information model data corresponding to the target building to obtain first feature information and second feature information corresponding to the target building.
The first characteristic information is characteristic information extracted based on the environmental image characteristics; the second characteristic information is characteristic information extracted based on building information model data.
It should be noted that Building Information Modeling (BIM) data is data for describing and managing various aspects of a building project. It contains geometric information of the building (e.g., its physical shape and size), spatial information (e.g., the spatial layout of the building), material information, equipment information, engineering information and management information, as well as other data related to the building project.
Specifically, feature extraction operations may be performed on the collected environmental image and building information model data related to the target building to extract the first feature information and the second feature information, which may be further used for subsequent analysis and processing.
Optionally, performing the feature extraction operation on the environment image and the building information model data corresponding to the target building to obtain the first feature information and the second feature information corresponding to the target building includes: performing a feature extraction operation on the environment image through a scale-invariant feature transform (SIFT) algorithm or a speeded-up robust features (SURF) algorithm to obtain the first feature information; and performing a feature extraction operation on the building information model data through a three-dimensional reconstruction technique to obtain the second feature information.
The scale-invariant feature transform (SIFT) algorithm is an algorithm for extracting key points and descriptors from an image. It maintains the invariance of features under changes of scale and rotation. When features are extracted from an environment image with the SIFT algorithm, the extracted feature information usually consists of salient points that are unaffected by scale and rotation changes, so this feature information can be used for subsequent matching.
The speeded-up robust features (SURF) algorithm is an algorithm used in the fields of computer vision and image processing that aims to improve the speed and robustness of feature extraction. Its main goal is to extract meaningful feature points or descriptors quickly and accurately when processing large-scale image data.
In addition, the three-dimensional reconstruction technique is a technique of creating a three-dimensional model of a building from building information model data. Including using point cloud data, CAD models, or other data sources to reconstruct three-dimensional shapes and structures of buildings. Then, feature information of the target building, such as elevation features, structural features, volumes, etc., is extracted from the reconstructed three-dimensional model.
Therefore, the method and the device can realize feature extraction from the environment image through a scale-invariant feature transform algorithm or a speeded-up robust features algorithm, and feature extraction from the building information model data through a three-dimensional reconstruction technique.
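The core of a SIFT-style descriptor can be illustrated with the following much-simplified sketch: a normalized histogram of gradient orientations over an image patch. This is a stand-in only — it has no scale pyramid, keypoint detection, or spatial subregions, all of which the real SIFT algorithm adds — but it shows how local gradients are binned into an orientation histogram that is robust to overall brightness changes.

```python
import math

def orientation_histogram(patch, bins=8):
    """Normalized histogram of gradient orientations over a grayscale
    patch (list of lists). Central differences give (gx, gy) at each
    interior pixel; each gradient votes into one of `bins` orientation
    bins, weighted by its magnitude. Normalizing the histogram makes
    the descriptor insensitive to uniform illumination scaling."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0.0:
                continue
            angle = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]
```

A patch whose intensity increases purely from left to right puts all its mass in the bin for orientation 0, as expected for a horizontal gradient.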
Step S13: and performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building.
Specifically, the feature information includes feature descriptors of key points. Therefore, feature descriptors of the key points extracted from the environment image and feature descriptors of the key points extracted from the building information model data can be subjected to feature matching to obtain a matching result of the target building.
It should be noted that, the method of feature matching operation is not limited in this application, and includes, for example, a nearest neighbor matching method, a nearest neighbor searching method, and the like.
The nearest neighbor matching method is a method commonly used in the field of data analysis and machine learning to find the closest matching sample or data point between two or more data sets. The goal of the nearest neighbor matching method is to match or pair each data point in one data set with the most similar data point in the other data set. Nearest neighbor matching methods typically calculate the similarity between data points based on some similarity measure. Common similarity metrics include euclidean distance, manhattan distance, cosine similarity, etc., with the particular choice depending on the nature of the problem and the nature of the data. Therefore, the method can realize feature matching through a nearest neighbor matching method.
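The nearest-neighbour matching described above can be sketched as follows. This is an illustrative pure-Python implementation (real systems would use an optimized matcher such as OpenCV's `BFMatcher`); it uses Euclidean distance and adds a Lowe-style ratio test, a common refinement that rejects ambiguous matches, though the patent itself does not mandate it.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, find its two closest descriptors
    in desc_b by Euclidean distance, and accept the match only when the
    nearest is clearly better than the second nearest (ratio test).
    Returns a list of (index_in_a, index_in_b) matching pairs."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        if len(ranked) >= 2:
            d1 = dist(da, desc_b[ranked[0]])
            d2 = dist(da, desc_b[ranked[1]])
            if d1 <= ratio * d2:
                matches.append((i, ranked[0]))
    return matches
```

When the two best candidates are nearly equidistant, the ratio test drops the match instead of guessing, which is exactly the kind of unreliable match the screening discussed below is meant to reject.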
Optionally, feature descriptors of key points corresponding to environmental images (photographed from different positions or angles) from multiple viewpoints may be further feature-matched with feature descriptors of key points extracted from building information model data, so as to improve robustness and accuracy of a matching result.
Optionally, the matching result includes a matching pair, and after performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building, the matching method further includes: verifying the validity of the matched pair through a geometric constraint algorithm or a motion consistency algorithm; invalid matching pairs are excluded and valid matching pairs are retained.
The geometric constraint algorithm is a technique for deducing the position, shape, or relative relationship of an object from geometric information in an image or images. Geometric constraint algorithms can generally utilize geometric principles, such as projection geometry, to establish spatial relationships between objects in an image. Geometric constraint algorithms are commonly used for object recognition, pose estimation, stereo vision, camera calibration, and other tasks.
A motion consistency algorithm is an algorithm for analyzing the motion of an object or scene in a video sequence, whose object is to estimate the speed, direction and trajectory of the object, by comparing the pixel displacement or optical flow between different frames in an image sequence to infer the motion information of the object.
Therefore, the feature descriptors in the matching pair can be detected through a geometric constraint algorithm or a motion consistency algorithm, and whether the corresponding positions, shapes, motion information and the like are consistent or not is judged. If the information is consistent, the matching pair is indicated to be effective, and the matching pair can be reserved at the moment, otherwise, the matching pair can be eliminated.
Optionally, considering that noise, shielding, illumination change and other factors may exist in feature matching, statistical information or context information of local features may be used for matching verification and screening to reject unreliable matching.
In the embodiment of the application, the first feature information and the second feature information can be subjected to feature matching operation to obtain the matching result of the target building, so that the subsequent identification of the building can be realized based on the matching result.
Step S14: and identifying the target building according to the matching result to obtain the identification information of the target building.
Specifically, the matching result can be identified through the identification model, so as to obtain identification information of the target building, including, for example, the name, category, position coordinates, attribute or other relevant information of the building.
The present application does not limit the recognition model; for example, the recognition model may be a recurrent neural network, a support vector machine, a convolutional neural network or a graph neural network.
Further, the target building identification information may be applied to various fields such as automated driving, building management, security monitoring, traffic flow analysis, and virtual reality, which are not limited in this application.
According to the vehicle-based building identification method disclosed by the embodiment of the application, the environment image can be acquired through the image acquisition unit of the vehicle, wherein the environment image at least comprises a target building; performing feature extraction operation on the environment image and building information model data corresponding to the target building to obtain first feature information and second feature information corresponding to the target building; performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building; thus, the target building can be identified based on the matching result, and the identification information of the target building can be obtained. The method aims at combining the environment image and the building information model data, matching the characteristic information obtained based on the environment image and the characteristic information obtained based on the building information model to obtain a matching result of a target building, and further realizing the identification of the building based on the matching result.
Referring to fig. 2, fig. 2 is a flow chart of obtaining a matching result of a target building according to an embodiment of the present application. As shown in fig. 2, the first feature information includes a first target feature descriptor, and the second feature information includes a second target feature descriptor, so that a matching result of the target building can be obtained through steps S131 to S133.
Step S131: the validity of the first initial feature descriptor is determined through a scale invariance algorithm or a corner detection algorithm.
Step S132: and determining the first initial feature descriptors with the validity larger than a preset threshold value as first target feature descriptors.
Step S133: and performing feature matching operation on the first target feature descriptors and the second target feature descriptors to obtain a matching result of the target building.
The first initial feature descriptors are feature descriptors extracted based on the environment image; the first target feature descriptors are feature descriptors after screening the first initial feature descriptors; the second target feature descriptors are feature descriptors extracted based on building information model data.
Specifically, before feature matching, first target feature descriptors with good, i.e., valid, features may be determined from among the first initial feature descriptors by a scale invariance algorithm or a corner detection algorithm. A feature matching operation is then performed on the first target feature descriptors and the second target feature descriptors to obtain a matching result for the target building.
The preset threshold value is not limited; for example, it may be 80%, 90%, etc. Taking 80% as an example: when the validity of a first initial feature descriptor is greater than 80%, the descriptor is valid and has good features. Thus, first initial feature descriptors with a validity greater than 80% may be determined as first target feature descriptors for subsequent feature matching.
Further, scale invariance algorithms are a technique for finding and describing feature points in images at different scales. The scale invariance algorithm makes the detection and description of feature points have invariance to scaling changes of the image, i.e. feature points found at different scales should have similar descriptions. The method is generally used in the fields of image matching, object recognition, image stitching, stereoscopic vision and the like and is used for judging the characteristic performance of the characteristic points.
Corner detection algorithms are techniques for identifying corners or locations with protruding edges in an image. Corner points are often important feature points in an image that mark texture changes in the image or intersection points of object edges. Therefore, the corner detection algorithm can judge the performance of the feature points.
Thus, the validity of the feature descriptors can be determined by a scale invariance algorithm or a corner detection algorithm.
Optionally, performing feature matching operation on the first target feature descriptor and the second target feature descriptor to obtain a matching result of the target building, including: determining the similarity of the first target feature descriptors and the second target feature descriptors to obtain similarity values; and determining a matching result of the target building according to the similarity value.
Specifically, the similarity between the first target feature descriptor and the second target feature descriptor can be evaluated by using euclidean distance, hamming distance or other distance measurement methods, so that the first target feature descriptor and the second target feature descriptor with the closest similarity value are determined as a matching pair, and the matching pair is used as a matching result of the target building.
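For binary descriptors, the Hamming distance mentioned above is simply the number of differing bits. A minimal sketch, with descriptors represented as Python integers (the function names here are illustrative, not from the patent):

```python
def hamming_distance(d1, d2):
    """Hamming distance between two equal-length binary descriptors,
    represented as integers: XOR keeps only the differing bits, and
    counting set bits gives the distance. Smaller means more similar."""
    return bin(d1 ^ d2).count("1")

def best_match(query, candidates):
    """Return (index, distance) of the candidate closest to the query
    descriptor under Hamming distance."""
    return min(
        ((i, hamming_distance(query, c)) for i, c in enumerate(candidates)),
        key=lambda t: t[1],
    )
```

Euclidean distance plays the analogous role for real-valued descriptors such as SIFT's; Hamming distance is the usual choice for binary descriptors (e.g., ORB), since it reduces to a fast XOR and popcount.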
Optionally, after the matching result is obtained, whether the matching result is correct or not can be verified by using a RANSAC matching, geometric constraint or motion consistency method, so as to improve the accuracy of the matching result.
It should be noted that, in feature matching, the RANSAC algorithm may be used to identify and reject erroneous matching pairs. It estimates model parameters from a set of matching pairs by randomly selecting them and then evaluating the weight of each point based on the consistency of the model with other matching pairs. Finally, the RANSAC algorithm selects the model parameters with the greatest consistency as the final matching result.
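The RANSAC loop just described — randomly sample a minimal set, fit a model, score it by consensus, keep the best — can be sketched for the simplest possible motion model, a 2D translation. This is an assumption-laden toy (real pipelines estimate homographies or essential matrices, e.g., via OpenCV's `findHomography` with the RANSAC flag), but the sample/score/keep-best structure is the same.

```python
import random

def ransac_translation(pairs, tol=1.0, iters=100, seed=0):
    """Estimate a 2D translation between matched point pairs with a
    RANSAC-style loop. `pairs` is a list of ((x1, y1), (x2, y2))
    correspondences. A single pair proposes a translation; the proposal
    with the largest inlier consensus wins, and the surviving inliers
    are the 'valid matching pairs' retained for further processing."""
    rng = random.Random(seed)
    best_inliers, best_t = [], (0.0, 0.0)
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)  # minimal sample: 1 pair
        tx, ty = x2 - x1, y2 - y1
        inliers = [
            (p, q) for p, q in pairs
            if abs(q[0] - p[0] - tx) <= tol and abs(q[1] - p[1] - ty) <= tol
        ]
        if len(inliers) > len(best_inliers):
            best_inliers, best_t = inliers, (tx, ty)
    return best_inliers, best_t
```

Given three pairs consistent with a translation of (5, 5) and one gross outlier, the loop recovers the translation and excludes the outlier from the retained matches.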
In the embodiment of the application, the validity of the feature descriptors can be determined, and then the valid feature descriptors are subjected to feature matching, so that the accuracy of a matching result is improved.
Referring to fig. 3, fig. 3 is a flow chart for obtaining identification information of a target building according to an embodiment of the present application. As shown in fig. 3, obtaining the identification information of the target building may be achieved through steps S141 to S144.
Step S141: and acquiring a first coordinate system corresponding to the environment image and a second coordinate system corresponding to the building information model data.
Step S142: and determining coordinate conversion parameters of the first coordinate system and the second coordinate system.
Step S143: and carrying out coordinate conversion on the matching result according to the coordinate conversion parameters.
Step S144: and identifying the target building according to the matching result after the coordinate conversion to obtain the identification information of the target building.
The first coordinate system is a coordinate system corresponding to the environment image; the second coordinate system is a coordinate system corresponding to the building information model data.
Because the environment image and the building information model data are located in different coordinate systems, the matching result must be converted into a common coordinate system to keep its data consistent. This ensures that the data are accurately aligned, making the identification process more accurate and reliable.
Specifically, the coordinate conversion parameters between the first coordinate system and the second coordinate system may include translation, rotation, scaling, and other transformation matrices for aligning the two coordinate systems. Using these parameters, feature points or matching results in the environment image may be converted from the first coordinate system to the second coordinate system, or vice versa. In this way, the feature points in the environment image are aligned with the corresponding parts of the building information model, yielding an aligned matching result that can be used to identify the target building and obtain its category, position, or other related information.
Optionally, after the target building is identified according to the matching result and its identification information is obtained, the method further includes: performing path planning with a path planning algorithm based on the identification information to obtain a target path, where the identification information includes position information, shape information, and attribute information; and controlling the vehicle to travel along the target path.
The path planning algorithm may be Dijkstra's algorithm, RRT (Rapidly-Exploring Random Trees), or the like, which is not limited in this application.
Specifically, the path planning algorithm can generate a path from the current vehicle position to the target building based on the identification information of the target building, so that the vehicle can be controlled to travel along the generated target path until it reaches the target building.
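As a hedged illustration of the planning step using Dijkstra's algorithm, which the application names as one option (the node labels, edge costs, and graph representation here are made up for the example):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted directed graph.

    graph: {node: [(neighbour, cost), ...]}, e.g. a road network where the
    goal node is derived from the target building's position information.
    Returns (total_cost, path) or (inf, []) if the goal is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:                      # reconstruct the target path
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nb, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nb, float('inf')):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    return float('inf'), []
```

The returned node sequence would then be handed to the vehicle controller as the target path.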
In the embodiment of the application, the matching result can be subjected to coordinate transformation so as to optimize the matching result and improve the matching accuracy. The target building can therefore be identified from the coordinate-converted matching result, yielding identification information of high accuracy. In addition, path planning for the vehicle can be performed based on the identification information, so that the vehicle is controlled to travel along the target path.
Referring to fig. 4, fig. 4 is a schematic block diagram of a building identification device according to an embodiment of the present application. The building identification device may be configured in a server for performing the aforementioned vehicle-based building identification method.
As shown in fig. 4, the building identification apparatus 200 includes: the system comprises an acquisition module 201, a feature extraction module 202, a feature matching module 203 and a building identification module 204.
An acquisition module 201, configured to acquire an environmental image through an image acquisition unit of a vehicle, where the environmental image includes at least a target building;
the feature extraction module 202 is configured to perform feature extraction operation on the environmental image and building information model data corresponding to the target building, so as to obtain first feature information and second feature information corresponding to the target building;
the feature matching module 203 is configured to perform feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building;
and the building identification module 204 is configured to identify the target building according to the matching result, so as to obtain identification information of the target building.
The feature extraction module 202 is further configured to perform feature extraction operation on the environmental image through a scale-invariant feature transformation algorithm or an accelerated robust feature algorithm, so as to obtain the first feature information; and performing feature extraction operation on the building information model data through a three-dimensional reconstruction technology to obtain the second feature information.
The feature matching module 203 is further configured to determine validity of the first initial feature descriptor through a scale invariance algorithm or a corner detection algorithm; determining a first initial feature descriptor with the validity larger than a preset threshold value as the first target feature descriptor; and performing feature matching operation on the first target feature descriptors and the second target feature descriptors to obtain a matching result of the target building.
The feature matching module 203 is further configured to determine a similarity between the first target feature descriptor and the second target feature descriptor, so as to obtain a similarity value; and determining a matching result of the target building according to the similarity value.
The feature matching module 203 is further configured to verify the validity of the matching pair by using a geometric constraint algorithm or a motion consistency algorithm; invalid matching pairs are excluded and valid matching pairs are retained.
Building identification module 204 is further configured to determine coordinate transformation parameters of the first coordinate system and the second coordinate system; according to the coordinate conversion parameters, carrying out coordinate conversion on the matching result; and identifying the target building according to the matching result after the coordinate conversion to obtain the identification information of the target building.
The building identification module 204 is further configured to perform path planning based on the identification information by using a path planning algorithm, so as to obtain a target path, where the identification information includes location information, shape information, and attribute information; and controlling the vehicle to run according to the target path.
It should be noted that, for convenience and brevity of description, specific working processes of the above-described apparatus and each module, unit may refer to corresponding processes in the foregoing method embodiments, which are not repeated herein.
The methods and apparatus of the present application are operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
By way of example, the methods and apparatus described above may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic diagram of a computer device according to an embodiment of the present application. The computer device may be a server.
As shown in fig. 5, the computer device includes a processor, a memory, and a network interface connected by a system bus, where the memory may include a volatile storage medium, a non-volatile storage medium, and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor to perform any of the vehicle-based building identification methods described above.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for running the computer program stored in the non-volatile storage medium; when the program is executed by the processor, it causes the processor to perform any of the vehicle-based building identification methods described above.
The network interface is used for network communication, such as transmitting assigned tasks. It will be appreciated by those skilled in the art that the illustrated structure is merely a block diagram of some of the structures relevant to the present application and does not limit the computer devices to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Wherein in some embodiments the processor is configured to run a computer program stored in the memory to implement the steps of: collecting an environment image through an image collecting unit of the vehicle, wherein the environment image at least comprises a target building; performing feature extraction operation on the environment image and building information model data corresponding to the target building respectively to obtain first feature information and second feature information corresponding to the target building; performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building; and identifying the target building according to the matching result to obtain the identification information of the target building.
In some embodiments, the processor is further configured to perform a feature extraction operation on the environmental image through a scale-invariant feature transform algorithm or an accelerated robust feature algorithm, to obtain the first feature information; and performing feature extraction operation on the building information model data through a three-dimensional reconstruction technology to obtain the second feature information.
In some embodiments, the processor is further configured to determine validity of the first initial feature descriptor by a scale invariance algorithm or a corner detection algorithm; determining a first initial feature descriptor with the validity larger than a preset threshold value as the first target feature descriptor; and performing feature matching operation on the first target feature descriptors and the second target feature descriptors to obtain a matching result of the target building.
In some embodiments, the processor is further configured to determine a similarity between the first target feature descriptor and the second target feature descriptor, to obtain a similarity value; and determining a matching result of the target building according to the similarity value.
In some embodiments, the processor is further configured to verify the validity of the matching pair by a geometric constraint algorithm or a motion consistency algorithm; invalid matching pairs are excluded and valid matching pairs are retained.
In some embodiments, the processor is further configured to obtain a first coordinate system corresponding to the environmental image and a second coordinate system corresponding to the building information model data; determining coordinate conversion parameters of the first coordinate system and the second coordinate system; according to the coordinate conversion parameters, carrying out coordinate conversion on the matching result; and identifying the target building according to the matching result after the coordinate conversion to obtain the identification information of the target building.
In some embodiments, the processor is further configured to perform path planning based on the identification information by using a path planning algorithm to obtain a target path, where the identification information includes location information, shape information, and attribute information; and controlling the vehicle to run according to the target path.
The embodiment of the application also provides a computer-readable storage medium storing a computer program, where the computer program comprises program instructions that, when executed, implement any of the vehicle-based building identification methods provided by the embodiments of the application.
The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided on the computer device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents substituted without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A vehicle-based building identification method, the method comprising:
collecting an environment image through an image collecting unit of the vehicle, wherein the environment image at least comprises a target building;
performing feature extraction operation on the environment image and building information model data corresponding to the target building respectively to obtain first feature information and second feature information corresponding to the target building;
performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building;
and identifying the target building according to the matching result to obtain the identification information of the target building.
2. The method according to claim 1, wherein the performing feature extraction on the environmental image and the building information model data corresponding to the target building to obtain first feature information and second feature information corresponding to the target building includes:
performing feature extraction operation on the environment image through a scale-invariant feature transformation algorithm or an acceleration robust feature algorithm to obtain the first feature information; the method comprises the steps of,
and carrying out feature extraction operation on the building information model data through a three-dimensional reconstruction technology to obtain the second feature information.
3. The method according to claim 1, wherein the first feature information includes a first target feature descriptor, the second feature information includes a second target feature descriptor, and before performing feature matching operation on the first feature information and the second feature information, the method includes:
determining the effectiveness of a first initial feature descriptor through a scale invariance algorithm or a corner detection algorithm;
determining a first initial feature descriptor with the validity larger than a preset threshold value as the first target feature descriptor;
and performing feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building, wherein the feature matching operation comprises the following steps:
and performing feature matching operation on the first target feature descriptors and the second target feature descriptors to obtain a matching result of the target building.
4. The method of claim 3, wherein performing feature matching on the first target feature descriptor and the second target feature descriptor to obtain a matching result of the target building comprises:
determining the similarity of the first target feature descriptors and the second target feature descriptors to obtain similarity values;
and determining a matching result of the target building according to the similarity value.
5. The method according to claim 1, wherein the matching result includes a matching pair, and the performing the feature matching operation on the first feature information and the second feature information, after obtaining the matching result of the target building, further includes:
verifying the validity of the matching pair through a geometric constraint algorithm or a motion consistency algorithm;
invalid matching pairs are excluded and valid matching pairs are retained.
6. The method according to claim 1, wherein the identifying the target building according to the matching result, before obtaining the identification information of the target building, includes:
acquiring a first coordinate system corresponding to the environment image and a second coordinate system corresponding to the building information model data;
determining coordinate conversion parameters of the first coordinate system and the second coordinate system;
according to the coordinate conversion parameters, carrying out coordinate conversion on the matching result;
the step of identifying the target building according to the matching result to obtain the identification information of the target building comprises the following steps:
and identifying the target building according to the matching result after the coordinate conversion to obtain the identification information of the target building.
7. The method according to claim 1, wherein the identifying the target building according to the matching result, after obtaining the identification information of the target building, further comprises:
performing path planning based on the identification information by using a path planning algorithm to obtain a target path, wherein the identification information comprises position information, shape information and attribute information;
and controlling the vehicle to run according to the target path.
8. A building identification device, characterized in that the building identification device comprises:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring an environment image through an image acquisition unit of a vehicle, and the environment image at least comprises a target building;
the feature extraction module is used for performing feature extraction operation on the environment image and building information model data corresponding to the target building respectively to obtain first feature information and second feature information corresponding to the target building;
the feature matching module is used for carrying out feature matching operation on the first feature information and the second feature information to obtain a matching result of the target building;
and the building identification module is used for identifying the target building according to the matching result to obtain the identification information of the target building.
9. A computer device, comprising: a memory and a processor; wherein the memory is connected to the processor for storing a program, the processor being adapted to implement the steps of the vehicle-based building identification method according to any one of claims 1-7 by running the program stored in the memory.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the steps of the vehicle-based building identification method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311316273.2A CN117542007A (en) | 2023-10-11 | 2023-10-11 | Building identification method, device, equipment and storage medium based on vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117542007A true CN117542007A (en) | 2024-02-09 |
Family
ID=89783016
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||