CN111738232B - Method and device for marking pile foundation

Method and device for marking pile foundation

Info

Publication number
CN111738232B
Authority
CN
China
Prior art keywords
pile foundation
pile
images
image
marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010783736.6A
Other languages
Chinese (zh)
Other versions
CN111738232A (en)
Inventor
王涛
张洁
孙连瑞
李�根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xumi Yuntu Space Technology Co Ltd
Original Assignee
Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xumi Yuntu Space Technology Co Ltd filed Critical Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority to CN202010783736.6A priority Critical patent/CN111738232B/en
Publication of CN111738232A publication Critical patent/CN111738232A/en
Application granted granted Critical
Publication of CN111738232B publication Critical patent/CN111738232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G06V30/42: Document-oriented image-based pattern recognition based on the type of document
    • G06V30/422: Technical drawings; Geographical maps
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for marking pile foundations. The method comprises: first, determining a pile foundation template in response to an operation of marking one pile foundation on a drawing image, and determining the pile foundation type, pile foundation code and pile foundation number N corresponding to the template, where N is a positive integer greater than or equal to 2; then, performing image recognition on a target image based on the pile foundation template to obtain N non-overlapping region images of the template size that each contain a pile foundation, as N pile foundation images, the target image being the whole drawing image or a part of it; and finally, marking the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N. After the template is determined, the pile foundations on the target image are thus identified and marked automatically by image recognition, which greatly reduces the workload of marking pile foundations manually, saves labour, avoids manual marking errors and greatly improves marking efficiency.

Description

Method and device for marking pile foundation
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and a related apparatus for marking a pile foundation.
Background
At present, a record has to be kept for every pile foundation on the drawing image of a construction project; that is, construction information, acceptance and inspection information and other related data are recorded and managed pile by pile. When this information is submitted for review, the reviewer needs to use the drawing image to locate the reported pile foundation at its corresponding position on the construction site. Each pile foundation therefore first needs to be marked on the drawing image so that it can be looked up later.
In the prior art, marking each pile foundation on a drawing image means manually locating and frame-selecting every pile foundation on the image and then naming each selected pile foundation. However, a construction project often comprises several buildings, and when the foundation of each building is laid a concrete pile is usually placed roughly every 2-3 metres as a pile foundation, so a single construction project contains a very large number of pile foundations. Marking every pile foundation by hand, as in the prior art, is tedious, the workload is huge, considerable labour is consumed and manual marking errors occur very easily.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and a related apparatus for marking a pile foundation, so as to reduce the workload of manually marking pile foundations, save labour and effort, and avoid manual marking errors, thereby significantly improving the efficiency of marking pile foundations.
In a first aspect, an embodiment of the present application provides a method for marking a pile foundation, where the method includes:
marking any pile foundation on a drawing image by click selection or frame selection, taking the marked pile foundation as a pile foundation template, and determining the pile foundation type, pile foundation code and pile foundation number N matched with the pile foundation template, wherein N is a positive integer greater than or equal to 2;
selecting a target area within the drawing image range, and setting the image in the target area as a target image;
cropping a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, and repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the pile foundation template;
performing image recognition on the region images in the target image based on the pile foundation template, and obtaining, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images;
and marking the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N.
Further, the performing image recognition on the region images in the target image based on the pile foundation template and obtaining, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images specifically includes:
obtaining the similarity between each region image in the target image and the pile foundation template;
sorting all the region images by similarity from high to low, and determining the first N non-overlapping region images in the ranking that contain pile foundations;
and extracting those first N non-overlapping region images containing pile foundations as the N pile foundation images.
Further, the cropping of a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, and repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the pile foundation template, specifically includes:
sequentially cropping a plurality of region images starting from the upper-left corner of the target area, moving right by the preset step each time during the cropping until the right boundary of the target area is reached;
and moving down by the preset step from the left boundary of the target area each time, and repeating the step of moving right by the preset step until the right boundary of the target area is reached, until the target area has been traversed.
Further, the preset step is at least one pixel.
Further, the pile foundation type is one of a manually excavated pile, a pipe-sinking cast-in-place pile, a bored cast-in-place pile, an anchored static-pressure pile, a long-auger bored cast-in-place pile or a precast tubular pile.
Further, the method further includes:
recursively dividing the target image into a plurality of target sub-images;
correspondingly, the performing image recognition on the target image based on the pile foundation template to obtain N non-overlapping region images as N pile foundation images is specifically:
and performing parallel image recognition processing on the plurality of target sub-images based on the pile foundation template to obtain N non-overlapping area images as N pile foundation images.
Further, after the marking of the pile foundations in each pile foundation image based on the pile foundation code and the pile foundation number N, the method further includes:
and adjusting the marking data of the pile foundations in the N pile foundation images.
The present invention also provides a device for marking a pile foundation, comprising:
a determining unit, configured to mark any pile foundation on the drawing image by click selection or frame selection, take the marked pile foundation as a pile foundation template, and determine the pile foundation type, pile foundation code and pile foundation number N matched with the template, wherein N is a positive integer greater than or equal to 2;
an obtaining unit, configured to select a target area within the drawing image range and set the image in the target area as a target image; crop a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the template; and perform image recognition on the region images in the target image based on the template and obtain, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images;
and a marking unit, configured to mark the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N.
The invention also provides a terminal device, which comprises a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is used for executing the method for marking the pile foundation according to the instructions in the program code.
The present invention also provides a computer-readable storage medium for storing program code for performing the method of marking a pile foundation as described above.
Compared with the prior art, the present application has the following advantages:
With the technical solution of the embodiments of the application, a pile foundation template is first determined in response to an operation of marking one pile foundation on a drawing image, together with the pile foundation type, pile foundation code and pile foundation number N corresponding to the template; image recognition is then performed on a target image based on the template to obtain N non-overlapping region images of the template size that each contain a pile foundation, as N pile foundation images; finally, the pile foundations in the N pile foundation images are marked based on the pile foundation code and the pile foundation number N. After the template is determined, the pile foundations on the target image are therefore identified and marked automatically by image recognition, which reduces the workload of manual marking, saves labour, avoids manual marking errors and significantly improves the efficiency of marking pile foundations.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system framework related to an application scenario in an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for marking a pile foundation according to an embodiment of the present application;
fig. 3 is a schematic diagram of obtaining a plurality of region images of the pile foundation template size according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a device for marking a pile foundation according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At the present stage, marking each pile foundation on the drawing image of a construction project requires manually locating and frame-selecting every pile foundation on the image and naming each selected pile foundation. However, a construction project often needs a large number of pile foundations, which means that a large number of pile foundations appear on the drawing image; marking each of them by hand, as in the prior art, is tedious, the workload is huge, considerable labour is consumed and manual marking errors occur very easily.
To solve this problem, in the embodiments of the present application a pile foundation template is determined in response to an operation of marking one pile foundation on the drawing image, and the pile foundation type, pile foundation code and pile foundation number N (where N is a positive integer greater than or equal to 2) corresponding to the template are determined; image recognition is performed on a target image based on the template to obtain N non-overlapping region images of the template size that each contain a pile foundation, as N pile foundation images, the target image being the whole drawing image or a part of it; and the pile foundations in the N pile foundation images are marked based on the pile foundation code and the pile foundation number N. After the template is determined, the pile foundations on the target image are thus identified and marked automatically by image recognition, which greatly reduces the workload of marking pile foundations manually, saves labour, avoids manual marking errors and greatly improves marking efficiency.
For example, one scenario of the embodiments of the present application is the scenario shown in fig. 1, which includes a user terminal 101 and a processor 102. A user marks a pile foundation on the construction project drawing image displayed on the user terminal 101. First, the processor 102 determines a pile foundation template and the corresponding pile foundation type, pile foundation code and pile foundation number N; then the processor 102 performs image recognition, based on the template, on a target image selected from the drawing image to obtain N non-overlapping region images of the template size as N pile foundation images; finally, the processor 102 marks the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N, so that the N marked pile foundations are displayed on the drawing image shown by the user terminal 101.
In the above application scenario, the actions of the embodiments of the present application are described as being performed by the processor 102; however, the present application does not restrict the executing subject, as long as the actions disclosed in the embodiments are performed.
The above scenario is only one example of a scenario provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
The following describes in detail a specific implementation manner of the method for marking a pile foundation and the related device in the embodiments of the present application by way of embodiments with reference to the accompanying drawings.
Exemplary method
Referring to fig. 2, a schematic flow chart of a method for marking a pile foundation in the embodiment of the present application is shown. In this embodiment, the method may include, for example, the steps of:
step 201: any pile foundation on the drawing image is marked by clicking or frame selection, the marked pile foundation is used as a pile foundation template, and the type of the pile foundation, the code of the pile foundation and the number N of the pile foundations matched with the pile foundation template are determined, wherein N is a positive integer greater than or equal to 2.
Because the drawing image of a construction project contains a large number of pile foundations, marking them manually as in the prior art is tedious, the workload is huge, considerable labour is consumed and manual marking errors occur very easily. In the embodiment of the application, the pile foundations on the drawing image are therefore marked automatically to reduce the workload of manual marking. First, one pile foundation on the drawing image is marked and taken as a pile foundation template, and the pile foundation type, pile foundation code and pile foundation number N corresponding to the template are determined. The pile foundation code is the preset code corresponding to this pile foundation type, and the pile foundation number N is the preset number of pile foundations of this type that need to be marked.
Since the common pile foundation types in construction engineering include manually excavated piles, pipe-sinking cast-in-place piles, bored cast-in-place piles, anchored static-pressure piles, long-auger bored cast-in-place piles, precast tubular piles and the like, the pile foundation in the pile foundation template belongs to one of these types. That is, in an optional embodiment of the present application, the pile foundation type is a manually excavated pile, a pipe-sinking cast-in-place pile, a bored cast-in-place pile, an anchored static-pressure pile, a long-auger bored cast-in-place pile or a precast tubular pile.
As an example, the pile foundation type corresponding to a pile foundation template is determined to be "manually excavated pile", the pile foundation code is "a" and the pile foundation number is "63"; as another example, the pile foundation type corresponding to the template is determined to be "pipe-sinking cast-in-place pile", the pile foundation code is "b" and the pile foundation number is "13". Other examples are similar and are not repeated here.
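For illustration only, the information determined in step 201 can be pictured as a small record that bundles the cropped template image with its type, code and number N. The Python sketch below is an assumption about one possible internal representation, not something the application prescribes; the class and field names are hypothetical.

    # A minimal sketch (assumed, not prescribed by the application) of the data
    # determined in step 201: the cropped template image plus its type, code and N.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PileTemplate:
        image: np.ndarray   # template-sized crop framed by the user on the drawing image
        pile_type: str      # e.g. "manually excavated pile"
        code: str           # preset code for this pile type, e.g. "a"
        count_n: int        # preset number N of piles of this type to mark, N >= 2

    # Hypothetical usage with a placeholder crop; in practice the crop comes from
    # the user's click or frame selection on the drawing image.
    template = PileTemplate(image=np.zeros((40, 40), dtype=np.uint8),
                            pile_type="manually excavated pile", code="a", count_n=63)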
Step 202: select a target area within the drawing image, and take the image in the target area as a target image.
Step 203: crop a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, and repeat this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the template.
Step 204: perform image recognition on the region images in the target image based on the pile foundation template, and obtain, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images.
in this application embodiment, after the pile foundation template is determined in step 201, the whole drawing image or part of the drawing image that can be selected in a flexible way is used as the target image, and the pile foundation is automatically identified on the target image through the image identification technology according to the pile foundation template until N non-overlapping area images including the pile foundation, which are the size of the pile foundation template, are obtained, and the mode greatly reduces the workload of selecting each pile foundation by manually finding the position frame of each pile foundation on the drawing image, so that a large amount of manpower and energy are saved.
Specifically, performing image recognition on the target image based on the pile foundation template means the following. First, the similarity between each template-sized region image of the target image and the pile foundation template is calculated; the higher the similarity, the more likely the region image contains a pile foundation to be marked. Then the template-sized region images are sorted by similarity, with higher-ranked images being more similar to the template, and the first N non-overlapping template-sized region images are selected from the sorted images; these are the region images that are most similar to the template and do not overlap one another, and they can be taken as the N pile foundation images. Therefore, in an optional implementation of this embodiment of the application, step 204 may include the following steps (a sketch of which is given after step C):
Step A: obtaining the similarity between each region image of the target image and the pile foundation template;
Step B: sorting all the region images by similarity from high to low, and determining the first N non-overlapping region images in the ranking that contain pile foundations;
Step C: extracting those first N non-overlapping region images containing pile foundations as the N pile foundation images.
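The following Python sketch shows one way steps A to C could be realised, assuming each candidate region image is described by the (x, y) position of its top-left corner and a similarity score; it is an illustration under those assumptions, not the application's reference implementation.

    # Steps A-C as a greedy selection: sort candidates by similarity (step B) and keep
    # the N best whose template-sized rectangles do not overlap anything already kept
    # (steps B-C). The candidate format (x, y, score) is an assumption.
    def select_top_n_non_overlapping(candidates, n, tpl_w, tpl_h):
        def overlaps(a, b):
            ax, ay = a
            bx, by = b
            return abs(ax - bx) < tpl_w and abs(ay - by) < tpl_h

        chosen = []
        for x, y, score in sorted(candidates, key=lambda c: c[2], reverse=True):
            if all(not overlaps((x, y), (cx, cy)) for cx, cy, _ in chosen):
                chosen.append((x, y, score))
                if len(chosen) == n:
                    break
        return chosen  # positions of the N pile foundation images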
The image recognition may be implemented with OpenCV, a cross-platform computer vision library released under the BSD licence (open source) that runs on Linux, Windows, Android and Mac OS. It is lightweight and efficient, consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby and MATLAB, and implements many general algorithms in image processing and computer vision. Of course, the embodiment of the present application does not limit the specific image recognition method; libraries such as FreeImage, CImg or CxImage may also be used.
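As one possible realisation of the similarity calculation with OpenCV, the sketch below scores every template-sized position using cv2.matchTemplate with normalised cross-correlation; the choice of TM_CCOEFF_NORMED and the helper name are assumptions, since the application does not prescribe a particular matching function. Its output can be passed directly to the selection sketch above.

    # Similarity of every template-sized position, computed with OpenCV template
    # matching; TM_CCOEFF_NORMED is an assumed choice of similarity measure.
    import cv2

    def match_scores(target_gray, template_gray):
        # result[y, x] is the similarity of the template placed with its top-left at (x, y)
        result = cv2.matchTemplate(target_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        return [(x, y, float(result[y, x]))
                for y in range(result.shape[0])
                for x in range(result.shape[1])]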
The plurality of template-sized region images are in fact obtained by traversing the target image and sequentially cropping region images of the template size at a preset step. That is, in an optional implementation of this embodiment of the application, the template-sized region images are obtained by first cropping a template-sized region image at the upper-left corner of the drawing image and moving right by the preset step each time until the right boundary of the drawing image is reached, then moving down by the preset step from the left boundary of the drawing image each time and repeating the step of moving right by the preset step until the right boundary is reached, until the target image has been traversed.
In the embodiment of the application, the preset step is a preset movement distance; for a drawing image it is greater than or equal to one pixel. That is, in an optional implementation of this embodiment, the preset step is at least one pixel; for example, it may be one pixel, or five pixels, and so on. The smaller the preset step, the more template-sized region images are obtained and the more accurate the recognition result.
As an example, fig. 3 is a schematic diagram of obtaining a plurality of template-sized region images. First, a template-sized region image is cropped at the upper-left corner of the drawing image and moved right by one pixel each time until the right boundary of the drawing image is reached; then the window is moved down by one pixel from the left boundary each time, and the step of moving right one pixel at a time until the right boundary is reached is repeated, until the target image has been traversed and the plurality of template-sized region images are obtained.
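A plain Python sketch of this traversal is given below: the window starts at the upper-left corner, moves right by the preset step until the right boundary, then drops down by the step and repeats, until the target image has been traversed. The similarity argument stands in for whatever measure step 204 uses and is a placeholder assumption.

    # Sliding-window traversal as described for fig. 3; `similarity` is a
    # caller-supplied placeholder, e.g. the OpenCV-based score above.
    def traverse(target, template, similarity, step=1):
        th, tw = template.shape[:2]
        H, W = target.shape[:2]
        candidates = []
        for y in range(0, H - th + 1, step):       # move down by the preset step
            for x in range(0, W - tw + 1, step):   # move right until the right boundary
                crop = target[y:y + th, x:x + tw]  # template-sized region image
                candidates.append((x, y, similarity(crop, template)))
        return candidates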
It should be noted that when the target image is large, performing image recognition directly on it takes a long time because the whole target image must be traversed. To further improve recognition efficiency, in the embodiment of the application the target image is divided recursively into a plurality of target sub-images and image recognition is performed on the sub-images in parallel based on the pile foundation template. This recursive-division, parallel-processing approach also obtains the N non-overlapping template-sized region images as N pile foundation images, and greatly shortens the recognition time. Therefore, in an optional implementation of this embodiment, the method may further include: recursively dividing the target image into a plurality of target sub-images; correspondingly, step 204 may be, for example: performing image recognition on the plurality of target sub-images in parallel based on the pile foundation template to obtain N non-overlapping template-sized region images as N pile foundation images.
As an example, recursively dividing the target image into a plurality of target sub-images may mean dividing the target image into two images, dividing each of those two images into two again, and so on, until the plurality of target sub-images is obtained (see the sketch below).
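One way to picture the recursive division and parallel recognition is sketched below: the target region is halved recursively until each piece is small enough, and the pieces are then matched in separate processes. The overlap margin (so that a pile lying on a split line is not missed) and the helper names are assumptions that the application does not spell out; boundary clipping is omitted for brevity.

    # Recursive halving of a region (x, y, w, h) into sub-regions, followed by
    # parallel matching. Extending the first half by an overlap strip equal to the
    # template size is an assumed safeguard for piles straddling a split line;
    # min_side is assumed to be comfortably larger than 2 * overlap.
    from concurrent.futures import ProcessPoolExecutor

    def split_recursively(region, min_side, overlap):
        x, y, w, h = region
        if max(w, h) <= min_side:
            return [region]
        if w >= h:  # halve the longer side
            halves = [(x, y, w // 2 + overlap, h), (x + w // 2, y, w - w // 2, h)]
        else:
            halves = [(x, y, w, h // 2 + overlap), (x, y + h // 2, w, h - h // 2)]
        return [sub for half in halves
                for sub in split_recursively(half, min_side, overlap)]

    def match_in_parallel(match_fn, regions):
        # match_fn(region) returns the candidate list for one sub-region; it must be
        # a top-level (picklable) function for ProcessPoolExecutor to dispatch it.
        with ProcessPoolExecutor() as pool:
            return [c for cands in pool.map(match_fn, regions) for c in cands]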
Step 205: mark the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N.
In this embodiment of the application, after the N pile foundation images are obtained, the pile foundations they contain are the pile foundations to be marked; marking the pile foundation in a pile foundation image essentially means naming it according to the pile foundation code and the pile foundation number N.
As an example, when the pile foundation type corresponding to the template is "manually excavated pile", the pile foundation code is "a" and the pile foundation number N is "63", the pile foundations in the N pile foundation images may be marked in a preset order as "a1", "a2", "a3", ..., "a63", which represents 63 manually excavated piles on the drawing image; the preset order may be, for example, from left to right, or from top to bottom.
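The naming in this example can be sketched as follows, with a top-to-bottom, then left-to-right sort standing in for the preset order; both the ordering key and the function name are illustrative assumptions.

    # Name the N detected piles "a1" ... "aN" in an assumed preset order:
    # top-to-bottom, then left-to-right by the top-left corner of each pile image.
    def label_piles(pile_positions, code):
        ordered = sorted(pile_positions, key=lambda p: (p[1], p[0]))
        return {pos: f"{code}{i + 1}" for i, pos in enumerate(ordered)}

    labels = label_piles([(120, 40), (40, 40), (40, 120)], "a")
    # -> {(40, 40): 'a1', (120, 40): 'a2', (40, 120): 'a3'}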
It should be further noted that after the pile foundations in the N pile foundation images are marked based on the pile foundation code and the pile foundation number N in step 205, marking data for the pile foundations in the N pile foundation images is formed; this marking data is not fixed and may be adjusted as required. Therefore, in an optional implementation of this embodiment of the application, after step 205 the method may further include, for example, step D: adjusting the marking data of the pile foundations in the N pile foundation images.
Through the implementations provided by this embodiment, a pile foundation template is first determined in response to an operation of marking one pile foundation on the drawing image, together with the pile foundation type, pile foundation code and pile foundation number N corresponding to the template, N being a positive integer greater than or equal to 2; image recognition is then performed on the target image based on the template to obtain N non-overlapping region images of the template size that each contain a pile foundation, as N pile foundation images, the target image being the whole drawing image or a part of it; finally, the pile foundations in the N pile foundation images are marked based on the pile foundation code and the pile foundation number N. After the template is determined, the pile foundations on the target image are thus identified and marked automatically by image recognition, which greatly reduces the workload of marking pile foundations manually, saves labour, avoids manual marking errors and greatly improves marking efficiency.
Exemplary devices
Referring to fig. 4, a schematic structural diagram of a device for marking a pile foundation in the embodiment of the present application is shown. In this embodiment, the apparatus may specifically include:
a determining unit 401, configured to mark any pile foundation on the drawing image by click selection or frame selection, take the marked pile foundation as a pile foundation template, and determine the pile foundation type, pile foundation code and pile foundation number N matched with the template, where N is a positive integer greater than or equal to 2;
an obtaining unit 402, configured to select a target area within the drawing image range and set the image in the target area as a target image; crop a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the template; and perform image recognition on the region images in the target image based on the template and obtain, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images;
and a marking unit 403, configured to mark the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N.
In an optional implementation manner of the embodiment of the present application, the obtaining unit 402 includes an obtaining subunit and a determining subunit:
the obtaining subunit is configured to obtain the similarity between each template-sized region image of the target image and the pile foundation template;
and the determining subunit is configured to sort the template-sized region images by similarity from high to low and determine the first N non-overlapping template-sized region images as the N pile foundation images.
In an optional implementation of the embodiment of the application, the template-sized region images are obtained by first cropping a template-sized region image at the upper-left corner of the drawing image and moving right by the preset step each time until the right boundary of the drawing image is reached, then moving down by the preset step from the left boundary of the drawing image each time and repeating the step of moving right by the preset step until the right boundary is reached, until the target image has been traversed.
In an optional implementation of the embodiment of the present application, the preset step is at least one pixel.
In an optional implementation of the embodiment of the present application, the pile foundation type is a manually excavated pile, a pipe-sinking cast-in-place pile, an anchored static-pressure pile, a long-auger bored cast-in-place pile, or a precast tubular pile.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes a dividing unit:
the dividing unit is used for recursively dividing the target image into a plurality of target sub-images;
correspondingly, the obtaining unit 402 is specifically configured to:
and performing parallel image recognition processing on the plurality of target sub-images based on the pile foundation template to obtain N non-overlapping area images with the size of the pile foundation template as N pile foundation images.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes an adjusting unit:
and the adjusting unit is used for adjusting the marking data of the pile foundations in the N pile foundation images.
Through the implementations provided by this embodiment, the device for marking a pile foundation comprises a determining unit, an obtaining unit and a marking unit. The determining unit determines a pile foundation template in response to an operation of marking one pile foundation on the drawing image, together with the pile foundation type, pile foundation code and pile foundation number N corresponding to the template, N being a positive integer greater than or equal to 2; the obtaining unit performs image recognition on the target image based on the template to obtain N non-overlapping region images of the template size that each contain a pile foundation, as N pile foundation images, the target image being the whole drawing image or a part of it; and the marking unit marks the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N. After the template is determined, the pile foundations on the target image are thus identified and marked automatically by image recognition, which greatly reduces the workload of marking pile foundations manually, saves labour, avoids manual marking errors and greatly improves marking efficiency.
In addition, an embodiment of the present application further provides a terminal device, where the terminal device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is used for executing the method for marking the pile foundation according to the instructions in the program code.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a program code, and the program code is used for executing the method for marking a pile foundation according to the above method embodiment.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may refer to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and reference may be made to the method description for relevant details.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application. Those skilled in the art can now make numerous possible variations and modifications to the disclosed embodiments, or modify equivalent embodiments, using the methods and techniques disclosed above, without departing from the scope of the claimed embodiments. Therefore, any simple modification, equivalent change and modification made to the above embodiments according to the technical essence of the present application still fall within the protection scope of the technical solution of the present application without departing from the content of the technical solution of the present application.

Claims (9)

1. A method of marking a pile foundation, comprising:
marking any pile foundation on a drawing image by click selection or frame selection, taking the marked pile foundation as a pile foundation template, and determining the pile foundation type, pile foundation code and pile foundation number N matched with the pile foundation template, wherein N is a positive integer greater than or equal to 2;
selecting a target area within the drawing image range, and setting an image in the target area as a target image;
cropping a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, and repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the pile foundation template;
performing image recognition on the region images in the target image based on the pile foundation template, and obtaining, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images;
and marking the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N.
2. The method according to claim 1, wherein the performing image recognition on the region images in the target image based on the pile foundation template and obtaining, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images specifically comprises:
obtaining the similarity between each region image in the target image and the pile foundation template;
sorting all the region images by similarity from high to low, and determining the first N non-overlapping region images in the ranking that contain pile foundations;
and extracting those first N non-overlapping region images containing pile foundations as the N pile foundation images.
3. The method according to claim 2, wherein the cropping of a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, and repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the pile foundation template, specifically comprises:
sequentially cropping a plurality of region images starting from the upper-left corner of the target area, moving right by the preset step each time during the cropping until the right boundary of the target area is reached;
and moving down by the preset step from the left boundary of the target area each time, and repeating the step of moving right by the preset step until the right boundary of the target area is reached, until the target area has been traversed.
4. The method of claim 3, wherein the preset step is at least one pixel.
5. The method of any one of claims 1-4, wherein the pile foundation type is one of a manually excavated pile, a pipe-sinking cast-in-place pile, an anchored static-pressure pile, a long-auger bored cast-in-place pile, or a precast tubular pile.
6. The method according to claim 1, further comprising, after the marking of the pile foundations in each of the pile foundation images based on the pile foundation code and the pile foundation number N:
and adjusting the marking data of the pile foundations in the N pile foundation images.
7. A device for marking a pile foundation, comprising:
a determining unit, configured to mark any pile foundation on the drawing image by click selection or frame selection, take the marked pile foundation as a pile foundation template, and determine the pile foundation type, pile foundation code and pile foundation number N matched with the template, wherein N is a positive integer greater than or equal to 2;
an obtaining unit, configured to select a target area within the drawing image range and set the image in the target area as a target image; crop a region image of the same size as the pile foundation template starting from one side boundary of the target area, moving by a preset step each time towards the other side until the second side boundary of the target area is reached, repeating this until the target area has been traversed, so as to identify and extract all region images in the target image that have the same size as the template; and perform image recognition on the region images in the target image based on the template and obtain, through similarity calculation, N non-overlapping region images containing pile foundations from all the region images as N pile foundation images;
and a marking unit, configured to mark the pile foundations in the N pile foundation images based on the pile foundation code and the pile foundation number N.
8. A terminal device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method of marking a pile foundation according to any one of claims 1 to 6 according to instructions in the program code.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store a program code for performing the method of marking a pile foundation according to any one of claims 1-6.
CN202010783736.6A 2020-08-06 2020-08-06 Method and device for marking pile foundation Active CN111738232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783736.6A CN111738232B (en) 2020-08-06 2020-08-06 Method and device for marking pile foundation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010783736.6A CN111738232B (en) 2020-08-06 2020-08-06 Method and device for marking pile foundation

Publications (2)

Publication Number Publication Date
CN111738232A CN111738232A (en) 2020-10-02
CN111738232B (en) 2020-12-15

Family

ID=72658156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783736.6A Active CN111738232B (en) 2020-08-06 2020-08-06 Method and device for marking pile foundation

Country Status (1)

Country Link
CN (1) CN111738232B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465050B (en) * 2020-12-04 2024-02-09 广东拓斯达科技股份有限公司 Image template selection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902974A (en) * 2012-08-23 2013-01-30 西南交通大学 Image based method for identifying railway overhead-contact system bolt support identifying information
CN103020605A (en) * 2012-12-28 2013-04-03 北方工业大学 Bridge identification method based on decision-making layer fusion
CN104200238A (en) * 2014-09-22 2014-12-10 北京酷云互动科技有限公司 Station caption recognition method and station caption recognition device
CN106778541A (en) * 2016-11-28 2017-05-31 华中科技大学 A kind of identification in the multilayer beam China and foreign countries beam hole of view-based access control model and localization method
CN108182383A (en) * 2017-12-07 2018-06-19 浙江大华技术股份有限公司 A kind of method and apparatus of vehicle window detection

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324030B (en) * 2011-09-09 2013-11-06 广州灵视信息科技有限公司 Target tracking method and system based on image block characteristics
CN104408735B (en) * 2014-12-12 2017-07-14 电子科技大学 A kind of rectangular area recognition methods calculated based on improvement shape angle
CN108399621B (en) * 2018-03-29 2023-10-13 招商局重庆交通科研设计院有限公司 Engineering test piece rapid identification method and system
CN108898198A (en) * 2018-06-28 2018-11-27 深圳市有钱科技有限公司 The whole wooden house ornamentation method for customizing of one kind and terminal and a kind of storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902974A (en) * 2012-08-23 2013-01-30 西南交通大学 Image based method for identifying railway overhead-contact system bolt support identifying information
CN103020605A (en) * 2012-12-28 2013-04-03 北方工业大学 Bridge identification method based on decision-making layer fusion
CN104200238A (en) * 2014-09-22 2014-12-10 北京酷云互动科技有限公司 Station caption recognition method and station caption recognition device
CN106778541A (en) * 2016-11-28 2017-05-31 华中科技大学 A kind of identification in the multilayer beam China and foreign countries beam hole of view-based access control model and localization method
CN108182383A (en) * 2017-12-07 2018-06-19 浙江大华技术股份有限公司 A kind of method and apparatus of vehicle window detection

Also Published As

Publication number Publication date
CN111738232A (en) 2020-10-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant