CN113554355B - Road engineering construction management method and system based on artificial intelligence - Google Patents

Road engineering construction management method and system based on artificial intelligence

Info

Publication number
CN113554355B
CN113554355B
Authority
CN
China
Prior art keywords
area
image
operated
neural network
construction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111092847.3A
Other languages
Chinese (zh)
Other versions
CN113554355A (en)
Inventor
王闪 (Wang Shan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengjin Decoration Group Co ltd
Original Assignee
Jiangsu Zhengjin Architectural Decoration Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Zhengjin Architectural Decoration Engineering Co ltd
Priority to CN202111092847.3A
Publication of CN113554355A
Application granted
Publication of CN113554355B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06314 Calendaring for a resource
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08 Construction

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a road engineering construction management method and system based on artificial intelligence. In the method, an image of the area to be operated of the project and a set standard basic efficacy operation area image are taken as a group of input images and input into a trained first branch neural network, which outputs first generated images. A number of loop steps are then performed, each loop step being: each i-th generated image output by the first branch neural network in the previous loop step, together with the standard basic efficacy operation area image, is taken as a group of input images to the first branch neural network, and (i+1)-th generated images are output; a G-th generated image meeting set conditions is selected as the predicted operation area track generated image. Taking the center points of all basic efficacy operation areas in that image as nodes, one node is selected as the initial node, and an initial predicted road construction track is determined from the initial node and the remaining nodes based on a single-source shortest path algorithm.

Description

Road engineering construction management method and system based on artificial intelligence
Technical Field
The invention relates to the field of artificial intelligence and engineering management, in particular to a road engineering construction management method and system based on artificial intelligence.
Background
In the prior art, a road engineering construction track is generally planned over the construction area, constructors carry out the construction according to the planned track, and the required construction progress is fixed for a set period (for example, one day). Engineering managers then survey the construction progress periodically, estimate the deviation between the actual progress and the planned track, select a suitable management method according to that deviation, and carry out corrective treatment, for example improving the organization structure and the work flow, so as to accelerate the construction progress.
The drawback of this management approach is that the construction track is planned unreasonably at the outset: the planned track is often very simple, for example an S-shaped track, and the construction progress specified for each day is a fixed construction area. Large progress deviations therefore arise easily in the later stages of construction, and more corrective measures have to be taken later before the construction progress can be accelerated and the construction efficiency improved.
Disclosure of Invention
In order to solve the problem of low construction efficiency caused by unreasonable construction track planning of the conventional management method, the invention aims to provide a road engineering construction management method and system based on artificial intelligence, and the adopted technical scheme is as follows:
In one aspect, the specific scheme of the artificial intelligence based road engineering construction management method is as follows:
step S10, acquiring an image of the whole area to be operated of the project as an initial image; performing binarization processing on the initial image to obtain a binary image of the region to be operated, wherein the pixel value of a pixel point in the region to be operated in the binary image is a first pixel value, and the pixel value of a pixel point in a non-operation region in the binary image is a second pixel value; multiplying the pixel values of the pixel points at the corresponding positions in the initial image and the binary image of the area to be operated to obtain an image of the area to be operated of the project;
step S20, acquiring a trained first branch neural network, wherein the first branch neural network is used for demarcating a basic efficacy operation area with radius r in the area to be operated according to the learned inverse correlation relationship between the terrain complexity of the area to be operated and the radius of the basic efficacy operation area and the terrain complexity of the area to be operated in the engineering area image to be operated; the basic effect operation area is set to be circular, and the basic effect operation area is
equal in area to the sum of the working areas of the n operators in the actual working environment within the set time;
taking the image of the area to be operated of the project and the set standard basic efficacy operation area image as a group of input images, wherein the radius of the standard basic efficacy operation area in the standard basic efficacy operation area image is fixed; inputting the input images into the trained first branch neural network and outputting H1 first generated images, H1 being an integer, wherein the positions of the basic efficacy operation areas demarcated in the area to be operated differ between the first generated images;
step S30, performing a number of loop steps, each loop step being: taking each i-th generated image output by the first branch neural network in the previous loop step, together with the standard basic efficacy operation area image, as a group of input images to the first branch neural network, and outputting (i+1)-th generated images, i = 1, 2, …, G, where G is the number of loops; stopping the loop at the G-th loop step, when the output loss value of the first branch neural network is greater than a set limit value, and selecting, from the G-th generated images output by the first branch neural network in the (G-1)-th loop step, one G-th generated image meeting set conditions as the predicted operation area track generated image;
and step S40, according to the operation area track generated image selected in step S30, taking the center point of each basic efficacy operation area in the image as a node, selecting one node as the initial node, and determining an initial predicted road construction track from the initial node and the remaining nodes based on a single-source shortest path algorithm.
Preferably, the structure of the first branch neural network includes:
a to-be-operated-area feature extraction encoder, used for inputting the image of the area to be operated of the project and outputting a feature tensor of the area to be operated;
the basic efficacy area characteristic extraction encoder is used for inputting a set standard basic efficacy operation area image and outputting a basic efficacy area characteristic tensor;
and the construction area planning decoder is used for inputting the connection diagram of the feature tensor of the area to be operated and the feature tensor of the basic efficacy area and outputting a construction area planning image, namely each ith generation image.
Preferably, the training process of the first branch neural network is as follows:
(1) taking a set number of groups of project to-be-operated area images I1 and standard basic efficacy operation area images I2 as a training data set, wherein one project to-be-operated area image I1 and one standard basic efficacy operation area image I2 form a group of training samples;
(2) setting a label of a training sample, wherein the specific method comprises the following steps:
in the project to-be-operated area image I1, selecting a circle center at an arbitrary position in the area to be operated and generating a single standard basic efficacy operation area with radius r', ensuring that the single standard basic efficacy operation area lies completely within the area to be operated; setting the pixel value of each pixel point in the area to be operated to the first pixel value, the pixel value of each pixel point in the standard basic efficacy operation area to the third pixel value, and the pixel value of each pixel point in the other non-to-be-operated areas of the project to-be-operated area image I1 to the second pixel value, thereby obtaining a single label image I4 of the project to-be-operated area image I1;
by selecting circle centers at different positions in the area to be operated of the project to-be-operated area image I1, n label images can be determined, where I4^n denotes the n-th label image;
(3) training the first branch neural network with the training samples and the label images, wherein the training loss L is the total loss function value and combines two terms: L1, the newly added planned-area loss value, computed from the Euclidean distance between the construction area planning image output by the network and the label images I4^n, n = 1, 2, …, N, where N is the number of label images; and L2, the terrain environment adaptation loss value, which superposes Lc, the planned-area circularity loss value, Lx, the loss value for the planned area exceeding the area to be operated, and Lb, the terrain complexity loss value.
Preferably, the terrain complexity loss value Lb is calculated from the overall terrain complexity within the basic efficacy operation area in the construction area planning image, which is the superposition of the terrain complexity of each position within that basic efficacy operation area, and from the standard terrain complexity within a single basic efficacy operation area.
Preferably, in the process of training the first branch neural network, the construction area planning image output by the first branch neural network needs to be supervised by the trained second branch neural network in order to determine the overall terrain complexity; the structure of the second branch neural network comprises:
a semantic segmentation encoder, used for inputting the initial image and outputting a semantic segmentation feature map;
a semantic segmentation decoder, used for inputting the semantic segmentation feature map and outputting a semantic segmentation annotation image;
the overall terrain complexity is determined by summing the pixel values of all pixel points of the semantic segmentation annotation image that lie in the region corresponding to the basic efficacy operation area in the construction area planning image.
Preferably, the training process of the second branch neural network is as follows:
(1) acquiring a set number of initial images as training samples;
(2) setting a label of a training sample, wherein the specific method comprises the following steps:
acquiring a depth map and an RGB map of the initial image, wherein the initial image is an RGB-D four-channel image; dividing each pixel point into a set terrain complexity grade according to the depth gradient information of each pixel point in the depth map and the semantic information in the RGB image, and taking each terrain complexity grade as the pixel value of each pixel point of the initial image to obtain a semantic segmentation annotation image corresponding to the initial image as a label of a training sample;
(3) and training the second branch neural network, wherein the loss function of the network adopts a cross entropy loss function.
Preferably, the planned-area circularity loss value Lc is calculated from the area of the newly added basic efficacy operation area in the construction area planning image output by the first branch neural network, the edge length of the newly added basic efficacy operation area in the construction area planning image, and the radius when the newly added basic efficacy operation area is a circular area.
Preferably, the loss value Lx for the planned area exceeding the area to be operated is calculated from the intersection ratio of the basic efficacy operation area and the area to be operated in the construction area planning image output by the first branch neural network, the area of the newly added basic efficacy operation area in the construction area planning image, and the area of the area to be operated in the construction area planning image.
Preferably, the method further comprises the following steps:
step S50, obtaining
the actual operation tracks of the n operators during the construction process, the actual operation tracks being used to represent the current engineering construction progress;
step S60, comparing the actual operation track with the initial predicted road construction track, judging the deviation degree of the construction progress according to the compared track deviation result, re-determining the unfinished to-be-operated area image according to the content in step S10 when the deviation degree of the construction progress is judged to be larger than the set degree, repeating the content in the steps S20-S40, and updating the predicted road construction track;
and step S70, sending the construction progress deviation degree and the track deviation result to a monitoring platform, wherein the monitoring platform is used for carrying out engineering deviation correction measure processing according to the obtained track deviation result and providing engineering construction management measure suggestions under the current construction progress.
In another aspect, the specific scheme of the artificial intelligence based road engineering construction management system is as follows:
the road engineering construction management method comprises a memory, a processor and a computer program which is run on the memory and the processor, wherein the processor is coupled with the memory, and the processor realizes the road engineering construction management method when executing the computer program.
The invention has the following beneficial effects:
the trained neural network is utilized, the area of a basic efficiency operation area defined by each position can be adjusted according to the complexity of the specific terrain where different positions of the operation area are located, the road construction track can be automatically predicted to be planned, the planned construction track is more reasonable, the progress deviation generated in the construction process can be reduced after workers carry out construction operation according to the predicted road construction track, the progress deviation rectification processing in the later period is facilitated, and therefore the efficiency of road engineering construction is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of steps S10 to S40 in the road construction management method of embodiment 1;
FIG. 2 is a structural diagram of a first branched neural network and a second branched neural network in embodiment 1;
FIG. 3 is a flowchart of steps S50 to S70 in the road construction management method of embodiment 1;
fig. 4 is a schematic diagram of hardware devices of the road engineering construction management system in embodiment 2.
Detailed Description
The following describes the specific embodiments of the present invention with reference to the drawings.
Example 1:
referring to fig. 1, a flowchart of a road engineering construction management method based on artificial intelligence according to an embodiment of the present invention is shown, where the method includes the following steps:
step S10, acquiring an image of the whole area to be operated of the project as an initial image; performing binarization processing on the initial image to obtain a binary image of the area to be operated, wherein the pixel value of a pixel point in the area to be operated in the binary image is a first pixel value, and the pixel value of a pixel point in a non-operation area in the binary image is a second pixel value; and multiplying the pixel values of the pixel points at the corresponding positions in the initial image and the binary image of the area to be operated to obtain the image of the area to be operated of the project.
Understandably, the initial image in this step is acquired as follows: an unmanned aerial vehicle carrying an RGB-D camera with a top-down viewing angle is used to acquire images of the project area to be operated; because a single captured image cannot cover the whole project area to be operated, feature point matching and image stitching are performed on the multiple captured RGB-D four-channel images, and the resulting image of the complete project area to be operated serves as the initial image.
And the binary image of the area to be worked after the binarization processing has the same size as the initial image. Optionally, the pixel value of the pixel point in the region to be operated in the binary image is set to 1, and the pixel value of the pixel point in the non-operation region is set to 0.
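For illustration only, the preprocessing of step S10 (binarization followed by pixel-wise multiplication) can be sketched with OpenCV and NumPy as below; the threshold value, function name and file name are assumptions and not part of the original disclosure.

```python
import cv2
import numpy as np

def extract_work_area(initial_bgr: np.ndarray, threshold: int = 127) -> np.ndarray:
    """Binarize the initial image and mask it so that only the area to be
    operated keeps its original pixel values (sketch of step S10)."""
    gray = cv2.cvtColor(initial_bgr, cv2.COLOR_BGR2GRAY)
    # First pixel value = 1 inside the area to be operated, second pixel value = 0 outside.
    _, binary = cv2.threshold(gray, threshold, 1, cv2.THRESH_BINARY)
    # Multiply corresponding pixels of the initial image and the binary mask.
    return initial_bgr * binary[..., np.newaxis]

# Usage with an assumed file name:
# initial = cv2.imread("site_orthomosaic.png")
# work_area_img = extract_work_area(initial)
```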
Step S20, acquiring a trained first branch neural network, wherein the first branch neural network is used for demarcating a basic efficacy operation area with radius r in the area to be operated according to the learned inverse correlation relationship between the terrain complexity of the area to be operated and the radius of the basic efficacy operation area and the terrain complexity of the area to be operated in the engineering area image to be operated; the basic effect operation area is set to be circular, and the area of the basic effect operation area is
the sum of the working areas corresponding to the daily workload of each of the n operators in the actual working environment.
Taking the image of the area to be operated in the project and the image of the set standard basic efficacy operation area as a group of input images, wherein the area of the standard basic efficacy operation area in the image of the standard basic efficacy operation area is known
to be n·S, the sum of the working areas corresponding to one day's workload of the n operators in the standard working environment, where S is the working area corresponding to the daily workload of a single operator in the standard working environment; the radius of the standard basic efficacy operation area is therefore r' = √(n·S/π).
Inputting the input images into the trained first branch neural network and outputting H1 first generated images; the positions of the planning regions demarcated in the area to be operated differ between the first generated images.
And the pixel value of the pixel point in the basic efficacy operation area in the standard basic efficacy operation area image is a third pixel value, and the pixel value of the pixel point in the non-basic efficacy operation area in the basic efficacy operation area image is a second pixel value.
As shown in fig. 2, the structure of the first branch neural network includes:
a to-be-operated-area feature extraction encoder, used for inputting the image of the area to be operated and outputting a feature tensor of the area to be operated;
a basic efficacy area feature extraction encoder for inputting a set standard basic efficacy operation area image (i.e., a binary image of the basic efficacy area in fig. 2) and outputting a feature tensor of the basic efficacy area;
the construction area planning decoder is configured to input a connection diagram of the feature tensor of the to-be-worked area and the feature tensor of the basic efficacy area, and output a construction area planning image, which is the ith generation image (including the first generation image) mentioned in this embodiment.
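The patent does not disclose layer-level details of these modules, so the following PyTorch sketch only mirrors the described topology (two encoders whose feature tensors are concatenated and passed to a planning decoder); all channel counts, layer types and the output activation are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class FirstBranchNet(nn.Module):
    """Sketch of the first branch: to-be-operated-area encoder plus basic
    efficacy area encoder, concatenated features, planning decoder."""
    def __init__(self):
        super().__init__()
        self.work_area_encoder = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.efficacy_area_encoder = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.planning_decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, work_area_img, efficacy_area_img):
        f1 = self.work_area_encoder(work_area_img)
        f2 = self.efficacy_area_encoder(efficacy_area_img)
        # Connection (concatenation) of the two feature tensors along channels.
        return self.planning_decoder(torch.cat([f1, f2], dim=1))
```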
In this step, the construction area planning image output by the first branch neural network needs to be supervised by the trained second branch neural network. As shown in fig. 2, the structure of the second branch neural network includes:
the semantic segmentation coder is used for inputting an initial image and outputting a semantic segmentation feature map;
and the semantic segmentation decoder is used for inputting the semantic segmentation feature map and outputting the semantic segmentation annotation image.
Because the training process of the first branch neural network needs to use the output result of the second branch neural network, the second branch neural network needs to be trained before the training of the first branch neural network, and based on this consideration, the training process of the second branch neural network is first introduced as follows:
(1) acquiring a plurality of initial images as training samples;
(2) setting the label of a training sample, wherein the specific method is as follows: since the initial image is an RGB-D four-channel image captured by an RGB-D camera, a depth map and an RGB map of the initial image are obtained; each pixel point is assigned a terrain complexity grade according to its depth gradient information in the depth map and the semantic information in the RGB image, and each terrain complexity grade is then taken as the pixel value of the corresponding pixel point of the initial image, giving the semantic segmentation annotation image corresponding to the initial image, namely the label.
In the embodiment, the depth gradient of each pixel point is divided into five grades, and the pixel points in the first depth gradient range are marked as grade one; marking the pixel points in the second depth gradient range as grade two; marking the pixel points in the third depth gradient range as grade three; marking the pixel points in the fourth depth gradient range as grade four; and marking the pixel points in the fifth depth gradient range as grade five. After the division into five levels, the level one corresponds to the terrain complexity level under the ideal condition, and the terrain complexity is sequentially increased from the level two to the level five.
When five grades are divided, the adjacent depth gradient ranges are not overlapped, namely the upper limit value of the first depth gradient range is smaller than the lower limit value of the second depth gradient range, the upper limit value of the second depth gradient range is smaller than the lower limit value of the third gradient range, and the like.
(3) And training the second branch neural network by taking a plurality of groups of initial images and corresponding semantic segmentation labeling images as training samples and labels, wherein the loss function of the network adopts a cross entropy loss function.
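A minimal sketch of how the terrain complexity labels could be derived from the depth channel as described (depth-gradient magnitude mapped to five complexity grades); the gradient thresholds are assumptions, since the patent does not specify the five depth gradient ranges.

```python
import numpy as np

def terrain_complexity_labels(depth: np.ndarray,
                              thresholds=(0.02, 0.05, 0.10, 0.20)) -> np.ndarray:
    """Map each pixel's depth-gradient magnitude to a complexity grade 1..5.
    Grade 1 = near-ideal terrain; grade 5 = most complex. The thresholds are
    illustrative only."""
    gy, gx = np.gradient(depth.astype(np.float32))
    grad_mag = np.hypot(gx, gy)
    # np.digitize with 4 increasing bounds yields bins 0..4 -> grades 1..5.
    return np.digitize(grad_mag, thresholds).astype(np.uint8) + 1
```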
A second branch neural network is obtained, and the training process of the first branch neural network is described as follows:
(1) A number of groups of project to-be-operated area images I1 and standard basic efficacy operation area images I2 are used as a training data set, wherein one project to-be-operated area image I1 and one standard basic efficacy operation area image I2 constitute a group of training samples.
(2) Setting the label of a training sample, wherein the specific method is as follows:
Manual labeling is used. Specifically, a circle center is randomly selected in the to-be-operated area of the project to-be-operated area image and a single standard basic efficacy operation area with radius r' is generated, ensuring that this single standard basic efficacy operation area lies completely within the area to be operated; the pixel value of each pixel point in the area to be operated is set to 1 (namely the first pixel value), the pixel value of each pixel point in the standard basic efficacy operation area is set to 2 (namely the third pixel value), and the pixel value of each pixel point in the other non-to-be-operated areas is set to 0 (namely the second pixel value), thereby obtaining a single label image I4.
Similarly, a number of circle centers at different positions are randomly selected in the area to be operated, and a number of label images I4 of a group of training samples are obtained according to the above method; collectively, these label images are taken as the training label data, where each individual label image is denoted I4^n, i.e. the n-th label image.
(3) The first branch neural network is trained with the training samples and the label images. The training loss L is the total loss function value and combines two terms. The first term, L1, is the newly added planned-area loss value, computed from the Euclidean distance (the L2 norm) between the construction area planning image output by the network and the label images I4^n, n = 1, 2, …, N, where N is the number of label images; this term supervises the network so that the Euclidean distance between the network output image and at least one of the label images is sufficiently small. The second term, L2, is the terrain environment adaptation loss value, which is the superposition of Lc, the planned-area circularity loss value, Lx, the loss value for the planned area exceeding the area to be operated, and Lb, the terrain complexity loss value.
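The exact expression of the newly added planned-area loss is given only as a formula image in the original document; one reading consistent with the stated supervision (the output should be close in Euclidean distance to one of the label images) is the minimum L2 distance over the N label images, and the sketch below implements that assumed form.

```python
import torch

def planned_area_increase_loss(output_img: torch.Tensor,
                               label_imgs: torch.Tensor) -> torch.Tensor:
    """Assumed form of the L1 term: smallest Euclidean (L2) distance between
    the construction area planning image and the N label images.
    output_img: (1, H, W); label_imgs: (N, 1, H, W)."""
    diffs = label_imgs - output_img.unsqueeze(0)               # (N, 1, H, W)
    dists = torch.linalg.vector_norm(diffs.flatten(1), dim=1)  # per-label L2 norm
    return dists.min()
```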
Specifically, the newly added planning region is obtained by subtracting, point by point, the pixel values of the input image from the pixel values of the output image of the first branch neural network. The planned-area circularity loss value Lc is then determined from the ratio of the area of the newly added planning region in the construction area planning image to its edge length, together with the radius when the newly added planning region is a circular area; this loss constrains the newly added planning region to remain circular. It can be understood that if the newly added planning region is a circular area, the newly added planning region in the construction area planning image corresponds to one basic efficacy operation area.
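The exact Lc formula is likewise given only as a formula image; since the description says it is built from the area of the newly added planning region, its edge length and the radius of the circular case, the sketch below uses the standard circularity measure 4πA/P² (equal to 1 for a perfect circle) as an assumed stand-in.

```python
import cv2
import numpy as np

def circularity_penalty(new_region_mask: np.ndarray) -> float:
    """Assumed stand-in for Lc: deviation of the newly added planning region
    from a perfect circle, using area A and perimeter P (4*pi*A / P^2 == 1
    for a circle). new_region_mask: uint8 image, 1 inside the region."""
    contours, _ = cv2.findContours(new_region_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, closed=True)
    if perimeter == 0:
        return 0.0
    return abs(1.0 - 4.0 * np.pi * area / perimeter ** 2)
```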
Understandably, the loss value Lx for the planned area exceeding the area to be operated is calculated from the intersection ratio of the planning region and the area to be operated in the construction area planning image output by the network, the area of the newly added planning region in the output image, and the area of the area to be operated; this loss value is used to ensure that the planning region in the output image does not exceed the area to be operated.
Understandably, the terrain complexity loss value Lb is calculated from the overall terrain complexity within a single planning region in the construction area planning image, which is the superposition of the terrain complexity of each position within that planning region, and from the standard terrain complexity within a single planning region.
The overall terrain complexity is determined as follows: before the first branch neural network is trained with the training samples, the initial image in the training sample is input into the trained second branch neural network, which outputs the semantic segmentation annotation image corresponding to the initial image; the pixel value of each pixel point in that image corresponds to the terrain complexity grade of the pixel point at the corresponding position in the construction area planning image output by the first branch neural network. The overall terrain complexity within the planning region is therefore obtained by summing the pixel values of the pixel points of the semantic segmentation annotation image that lie within the planning region (i.e. the region corresponding to the planning region in the output construction area planning image).
The standard terrain complexity within a single planning region is set as follows: one of the five terrain complexity grades is taken as the standard terrain complexity, for example grade one; at this grade the pixel value (i.e. the complexity) of each pixel point within a single planning region is set to 1, and the pixel values of all the pixel points in the planning region are summed to obtain the standard terrain complexity.
The terrain complexity loss value Lb takes effect when the overall terrain complexity exceeds the standard terrain complexity; the goal of setting this loss value is therefore to prevent the sum of the terrain complexity grades of the pixel points in the newly added planning region from exceeding the sum of the terrain complexity grades of the basic efficacy operation area.
In this step, the planned-area circularity loss value Lc, the loss value Lx for the planned area exceeding the area to be operated, and the terrain complexity loss value Lb are superposed in equal proportion to form the terrain environment adaptation loss value L2. As an alternative implementation, weights can be assigned to the three loss terms: the terrain complexity loss value Lb, as the main loss, is given the greatest weight w1, while the planned-area circularity loss Lc and the loss Lx for the planned area exceeding the area to be operated, as secondary losses, are given weights w2 and w3 respectively, with w1 + w2 + w3 = 1, specifically w1 = 0.4, w2 = 0.3, w3 = 0.3.
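Putting these pieces together, the sketch below computes the overall terrain complexity as the sum of the semantic segmentation label values inside the planned region and combines the three terms with the example weights w1 = 0.4, w2 = 0.3, w3 = 0.3; the hinge form of Lb and the containment form of Lx are assumptions, since their exact formulas are formula images in the original.

```python
import numpy as np

def terrain_environment_loss(plan_mask, work_area_mask, seg_labels,
                             std_complexity, lc,
                             w1=0.4, w2=0.3, w3=0.3):
    """Weighted terrain environment adaptation loss: w1*Lb + w2*Lc + w3*Lx.
    plan_mask / work_area_mask: binary masks; seg_labels: per-pixel terrain
    complexity grades from the second branch; lc: precomputed circularity loss.
    The forms of Lb and Lx below are assumptions, not the patented formulas."""
    # Overall terrain complexity: sum of label values inside the planned region.
    overall = float(seg_labels[plan_mask > 0].sum())
    lb = max(0.0, overall - std_complexity)          # assumed hinge form
    inside = float(np.logical_and(plan_mask > 0, work_area_mask > 0).sum())
    lx = 1.0 - inside / max(float((plan_mask > 0).sum()), 1.0)  # fraction outside
    return w1 * lb + w2 * lc + w3 * lx
```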
The role of these loss values in the training of the first branch neural network is as follows: when the terrain complexity loss value Lb does not meet the set requirement, the first branch neural network needs to adjust the size of the newly added planning region so as to reduce the terrain complexity loss; for example, if Lb is larger than the set value, the area of the newly added planning region is reduced, ensuring that Lb decreases. At the same time, the planned-area circularity loss value Lc and the loss value Lx for the planned area exceeding the area to be operated monitor this adjustment: after the area of the planning region is adjusted, the planning region must still be circular and must not exceed the area to be operated, so the first branch neural network can reduce the terrain complexity loss only by adjusting the radius and the circle center position of the newly added planning region.
(4) A label image used in the first round of training of the network is taken as a new project to-be-operated area image I1 and, combined with the standard basic efficacy operation area image I2, forms a new training data set. New label images are then determined, specifically: a circle center is randomly selected in the to-be-operated area of the new project to-be-operated area image I1 to generate a single basic efficacy operation area (namely a planning region), ensuring that this single basic efficacy operation area lies completely within the area to be operated and has no overlap with the existing basic efficacy operation area; similarly, several new label images can be determined as the label data for training by randomly selecting circle centers at several different positions in the area to be operated. The first branch neural network is then trained for the next round, using the same loss function as in step (3).
Following this method, the first branch neural network is trained repeatedly until a new label image generated after some round of training can no longer satisfy the following conditions: the single basic efficacy operation area lies completely within the area to be operated and has no overlap with the existing basic efficacy operation areas. At that point the training of the first branch neural network is stopped; that is, training stops when, after some round of training, the newly added single basic efficacy operation area in the new label image exceeds the area to be operated or coincides with an existing basic efficacy operation area.
Step S30, a number of loop steps are performed, each loop step being: each i-th generated image output by the first branch neural network in the previous loop step, together with the standard basic efficacy operation area image, is taken as a group of input images to the first branch neural network, and (i+1)-th generated images are output, i = 1, 2, …; the loop is stopped at the G-th loop step, when the output loss value of the first branch neural network is greater than the set limit value, and one G-th generated image meeting the set conditions is selected, from the G-th generated images output by the first branch neural network in the (G-1)-th loop step, as the operation area track generated image.
The specific cycle process is as follows:
step S301, a first loop is performed, the loop including: inputting each first generated image and the standard basic efficacy operation area image as a group of input images into a first branch neural network, and outputting H corresponding to each group of input images by the network2A second generated image, then for H1Group input images, the network outputting a total of H1*H2A second generated image, each secondTwo basic efficacy work areas (namely planning areas) with different positions are defined in the to-be-processed area of the generated image.
Step S302, a second loop is performed, comprising: each second generated image and the standard basic efficacy operation area image are input as a group of input images into the first branch neural network, and the network outputs H3 third generated images for each group of input images; for the H1*H2 groups of input images, the network therefore outputs H1*H2*H3 third generated images in total, and three basic efficacy operation areas at different positions are demarcated in the to-be-operated area of each third generated image;
step S303, performing an i-th loop, i =3,4, …, the loop including: and inputting each ith generated image and the standard basic efficacy operation area image as a group of input images into a first branch neural network, and correspondingly outputting each group of input images by the network
Figure DEST_PATH_IMAGE050
For the (i +1) th generated image, then for H1*H2*…*HiGroup input images, the network outputting a total of H1*H2*…*Hi+1Generating (i +1) th generation images, and dividing (i +1) basic efficacy operation areas with different positions in the to-be-processed areas of the (i +1) th generation images;
step S304, until the G-th cycle is carried out, the output loss value of the first branch neural network is larger than the set limit value, and H output by the first branch neural network in the G-1-th cycle1*H2*…*HGAnd selecting one G-th generation image meeting set conditions from the G-th generation images as a working area track generation image. The setting conditions are as follows: and after G basic efficacy operation areas are defined by the area of the area to be processed of the G-th generated image, the remaining area of the idle area which is not defined is the minimum.
It should be noted that, in each cycle, a standard basic efficacy operation area image needs to be input to the first branch neural network, and the purpose is that after the i-1 th cycle, if the output loss value of the first branch neural network is greater than a set limit value, in the ith cycle, the network needs to automatically adjust the radius r' of the standard basic efficacy operation area on the basis of the standard basic efficacy operation area image, adjust to obtain a basic efficacy operation area with an appropriate size, and place the basic efficacy operation area in the ith generation image to obtain the (i +1) th generation image.
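A high-level sketch of the generate-and-select loop of steps S301 to S304; the network call, its loss value and the free-area computation are abstracted behind assumed helpers, and the selection criterion is the one stated above (the candidate with the smallest undemarcated free area).

```python
def generate_trajectory_image(net, work_area_img, std_efficacy_img,
                              loss_limit, max_cycles=50):
    """Sketch of the step-S30 loop: feed each generated image plus the standard
    basic efficacy area image back into the first branch network, stop when the
    network's output loss exceeds the set limit, and keep the previous
    generation's candidate with the least remaining free area.
    `net(inputs)` is assumed to return (candidate_images, loss_value)."""
    current = [work_area_img]                        # generation i
    for _ in range(max_cycles):
        next_gen, loss_value = [], None
        for img in current:
            candidates, loss_value = net((img, std_efficacy_img))
            next_gen.extend(candidates)
        if loss_value is not None and loss_value > loss_limit:
            # Loss limit exceeded in cycle G: choose among the G-th generated
            # images produced in cycle G-1 (held in `current`).
            return min(current, key=remaining_free_area)
        current = next_gen
    return min(current, key=remaining_free_area)

def remaining_free_area(generated_img):
    """Assumed helper: count pixels of the area to be operated not yet covered
    by a demarcated basic efficacy operation area (pixel value 1 = still free)."""
    return int((generated_img == 1).sum())
```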
And step S40, according to the operation area track generated image selected in step S30, the center point of each basic efficacy operation area in the image is taken as a node, one node is selected as the initial node, and an initial predicted road construction track is determined from the initial node and the remaining nodes based on an existing path planning method, such as the single-source shortest path algorithm (Dijkstra algorithm).
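A sketch of step S40 using Dijkstra's single-source shortest path algorithm over the center points of the basic efficacy operation areas; treating the centers as a complete graph with Euclidean edge weights and reading the track off in order of increasing shortest-path distance are assumptions, since the patent only names the algorithm.

```python
import heapq
import math

def predicted_trajectory(centers, start=0):
    """Order the basic efficacy area center points by Dijkstra shortest-path
    distance from the start node on a complete Euclidean graph (assumed
    interpretation of step S40)."""
    n = len(centers)
    dist = [math.inf] * n
    dist[start] = 0.0
    visited = [False] * n
    heap = [(0.0, start)]
    order = []
    while heap:
        d, u = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        order.append(centers[u])
        for v in range(n):
            if not visited[v]:
                w = math.dist(centers[u], centers[v])
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
    return order

# Example with illustrative center coordinates:
# track = predicted_trajectory([(0, 0), (30, 10), (12, 25)], start=0)
```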
Further, as shown in fig. 3, the road engineering construction management method further includes the following steps:
step S50, obtaining
the actual operation tracks of the n operators during the construction process; the actual operation tracks are used to represent the current engineering construction progress.
Understandably, the actual working trajectory determination process is as follows:
the method comprises the steps of obtaining a key point thermodynamic diagram of a step center of a constructor through a key point detection network (a neural network in the prior art) through continuous time sequence panoramic images, superposing the thermodynamic diagrams, and obtaining an actual operation track.
And step S60, comparing the actual operation track with the initial predicted road construction track, judging the degree of deviation of the construction progress according to the compared track deviation result, re-determining the image of the area to be processed which is not finished according to the content in the step S10 when the degree of deviation of the construction progress is judged to be larger than the set degree, repeating the content in the steps S20-S40, and updating the predicted road construction track.
Understandably, the actual operation track and the initial predicted road construction track are compared specifically as follows:
according to the key point thermodynamic diagram of the construction footstep center of each day constructors, comparing with a thermodynamic distribution diagram generated based on an initial predicted road construction track, determining whether a progress deviation condition exists, for example, the construction progress deviation degree is the area behind i planning areas, the set degree is the area of 5 planning areas, and when i is greater than 5, the road construction track needs to be predicted again.
And step S70, sending the construction progress deviation degree and the track deviation result to a monitoring platform, wherein the monitoring platform is used for carrying out engineering deviation correction measure processing according to the obtained track deviation result and providing engineering construction management measure suggestions under the current construction progress.
Understandably, the monitoring platform provides engineering construction management measure suggestions under the current construction progress, and the suggestions comprise:
organization measures, such as changing organization structures, dividing tasks into work, dividing management functions into work, and organizing workflow; management measures such as a method and means for adjusting progress management, changing construction management, strengthening management, and the like; economic measures such as capital required for implementing and accelerating the project construction progress and the like; technical measures such as design adjustment, construction method improvement, construction machine change and the like. And an implementer selects a proper engineering deviation rectifying measure to carry out road engineering construction management according to the actual situation.
The road engineering construction management method has the following advantages:
(1) according to the terrain complexity of different positions of the to-be-operated area, the area of the basic efficiency operation area defined by each position is adjusted, the planning of the road construction track is automatically predicted, the planned construction track is more reasonable, and the efficiency of road engineering construction is improved.
(2) The corresponding construction risk management strategy is obtained by performing comparative analysis on the actual operation track and the planned operation track (namely, the predicted road construction track), manual field observation or analysis is not needed, and the labor and time cost is saved.
Example 2:
the embodiment provides an artificial intelligence based road engineering construction management system, which includes a memory, a processor, and a computer program running on the memory and running on the processor, wherein the processor is coupled with the memory, and when executing the computer program, the processor implements the artificial intelligence based road engineering construction management method in embodiment 1.
As shown in fig. 4, the apparatus 600 of the road construction management system may include a CPU611, which may be a general-purpose CPU, a special-purpose CPU, or an execution unit for processing and executing other information. Further, the device 600 may also include a mass storage 612 and/or a read only memory ROM613, wherein the mass storage 612 may be configured to store various types of data including image data, algorithm data, intermediate results, and various programs required to operate the device 600, and the ROM613 may be configured to store power on self-test for the device 600, initialization of various functional modules in the system, drivers for basic input/output of the system, and data required to boot the operating system.
Optionally, the device 600 may also include other hardware platforms or components, such as one or more of the illustrated TPU (tensor processing unit) 614, GPU (graphics processing unit) 615, FPGA (field programmable gate array) 616, and MLU (machine learning unit) 617. It is to be understood that although various hardware platforms or components are shown in the device 600, this is by way of illustration and not of limitation, and one skilled in the art can add or remove corresponding hardware as may be desired. For example, the apparatus 600 may include only a CPU to implement the road construction trajectory prediction and construction management of the present invention.
The device 600 of the present invention may also include a communication interface 618 such that it may be connected to a local area network/wireless local area network (LAN/WLAN) 605 via the communication interface 618, which in turn may be connected to a local server 606 via the LAN/WLAN or to the Internet ("Internet") 607. Alternatively or additionally, device 600 of the present invention may also be directly connected to the internet or a cellular network based on wireless communication technology, such as third generation ("3G"), fourth generation ("4G"), or 5 generation ("5G") based wireless communication technology, through communication interface 618. In some application scenarios, the apparatus 600 of the present invention may also access the servers 608 and databases 609 of the external network as needed to obtain various known image models (e.g., keypoint detection networks, etc.), data, and modules, and may remotely store various data, such as the actual work trajectory of the road engineering worker for subsequent computational analysis.
The peripheral devices of the apparatus 600 may include a display device 602, an input device 603, and a data transmission interface 604. In one embodiment, the display device 602 may, for example, include one or more speakers and/or one or more visual displays configured to provide voice prompts and/or visual displays of the process monitoring results of the present invention. The input device 603 may include input buttons or controls, such as a keyboard, mouse, microphone, gesture capture camera, etc., configured to receive input of image data and/or user instructions. The data transfer interface 604 may include, for example, a serial interface, a parallel interface, or a universal serial bus interface ("USB"), a small computer system interface ("SCSI"), serial ATA, FireWire ("FireWire"), PCI Express, and a high-definition multimedia interface ("HDMI"), which are configured for data transfer and interaction with other devices or systems.
The aforementioned CPU611, mass memory 612, read only memory ROM613, TPU614, GPU615, FPGA616, MLU617 and communication interface 618 of the device 600 of the present invention may be interconnected via a bus 619 and enable data interaction with peripheral devices via the bus. Through the bus 619, the CPU611 may control other hardware components and their peripherals in the device 600, in one embodiment.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A road engineering construction management method based on artificial intelligence is characterized by comprising the following steps:
step S10, acquiring an image of the whole area to be operated of the project as an initial image; performing binarization processing on the initial image to obtain a binary image of the region to be operated, wherein the pixel value of a pixel point in the region to be operated in the binary image is a first pixel value, and the pixel value of a pixel point in a non-operation region in the binary image is a second pixel value; multiplying the pixel values of the pixel points at the corresponding positions in the initial image and the binary image of the area to be operated to obtain an image of the area to be operated of the project;
step S20, acquiring a trained first branch neural network, wherein the first branch neural network is used for demarcating a basic efficacy operation area with radius r in the area to be operated according to the learned inverse correlation relationship between the terrain complexity of the area to be operated and the radius of the basic efficacy operation area and the terrain complexity of the area to be operated in the engineering area image to be operated; the basic effect operation area is set to be circular, and the basic effect operation area is
equal in area to the sum of the working areas of the n operators in the actual working environment within the set time;
taking the image of the area to be operated of the project and the set standard basic efficacy operation area image as a group of input images, wherein the radius of the standard basic efficacy operation area in the standard basic efficacy operation area image is fixed; inputting the input images into the trained first branch neural network and outputting H1 first generated images, H1 being an integer, wherein the positions of the basic efficacy operation areas demarcated in the area to be operated differ between the first generated images;
step S30, performing a number of loop steps, each loop step being: taking each i-th generated image output by the first branch neural network in the previous loop step, together with the standard basic efficacy operation area image, as a group of input images to the first branch neural network, and outputting (i+1)-th generated images, i = 1, 2, …, G, where G is the number of loops; stopping the loop at the G-th loop step, when the output loss value of the first branch neural network is greater than a set limit value, and selecting, from the G-th generated images output by the first branch neural network in the (G-1)-th loop step, one G-th generated image meeting set conditions as the predicted operation area track generated image;
and step S40, according to the operation area track generated image selected in step S30, taking the center point of each basic efficacy operation area in the image as a node, selecting one node as the initial node, and determining an initial predicted road construction track from the initial node and the remaining nodes based on a single-source shortest path algorithm.
2. The artificial intelligence based road engineering construction management method according to claim 1, wherein the structure of the first branch neural network comprises:
an area-to-be-operated feature extraction encoder, which is used for inputting the image of the project area to be operated and outputting a feature tensor of the area to be operated;
a basic efficacy area feature extraction encoder, which is used for inputting the set standard basic efficacy operation area image and outputting a basic efficacy area feature tensor;
and a construction area planning decoder, which is used for inputting the concatenation of the area-to-be-operated feature tensor and the basic efficacy area feature tensor and outputting a construction area planning image, namely each i-th generated image.
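
Claim 2 describes two feature-extraction encoders whose output tensors are concatenated and decoded into a planning image. The sketch below is a minimal PyTorch layout under assumed channel counts and input sizes (none of which appear in the patent); it only illustrates the two-encoder/one-decoder wiring, not the patent's actual network.

import torch
import torch.nn as nn

def conv_encoder(in_ch: int, out_ch: int) -> nn.Sequential:
    """Small strided-convolution encoder producing a feature tensor."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch // 2, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch // 2, out_ch, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

class FirstBranchNet(nn.Module):
    """Two encoders (area to be operated / standard basic efficacy area) whose
    feature tensors are concatenated along the channel axis and decoded into a
    construction area planning image. Channel counts are assumptions."""
    def __init__(self, img_ch: int = 1, feat_ch: int = 32):
        super().__init__()
        self.area_encoder = conv_encoder(img_ch, feat_ch)
        self.basic_area_encoder = conv_encoder(img_ch, feat_ch)
        self.planning_decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_ch, feat_ch, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, img_ch, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, area_img: torch.Tensor, basic_area_img: torch.Tensor) -> torch.Tensor:
        area_feat = self.area_encoder(area_img)
        basic_feat = self.basic_area_encoder(basic_area_img)
        fused = torch.cat([area_feat, basic_feat], dim=1)  # concatenation of the two feature tensors
        return self.planning_decoder(fused)

if __name__ == "__main__":
    net = FirstBranchNet()
    x1 = torch.rand(1, 1, 64, 64)   # image of the project area to be operated
    x2 = torch.rand(1, 1, 64, 64)   # standard basic efficacy operation area image
    print(net(x1, x2).shape)        # torch.Size([1, 1, 64, 64])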
3. The method for managing road engineering construction based on artificial intelligence of claim 2, wherein the training process of the first branch neural network is as follows:
(1) taking a set number of groups of project area-to-be-operated images I1 and standard basic efficacy operation area images I2 as a training data set, wherein one project area-to-be-operated image I1 and one standard basic efficacy operation area image I2 form a group of training samples;
(2) setting a label of a training sample, wherein the specific method comprises the following steps:
in the project area-to-be-operated image I1, selecting a circle center at an arbitrary position within the area to be operated to generate a single standard basic efficacy operation area with radius r', ensuring that this single standard basic efficacy operation area lies entirely within the area to be operated; setting the pixel value of each pixel point in the area to be operated to the first pixel value, setting the pixel value of each pixel point in the standard basic efficacy operation area to a third pixel value, and setting the pixel value of each pixel point in the remaining non-operation areas of the project area-to-be-operated image I1 to the second pixel value, thereby obtaining a single label image I4 of the project area-to-be-operated image I1;
by selecting circle centers at different positions within the area to be operated of the project area-to-be-operated image I1, n label images can be determined, where I4n denotes the n-th label image;
(3) training the first branch neural network with the training samples and the label images, wherein the training loss function is:

L = L1 + L2

wherein L is the total loss function value and L1 is the planned-area increase loss value; L1 is obtained (its exact expression is given only as a formula image in the published text) from the Euclidean distance between the construction area planning image output by the network and the label images I4n, where N is the number of label images and n = 1, 2, …, N; L2 is the terrain environment adaptation loss value:

L2 = Lc + Lx + Lb

wherein Lc is the planned-area circularity loss value, Lx is the loss value for the planned area exceeding the area to be operated, and Lb is the terrain complexity loss value.
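
Unlike the loss formulas, the label-image construction in step (2) of claim 3 is fully specified in the text: place a single standard basic efficacy operation area (a disc of radius r') entirely inside the area to be operated and re-colour the three regions with the first, third and second pixel values. A minimal numpy sketch, assuming the area to be operated is given as a boolean mask and that the three pixel values are 1, 2 and 0 (the patent does not fix the concrete values):

import numpy as np

def make_label_image(work_mask: np.ndarray, center: tuple, r_prime: float,
                     first_val: int = 1, second_val: int = 0, third_val: int = 2) -> np.ndarray:
    """Build one label image I4: first_val inside the area to be operated,
    third_val inside the single standard basic efficacy operation area,
    second_val elsewhere. Raises if the disc is not fully inside the work area."""
    h, w = work_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    disc = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= r_prime ** 2
    if not np.all(work_mask[disc]):
        raise ValueError("standard basic efficacy area must lie entirely in the work area")
    label = np.full((h, w), second_val, dtype=np.uint8)
    label[work_mask] = first_val
    label[disc] = third_val
    return label

if __name__ == "__main__":
    # Hypothetical rectangular area to be operated inside a 64x64 image.
    mask = np.zeros((64, 64), dtype=bool)
    mask[8:56, 8:56] = True
    label = make_label_image(mask, center=(32, 32), r_prime=10.0)
    print(np.unique(label))   # [0 1 2]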
4. The artificial intelligence based road engineering construction management method according to claim 3, wherein the terrain complexity loss value Lb is calculated (the formula is given only as an image in the published text) from: the overall terrain complexity within the basic efficacy operation areas of the construction area planning image, obtained by superposing the terrain complexity of each position within those areas, and the standard terrain complexity within a single basic efficacy operation area.
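
Because the Lb formula is published only as an image, the sketch below is a guess offered purely to illustrate the quantities claim 4 names: it assumes Lb penalizes the amount by which the overall terrain complexity inside the planned basic efficacy operation areas exceeds the per-area standard complexity times the number of areas. This form is an assumption, not the patent's formula.

def terrain_complexity_loss(overall_complexity: float,
                            standard_complexity: float,
                            num_areas: int) -> float:
    """Hypothetical Lb: penalize overall terrain complexity in the planned
    basic efficacy areas beyond num_areas times the per-area standard.
    The real formula appears only as an image in the patent."""
    return max(0.0, overall_complexity - num_areas * standard_complexity)

if __name__ == "__main__":
    print(terrain_complexity_loss(overall_complexity=37.5, standard_complexity=12.0, num_areas=3))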
5. The method as claimed in claim 4, wherein, in the training of the first branch neural network, the construction area planning image output by the first branch neural network is supervised by a trained second branch neural network, which is used to determine the overall terrain complexity; the structure of the second branch neural network comprises:
the semantic segmentation encoder is used for inputting an initial image and outputting a semantic segmentation feature map;
the semantic segmentation decoder is used for inputting a semantic segmentation feature map and outputting a semantic segmentation annotation image;
the determining of global terrain complexity
Figure 310467DEST_PATH_IMAGE020
The method comprises the following steps: and obtaining the integral terrain complexity by solving the sum of pixel values of all pixel points in the corresponding region of the basic efficiency operation region in the construction region planning image in the semantic segmentation annotation image.
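
Claim 5 spells this computation out: sum the pixel values of the semantic segmentation annotation image over the region the planning image marks as basic efficacy operation areas. A minimal numpy sketch, assuming both inputs are arrays of the same shape and that the planning image marks the planned areas as nonzero pixels (an assumed encoding):

import numpy as np

def overall_terrain_complexity(seg_annotation: np.ndarray, planning_image: np.ndarray) -> float:
    """Sum of semantic-segmentation pixel values (terrain complexity grades)
    over the region that the construction area planning image marks as
    basic efficacy operation areas."""
    region = planning_image > 0          # assumed encoding of the planned areas
    return float(seg_annotation[region].sum())

if __name__ == "__main__":
    seg = np.random.randint(0, 4, size=(64, 64))      # complexity grades 0-3
    plan = np.zeros((64, 64), dtype=np.uint8)
    plan[20:40, 20:40] = 1                            # planned basic efficacy region
    print(overall_terrain_complexity(seg, plan))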
6. The artificial intelligence based road engineering construction management method according to claim 5, wherein the training process of the second branch neural network is as follows:
(1) acquiring a set number of initial images as training samples;
(2) setting a label of a training sample, wherein the specific method comprises the following steps:
acquiring a depth map and an RGB map of the initial image, wherein the initial image is an RGB-D four-channel image; assigning each pixel point to one of a set of terrain complexity grades according to the depth gradient information of that pixel point in the depth map and the semantic information in the RGB map, and taking each pixel point's terrain complexity grade as its pixel value to obtain a semantic segmentation annotation image corresponding to the initial image, which serves as the label of the training sample;
(3) training the second branch neural network, wherein the network's loss function is the cross-entropy loss.
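
Step (2) of claim 6 derives a terrain complexity grade per pixel from the depth gradient (and from semantic cues in the RGB channels, which are omitted here). The sketch below grades pixels by binning the depth-gradient magnitude with fixed thresholds; the number of grades and the threshold values are assumptions, not taken from the patent.

import numpy as np

def terrain_complexity_labels(depth: np.ndarray,
                              thresholds=(0.05, 0.15, 0.30)) -> np.ndarray:
    """Assign each pixel a terrain complexity grade (0..len(thresholds)) from
    the magnitude of the depth gradient. Semantic information from the RGB
    channels, which the claim also uses, is left out of this sketch."""
    gy, gx = np.gradient(depth.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # np.digitize maps each magnitude to a grade index according to the thresholds.
    return np.digitize(magnitude, thresholds).astype(np.uint8)

if __name__ == "__main__":
    depth = np.random.rand(64, 64)
    labels = terrain_complexity_labels(depth)
    print(labels.min(), labels.max())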
7. The artificial intelligence based road engineering construction management method according to claim 3, wherein the planned-area circularity loss value Lc is calculated (the formula is given only as an image in the published text) from: the area of the newly added basic efficacy operation area in the construction area planning image output by the first branch neural network, the edge length of that newly added basic efficacy operation area, and the radius the newly added basic efficacy operation area would have if it were a circular area.
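
The Lc expression is likewise published only as an image; the quantities named in claim 7 (the area of the newly added region, its edge length, and the radius of a circle of equal area) suggest a circularity check. The sketch below guesses a relative mismatch between the edge length and the perimeter of an equal-area circle; this is an illustration only, not the patent's formula.

import math

def circular_loss(new_area: float, edge_length: float) -> float:
    """Hypothetical Lc: how far the newly added planned region's perimeter is
    from that of a circle of the same area (0 when the region is a perfect
    circle). The patent's actual formula is published only as an image."""
    radius = math.sqrt(new_area / math.pi)       # radius of an equal-area circle
    circle_perimeter = 2.0 * math.pi * radius
    return abs(edge_length - circle_perimeter) / circle_perimeter

if __name__ == "__main__":
    # A 10x10 square has area 100 and perimeter 40; an equal-area circle has
    # perimeter of about 35.45, so the loss is small but nonzero.
    print(round(circular_loss(new_area=100.0, edge_length=40.0), 4))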
8. The artificial intelligence based road engineering construction management method according to claim 3, wherein the loss value Lx for the planned area exceeding the area to be operated is calculated (the formula is given only as an image in the published text) from: the intersection-over-union of the basic efficacy operation area and the area to be operated in the construction area planning image output by the first branch neural network, the area of the newly added basic efficacy operation area in the construction area planning image, and the area of the area to be operated in the construction area planning image.
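
Claim 8's Lx formula is also given only as an image; the named ingredients are the intersection-over-union of the planned areas with the area to be operated and the two areas themselves. The sketch computes the IoU from binary masks and, purely as an assumed form, uses the fraction of the planned area lying outside the area to be operated as the loss.

import numpy as np

def exceed_loss(planned_mask: np.ndarray, work_mask: np.ndarray) -> tuple:
    """Return (IoU, hypothetical Lx). Lx here is the fraction of the newly added
    planned basic efficacy area that falls outside the area to be operated;
    the patent's actual Lx formula appears only as an image."""
    inter = np.logical_and(planned_mask, work_mask).sum()
    union = np.logical_or(planned_mask, work_mask).sum()
    iou = inter / union if union else 0.0
    planned_area = planned_mask.sum()
    lx = 1.0 - inter / planned_area if planned_area else 0.0
    return float(iou), float(lx)

if __name__ == "__main__":
    work = np.zeros((64, 64), dtype=bool); work[:, :40] = True
    plan = np.zeros((64, 64), dtype=bool); plan[16:48, 24:56] = True
    print(exceed_loss(plan, work))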
9. The artificial intelligence based road engineering construction management method according to claim 1, further comprising the steps of:
step S50, obtaining the actual operation tracks of the n operators during the construction process, the actual operation tracks being used to represent the current engineering construction progress;
step S60, comparing the actual operation tracks with the initial predicted road construction track, and judging the construction progress deviation degree from the resulting track deviation; when the construction progress deviation degree is judged to be greater than a set degree, re-determining the image of the unfinished area to be operated according to step S10, repeating steps S20-S40, and updating the predicted road construction track;
and step S70, sending the construction progress deviation degree and the track deviation result to a monitoring platform, wherein the monitoring platform carries out engineering deviation-correction measures according to the received track deviation result and provides engineering construction management suggestions for the current construction progress.
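
Steps S50-S60 compare the actual operation tracks against the initial predicted road construction track, but the claim does not fix the deviation metric. As one hedged possibility, the sketch below measures, for each actual track point, the distance to the nearest predicted track point and averages it; a threshold on that average stands in for the "set degree".

import math
from typing import List, Tuple

Point = Tuple[float, float]

def track_deviation(actual: List[Point], predicted: List[Point]) -> float:
    """Mean distance from each actual-track point to its nearest point on the
    predicted track (an assumed deviation metric; the claim leaves it open)."""
    return sum(min(math.dist(a, p) for p in predicted) for a in actual) / len(actual)

def progress_deviates(actual: List[Point], predicted: List[Point], set_degree: float) -> bool:
    """True when the deviation exceeds the set degree, which in the claim
    triggers re-running steps S10-S40 on the unfinished area."""
    return track_deviation(actual, predicted) > set_degree

if __name__ == "__main__":
    predicted = [(0.0, 0.0), (3.0, 0.5), (6.0, 1.0)]
    actual = [(0.2, 0.4), (2.5, 1.5), (6.5, 0.4)]
    print(track_deviation(actual, predicted), progress_deviates(actual, predicted, set_degree=1.0))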
10. An artificial intelligence based road engineering construction management system, comprising a memory, a processor coupled to the memory, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the road engineering construction management method according to any one of claims 1-9.
CN202111092847.3A 2021-09-17 2021-09-17 Road engineering construction management method and system based on artificial intelligence Active CN113554355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092847.3A CN113554355B (en) 2021-09-17 2021-09-17 Road engineering construction management method and system based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN113554355A CN113554355A (en) 2021-10-26
CN113554355B true CN113554355B (en) 2021-12-03

Family

ID=78134646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092847.3A Active CN113554355B (en) 2021-09-17 2021-09-17 Road engineering construction management method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113554355B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116957308B (en) * 2023-09-21 2023-11-24 武汉市规划研究院 Urban road section planning method and system
CN117372880B (en) * 2023-12-07 2024-02-09 天津市祥途测绘科技有限公司 Road engineering supervision system and method based on remote sensing image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109375633A (en) * 2018-12-18 2019-02-22 河海大学常州校区 River course clear up path planning system and method based on global state information
WO2020219303A1 (en) * 2019-04-26 2020-10-29 Nvidia Corporation Intersection pose detection in autonomous machine applications
CN110977767A (en) * 2019-11-12 2020-04-10 长沙长泰机器人有限公司 Casting defect distribution detection method and casting polishing method
CN110727288A (en) * 2019-11-13 2020-01-24 昆明能讯科技有限责任公司 Point cloud-based accurate three-dimensional route planning method for power inspection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zeli Wang et al. "Construction waste recycling robot for nails and screws: Computer vision technology and neural network approach." Automation in Construction, 2018, Vol. 97, pp. 220-228. *
Wang Liming et al. "Productivity prediction method for road construction machinery based on a fuzzy neural network." Road Machinery & Construction Mechanization (《筑路机械与施工机械化》), 2001, Vol. 18, No. 3, pp. 10-13. *

Also Published As

Publication number Publication date
CN113554355A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN113554355B (en) Road engineering construction management method and system based on artificial intelligence
Bang et al. Image augmentation to improve construction resource detection using generative adversarial networks, cut-and-paste, and image transformation techniques
CN111160469B (en) Active learning method of target detection system
CN110544293B (en) Building scene recognition method through visual cooperation of multiple unmanned aerial vehicles
CN107872644A (en) Video frequency monitoring method and device
CN110070059B (en) Unstructured road detection method based on domain migration
CN115358413A (en) Point cloud multitask model training method and device and electronic equipment
CN112132258A (en) Multi-task learning model construction and optimization method based on deformable convolution
CN110569709A (en) Scene analysis method based on knowledge reorganization
CN113255533B (en) Method for identifying forbidden zone intrusion behavior, storage device and server
CN113449878B (en) Data distributed incremental learning method, system, equipment and storage medium
CN114758180A (en) Knowledge distillation-based light flower recognition method
Idjaton et al. Transformers with YOLO network for damage detection in limestone wall images
US20230315745A1 (en) Information pushing method, apparatus, device, storage medium, and computer program product
CN113657663A (en) Civil engineering construction risk management method and system based on artificial intelligence
CN115454654B (en) Adaptive resource matching obtaining method and device
CN111539401A (en) Lane line detection method, device, terminal and storage medium based on artificial intelligence
CN111445024A (en) Medical image recognition training method
CN110633641A (en) Intelligent security pedestrian detection method, system and device and storage medium
CN104778468A (en) Image processing device, image processing method and monitoring equipment
CN114140551A (en) Expressway bifurcation merging point conjecture method and system based on track image
CN114037856A (en) Identification method based on improved MSDNET and knowledge distillation
CN113159234B (en) Method and device for marking category of inspection picture, electronic equipment and storage medium
CN112668673B (en) Data preprocessing method and device, computer equipment and storage medium
CN115879747B (en) Digital flood prevention drought resistance scheduling method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 226000 room 104, building 19, chengjiayuan, Qinghe Road, high tech Zone, Nantong City, Jiangsu Province

Patentee after: Zhengjin Decoration Group Co.,Ltd.

Country or region after: China

Address before: 226000 room 104, building 19, chengjiayuan, Qinghe Road, high tech Zone, Nantong City, Jiangsu Province

Patentee before: Jiangsu Zhengjin Architectural Decoration Engineering Co.,Ltd.

Country or region before: China