Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Road maintenance refers to the upkeep of roads: the road and the structures and facilities on the road are maintained, the service performance of the road is preserved as far as possible, damaged parts are repaired in time, driving safety, comfort and smoothness are ensured, and transportation cost and time are saved; by adopting correct technical measures, the engineering quality is improved, the service life of the road is prolonged, and reconstruction is postponed.
At present, road maintenance work mainly relies on road maintenance workers who patrol and inspect the road sections, by eye or with machines, to check whether the road contains road diseases that need to be treated. Inspection in this form cannot reflect sudden road damage in time, consumes a large amount of the maintenance workers' effective working time, and wastes the labor cost invested by road maintenance departments.
Based on this, the embodiments of the present application provide a 5G-based road maintenance method and device, which are used to reduce the labor cost of road maintenance and improve the efficiency of road maintenance.
Various embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a road maintenance method based on 5G according to an embodiment of the present application. As shown in fig. 1, the 5G-based road maintenance method provided in the embodiment of the present application may include steps 101-106:
Step 101, the server determines tracing information corresponding to the road image.
The road image is obtained by extracting image data sent by the user terminal through the 5G network. The tracing information includes the position where the road image is captured and the capturing time of the road image.
In the embodiment of the application, according to the operation of a user on the user terminal, the user terminal can send the collected information containing the image data to the server through the 5G network, and the server can analyze the received information containing the image data sent by the user terminal to obtain the image data. Information containing image data includes, but is not limited to: video, pictures, photographs.
If the information sent by the user terminal to the server is a video, the server may process the video, for example by extracting every frame of the video, or by capturing video frames at a preset time interval, to obtain the image data of the video.
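As a non-limiting illustration of the frame extraction described above, the server-side logic might resemble the following Python sketch; the OpenCV dependency, the function name and the one-second default interval are assumptions made for the example and are not part of the original disclosure.

```python
import cv2  # OpenCV, assumed available on the server


def extract_frames(video_path: str, interval_s: float = 1.0) -> list:
    """Capture one frame roughly every `interval_s` seconds from the uploaded video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if FPS metadata is missing
    step = max(int(round(fps * interval_s)), 1)  # number of frames between captures
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)                 # keep this frame as road image data
        index += 1
    cap.release()
    return frames
```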
The 5G network is the fifth-generation mobile communication network, and its theoretical peak transmission speed can reach 20 Gbps, more than ten times the transmission speed of the 4G network. For an individual user, this transmission speed does not have a significant impact on everyday use. However, for transmission between devices and for the processing of data by those devices, the influence of the transmission speed is very large.
In the present application, the 5G network increases the data transmission speed between the user terminal and the server, so that road images can be sent and received promptly, which greatly helps the subsequent determination of road diseases and the maintenance of the road.
When the user terminal sends the image data to the server, the user terminal may automatically or actively provide the shooting time and the shooting position of the image data. The shooting position is the position of the user terminal, not the position of the photographed object in the image acquired by the user terminal; the two positions are generally different. The server uses the shooting time and the shooting position of the image data as the tracing information for use in the subsequent steps of the road maintenance method.
The user terminal may be a mobile phone, a computer, a tablet such as an iPad, or another terminal device, which is not specifically limited in this application.
The server is only an exemplary entity for implementing the 5G-based road maintenance method, and the implementation entity is not limited to the server, which is not specifically limited in this application.
Step 102, the server splices the road images to obtain an image to be identified corresponding to the road images.
In the embodiment of the present application, the road image may include multiple images, for example, a video is sent by the user terminal, and the server may obtain images of multiple video frames from the video as the road image. For another example, the user terminal transmits images taken at a plurality of different angles, and the server takes the images as road images.
In the present application, the multiple images in the road image may be images that capture a common area, for example images of video frames; since a video consists of consecutive frames, a common (overlapping) area exists between the frame images. The present application splices the multiple images to obtain an image to be identified with a larger field of view.
Because the shooting range of the lens is limited, the user terminal may need to shoot a video or multiple images to fully capture the road disease. Therefore, a complete image to be identified, with a wide viewing range and containing the road disease, can be obtained by splicing the multiple images. If the splicing is not performed, the information (the road disease image) contained in a single image is limited, which may cause the road disease subsequently identified by the server to be one-sided. If the server identifies each single image in turn, the amount of computation on the server increases, and the identification results of the individual images may be inconsistent with one another. Splicing therefore saves computation on the server and improves the accuracy with which the server identifies the road image.
In this embodiment of the application, the server may splice the road images by executing the following method, as shown in fig. 2, specifically including the following steps:
Step 201, the server determines the shooting time of each image in the road image.
The server can determine the shooting time of each image in the road image according to the tracing information of the road image. Taking a road image obtained from a video as an example, the images can be ordered in time according to the shooting time of the video and the duration of the video, and the shooting time of each image is then obtained. If the shooting time of the video is A, its duration is B, and the server extracts x images from the video at a preset time interval t, then the shooting time of the i-th image among the x images follows from the shooting start time and the extraction interval.
Here, the shooting time of the video is the shooting start time.
As is known to those skilled in the art, the video shooting time A may instead be the shooting end time; in that case, the shooting time of each image can be obtained by a simple adjustment of the above relation.
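The formula for the shooting time of the i-th image appears only as a drawing in the original filing; the following sketch therefore encodes the relation described in the text under the assumption that the i-th image (counting from 1) lies (i − 1) extraction intervals after the start time A, with a variant for the case where A is the end time.

```python
from datetime import datetime, timedelta


def image_shot_time(start_a: datetime, interval_t: float, i: int) -> datetime:
    """Shooting time of the i-th extracted image (1-indexed) when A is the start time."""
    return start_a + timedelta(seconds=(i - 1) * interval_t)


def image_shot_time_from_end(end_a: datetime, duration_b: float,
                             interval_t: float, i: int) -> datetime:
    """Same relation, adjusted for the case where A is the shooting end time."""
    return end_a - timedelta(seconds=duration_b) + timedelta(seconds=(i - 1) * interval_t)
```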
Step 202, the server sorts the road images according to the shooting time of each image to generate an image sequence to be stitched.
The images are sorted in order of the shooting time of each image obtained in step 201 to obtain the image sequence to be stitched.
Step 203, the server selects an initial stitched image from the image sequence to be stitched.
In the embodiment of the present application, the initial stitched image is the image at the middle position of the image sequence to be stitched. For example, if the image sequence to be stitched is [a, b, c, d, e], the server selects c as the initial stitched image. When two images occupy the middle position, as in [a, b, c, d, e, f], the server may select either of them as the initial stitched image.
Step 204, the server determines, according to the initial stitched image and a preset rule, an image to be stitched that corresponds to the initial stitched image in the image sequence to be stitched as the first stitched image.
After obtaining the initial stitched image, the server determines, according to the preset rule, several images to be stitched on either side of the initial stitched image. For example, if the image sequence to be stitched is [a, b, c, d, e] and c is the initial stitched image, the server determines d and e, or a and b. When the determined images to be stitched are d and e, image d, which is adjacent to the initial stitched image c, is determined as the first stitched image. Similarly, when the determined images to be stitched are a and b, image b is taken as the first stitched image.
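A minimal sketch of steps 203 and 204, assuming the images are already sorted by shooting time; the function names and the choice of the later of two middle images are illustrative.

```python
def select_initial_index(sequence: list) -> int:
    """Index of the middle image of the sequence to be stitched.

    For [a, b, c, d, e] this returns 2 (image c); with an even count, e.g.
    [a, b, c, d, e, f], either middle image may be chosen (here, index 3)."""
    return len(sequence) // 2


def first_stitch_index(initial_idx: int, side: str = "right") -> int:
    """Index of the image adjacent to the initial stitched image on the chosen side."""
    return initial_idx + 1 if side == "right" else initial_idx - 1
```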
Step 205, the server identifies an overlapping region of the initial stitched image and the first stitched image, and performs image fusion on the initial stitched image and the first stitched image according to a pixel value corresponding to the overlapping region and a preset weight formula.
The weight formula is used for expressing the relation between the pixel point in the overlapping area and the boundary of the overlapping area.
The server may identify the overlapping region of the initial stitched image and the first stitched image, which may be determined by an image registration technique. After the overlapping region is determined, the pixel values in the overlapping region are fused according to the preset weight formula:

w = (d_max − d) / (d_max − d_min)

wherein w is the weight, d_max is the maximum value of the boundary of the overlapping region, d_min is the minimum value of the boundary of the overlapping region, and d is the position of the pixel point in the overlapping region. Taking a plane coordinate system as an example, the minimum value of the boundary of the overlapping region is 3, the maximum value of the boundary of the overlapping region is 6, and the position of the pixel point is 4.

The weight of each pixel point in the overlapping region is determined according to the weight formula, and the pixel value of each pixel point in the stitched overlapping region is then determined according to the following formula:

Q(x, y) = Q1(x, y),                       (x, y) ∈ R1
Q(x, y) = w·Q1(x, y) + (1 − w)·Q2(x, y),  (x, y) ∈ R2
Q(x, y) = Q2(x, y),                       (x, y) ∈ R3

wherein Q(x, y) is the pixel value of the pixel point at position (x, y) in the overlapping region after image fusion, Q1(x, y) is the pixel value of the pixel point at position (x, y) of the initial stitched image before image fusion, Q2(x, y) is the pixel value of the pixel point at position (x, y) of the first stitched image before image fusion, R1 is the region belonging to the initial stitched image, R2 is the overlapping region of the initial stitched image and the first stitched image, and R3 is the region belonging to the first stitched image.
The server performs image fusion on the initial stitched image and the first stitched image through the above formulas, which achieves a gradual transition from the initial stitched image to the first stitched image and yields a good stitching result.
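As a sketch of the fusion in step 205, the overlap can be blended with the linear fade weight reconstructed above; since the weight formula is reproduced only as a drawing in the original filing, its exact form here is an assumption (with d_min = 3, d_max = 6 and d = 4 it gives a weight of 2/3).

```python
import numpy as np


def fuse_overlap(q1: np.ndarray, q2: np.ndarray, d: np.ndarray,
                 d_min: float, d_max: float) -> np.ndarray:
    """Blend the overlapping region of the initial stitched image (q1) and the
    first stitched image (q2).

    q1, q2: (H, W, 3) pixel values of the overlap before fusion.
    d:      (H, W) position of each pixel between the overlap boundaries."""
    w = (d_max - d) / (d_max - d_min)      # assumed linear weight formula
    w = np.clip(w, 0.0, 1.0)[..., None]    # broadcast over the color channels
    return w * q1 + (1.0 - w) * q2         # gradual transition from q1 to q2
```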
Step 206, the server takes the image-fused initial stitched image and first stitched image as the new initial stitched image and determines the next first stitched image, until the number of times the first stitched image has been determined is greater than or equal to a third preset threshold, thereby obtaining a stitched image of the road images.
After the server completes the image fusion of the initial stitched image and the first stitched image, it updates the fused initial stitched image and first stitched image as the new initial stitched image and determines the first stitched image again. When determining the first stitched image again, the server may follow the rule used the last time the first stitched image was determined.
For example, if the image to be stitched d was selected last time in step 204, image e can be taken as the first stitched image this time. In the present application, the server may first finish stitching the images to be stitched on one side of the initial stitched image with the initial stitched image, and then stitch the images to be stitched on the other side with the initial stitched image. By stitching the two sides of the initial stitched image in turn in this way, the overlap rate can be increased. For example, if an overlapping area exists between the images to be stitched b and d, then once image d on one side has been stitched, the overlapping area between image b and the initial stitched image increases, which raises the overlap rate.
This continues until the number of times the first stitched image has been determined is greater than or equal to the third preset threshold. The third preset threshold may be set to the number of road images minus 1, so that all of the road images are stitched, or it may be set to an optimal value according to the actual stitching conditions, which is not specifically limited herein.
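Putting step 206 together, the outward stitching loop might be sketched as follows, assuming a hypothetical helper stitch_pair(composite, image) that registers the next image against the current composite and fuses the overlap (for example with fuse_overlap above), and a threshold equal to the number of road images minus 1.

```python
def stitch_sequence(images: list, stitch_pair):
    """Stitch a time-ordered image sequence outward from its middle image."""
    mid = len(images) // 2
    composite = images[mid]                    # initial stitched image
    threshold = len(images) - 1                # third preset threshold: stitch everything
    count = 0
    # finish one side of the initial stitched image first, then the other side
    for idx in list(range(mid + 1, len(images))) + list(range(mid - 1, -1, -1)):
        composite = stitch_pair(composite, images[idx])
        count += 1
        if count >= threshold:
            break
    return composite
```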
Step 207, the server takes the stitched image as the image to be identified.
In the process of determining the image to be identified, the above scheme can reduce the stitching error between images during image stitching. Using the middle image as the initial stitched image reduces the image-to-image distances that need to be considered during stitching, and thus the computational complexity of the server in the stitching process; by stitching the images on the two sides of the initial stitched image in turn, the overlap rate during stitching can be increased, achieving a better stitching result.
Step 103, the server determines, according to the tracing information, whether a matching comparison image exists for the image to be identified.
The comparison image is an image of a road section.
After obtaining the image to be identified, the server can determine whether a matching comparison image exists for it according to the tracing information of the road images from which the image to be identified was stitched. In the present application, whether a matching comparison image exists may be determined by performing the following method, specifically as follows:
First, the server determines, according to the tracing information, an undetermined area centered on the shooting position and with a preset distance as its radius.
The server determines the shooting position of each road image corresponding to the image to be identified according to the tracing information, and determines an undetermined area that takes the shooting position as its center and a preset distance as its radius. The undetermined area may contain a single road or multiple roads.
When the road images corresponding to the image to be identified come from different user terminals, the geometric center of the shooting positions of those road images can be determined from the respective shooting positions, and the geometric center is taken as the center of the circle of the undetermined area.
The server then determines a number of candidate road sections within the undetermined area.
The server determines the road sections contained in the undetermined area; for example, the server takes the road sections of the several transverse trunk roads contained in the undetermined area as candidate road sections.
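A minimal sketch of how the undetermined area and the candidate road sections might be computed from the shooting positions; the 500 m radius, the haversine distance and the road-section records are assumptions for the example.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (latitude, longitude) points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def geometric_center(positions):
    """Average of the shooting positions, used as the centre of the undetermined area."""
    lats, lons = zip(*positions)
    return sum(lats) / len(lats), sum(lons) / len(lons)


def candidate_sections(positions, road_sections, radius_m=500.0):
    """Road sections whose reference point lies inside the undetermined area.

    `road_sections` is a hypothetical list of dicts such as
    {"id": "section-i", "lat": ..., "lon": ...}."""
    clat, clon = geometric_center(positions)
    return [s for s in road_sections
            if haversine_m(clat, clon, s["lat"], s["lon"]) <= radius_m]
```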
Next, the server acquires historical road images of each candidate road section.
The server may acquire a pre-stored historical road image, or acquire the historical road image through the internet, and the acquisition mode of the historical road image is not particularly limited in the present application.
In the embodiment of the present application, the shooting time of the historical road image is close to the shooting time of the road images from which the image to be identified was stitched; as is known to those skilled in the art, the closer the shooting time of the historical road image is to the shooting time of those road images, the higher the accuracy of the data.
And then, the server inputs the image to be recognized into the second image recognition model to obtain the background feature of the image to be recognized.
In this embodiment, the server may train a second image recognition model in advance before determining the comparison image, where the second image recognition model takes each road image as sample data, performs training, and outputs an image background feature of each road image.
After the server inputs the image to be identified into the second image recognition model, the second image recognition model processes the image to be identified and outputs the background feature of the image to be identified.
Then, the server calculates a feature comparison value between the background feature of the image to be identified and the features of the historical road images.
The server can generate a feature vector for the background feature of the image to be identified and feature vectors for the features of the historical road images, and calculate the similarity between them as the feature comparison value.
Finally, the server takes a historical road image whose feature comparison value is larger than a first preset threshold as the comparison image.
The server may compare the calculated feature comparison values with a first preset threshold, and determine a historical road image corresponding to the feature comparison value greater than the first preset threshold as a comparison image. If the server determines that two or more feature comparison values satisfy the condition that the feature comparison values are larger than the first preset threshold, the server may further determine, as the comparison image, the historical road image corresponding to the feature comparison value with the largest value among the plurality of feature comparison values.
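The text only states that a similarity between feature vectors is used as the feature comparison value; assuming cosine similarity for the sake of illustration, the selection of the comparison image might look like this.

```python
import numpy as np


def pick_comparison_image(query_feature: np.ndarray, history_features: list,
                          first_threshold: float) -> int:
    """Index of the historical road image chosen as the comparison image,
    or -1 if no feature comparison value exceeds the first preset threshold."""
    best_idx, best_score = -1, first_threshold
    q = query_feature / (np.linalg.norm(query_feature) + 1e-12)
    for i, feature in enumerate(history_features):
        h = feature / (np.linalg.norm(feature) + 1e-12)
        score = float(np.dot(q, h))          # cosine similarity as the comparison value
        if score > best_score:               # keep the largest value above the threshold
            best_idx, best_score = i, score
    return best_idx
```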
In the embodiment of the application, the road section corresponding to the image to be recognized can be more accurately and efficiently determined by determining the comparison image, and the road maintenance personnel are not required to further confirm the position of the road section to which the image to be recognized belongs. By the scheme, the efficiency of road maintenance is further improved, and the consumption of human resources is saved.
Step 104, when a matching comparison image exists for the image to be identified, the server inputs the image to be identified into a preset first image recognition model for recognition, so as to determine the road disease label in the image to be identified.
The road disease label comprises at least one or more of the following: cracks, network cracks, pits, ruts, subsidence.
According to the embodiment of the application, the road disease label of the image to be identified can be identified through the first image identification model. The method comprises the following specific steps:
firstly, the server inputs the image to be recognized into the feature extraction module of the first image recognition model through the input layer so as to obtain the feature image corresponding to the image to be recognized.
The server can identify, through the first image recognition model, images containing road diseases in the image to be identified, such as images of cracks, network cracks, pits, ruts and subsidence. The feature extraction module may adopt a VGG (Visual Geometry Group) model or a residual network model. In the embodiment of the present application, as roads evolve, more identifiable disease labels may be added to the road disease labels.
Then, the server inputs the feature image into the region proposal network module of the first image recognition model to determine candidate regions of the feature image.
The candidate regions comprise road disease regions and non-road disease regions.
The feature image is processed by a region proposal network (RPN): the RPN takes the extracted feature image as input, processes each point in the feature image, and outputs candidate regions. The candidate regions may include road disease regions and non-road disease regions.
Next, the server inputs the candidate regions into the region-of-interest module of the first image recognition model for regression processing, so as to output the road disease regions of the image to be identified.
The server performs pooling on the obtained candidate regions using the region-of-interest module, and then performs classification and precise regression of the boundary positions of the pooled candidate regions, so as to determine the road disease regions of the image to be identified.
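The pipeline described in these steps (a feature extraction backbone, a region proposal network and a region-of-interest head) corresponds to a Faster R-CNN style detector. A non-limiting sketch, assuming torchvision 0.13 or later; the label list, the score threshold and the re-headed model are illustrative, and the model would still have to be trained on road disease samples.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical label set; index 0 is reserved for the background class.
ROAD_DISEASE_LABELS = ["background", "crack", "network crack", "pit", "rut", "subsidence"]


def build_detector(num_classes: int = len(ROAD_DISEASE_LABELS)):
    """Faster R-CNN style detector: ResNet backbone + RPN + RoI head."""
    return fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                   num_classes=num_classes)


def detect_road_diseases(model, image: torch.Tensor, score_thresh: float = 0.5):
    """Run one (C, H, W) image with values in [0, 1] through the detector and
    map the predicted class indices back to road disease labels."""
    model.eval()
    with torch.no_grad():
        output = model([image])[0]
    return [(ROAD_DISEASE_LABELS[int(label)], box.tolist(), float(score))
            for box, label, score in zip(output["boxes"], output["labels"], output["scores"])
            if score >= score_thresh]
```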
And finally, the server determines a road disease label in the image to be identified according to the road disease area.
The server compares the road disease area of the image to be recognized with a plurality of sample images for training the first image recognition model, and determines a road disease label corresponding to the road disease area. Or,
the server sends the road disease area of the image to be identified to the user terminals through the 5G network, and road disease labels corresponding to the road disease area are determined according to the first feedback information of the user terminals.
In actual use, there are various ways of determining the road disease label corresponding to the road disease region. The two ways above are provided as references, and the present application does not specifically limit other embodiments, provided that the road disease label corresponding to the road disease region can be determined.
In addition, the server can send the road disease region of the image to be identified to a plurality of user terminals through the 5G network, so that users can confirm whether the identified road disease region is correct and whether the determined road disease label is accurate. Based on the users' operations on the user terminals, the road disease region and road disease label confirmed by the users can be input into the first image recognition model to retrain it, and the road disease label sent by the user terminal is used as the road disease label of the image to be identified.
Through this scheme, the road disease image contained in the image to be identified can be accurately identified, providing reliable data support for subsequent road maintenance. In addition, the first image recognition model is a neural network model; an R-CNN may be adopted, or other neural network models may be used. The second image recognition model is also a neural network model, which may be the same as or different from the first image recognition model; the specific models chosen do not directly affect the scheme.
The above scheme determines the road disease label when a matching comparison image exists for the image to be identified; when no matching comparison image exists, the road disease label can be determined by the following method.
Specifically, when the image to be recognized does not have a matching comparison image, the server first sends prompt information to the user terminal corresponding to the image to be recognized through the 5G network, so as to determine the road section corresponding to the image to be recognized based on the second feedback information of the user terminal, and uses the image to be recognized as the comparison image.
Wherein, the prompt message at least comprises one or more of the following items: pictures, words, sounds.
The server may send prompt information to the user terminal that sent the image to be identified, for example asking which road section the image corresponds to; the user, who is on the road section corresponding to the image, can operate the user terminal and respond to the prompt information sent by the server. The second feedback information may be text, a picture or sound, for example the text: road A, lane B, and so on.
Then, the image to be recognized of the determined road section is input into the first image recognition model so as to determine the road disease label of the image to be recognized.
After the road section of the image to be identified sent by the user terminal is determined, the user terminal can be given corresponding rewards, and the user is encouraged to shoot the road image containing the road diseases. The method comprises the following specific steps:
and after determining the road disease label in the image to be identified, the server generates the contribution degree corresponding to the road disease label.
The server updates the contribution degree of the acquisition terminal (the user terminal that acquired the image) according to this contribution degree, and sends the goods that the updated contribution degree can be exchanged for to the acquisition terminal.
In the present application, the server may record the contribution degree of each user terminal. For example, a user terminal sends an image to be identified with road disease label A to the server; after the server confirms the image, the recorded contribution degree of that user terminal may be updated. The update may be an increase: for example, if the current contribution degree of user terminal A is 90, then after the server confirms the image to be identified with road disease label A, the contribution degree is updated to 95.
In another embodiment of the present application, if the server determines that a user terminal has sent road images that do not contain a road disease more than a certain number of times, the server may reduce the contribution degree of that user terminal.
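A sketch of how the contribution degree bookkeeping described above might be implemented; the reward of 5 points, the penalty and the limit on disease-free images are illustrative numbers only.

```python
def update_contribution(scores: dict, terminal_id: str, has_disease: bool,
                        bad_counts: dict, bad_limit: int = 3,
                        reward: int = 5, penalty: int = 5) -> int:
    """Update the recorded contribution degree of a user terminal.

    A confirmed road-disease image raises the score (e.g. 90 -> 95); sending
    images without any road disease more than `bad_limit` times lowers it."""
    scores.setdefault(terminal_id, 0)
    if has_disease:
        scores[terminal_id] += reward
        bad_counts[terminal_id] = 0
    else:
        bad_counts[terminal_id] = bad_counts.get(terminal_id, 0) + 1
        if bad_counts[terminal_id] > bad_limit:
            scores[terminal_id] -= penalty
    return scores[terminal_id]
```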
In the embodiment of the application, after the road disease label corresponding to the image to be identified is determined, the severity of the road disease can be further determined, and a corresponding response can be made to the disease in time.
Specifically, the server generates, according to the road disease label, a disease level and a disease level reference map corresponding to the road disease label, and sends them to the user terminal.
Then, the server generates a road maintenance priority for the road disease label according to the disease level selected on the user terminal and/or with reference to the disease level reference map.
And finally, the server sequentially sends the road disease labels to the corresponding maintenance management terminals according to the road maintenance priority, so that the road maintenance personnel corresponding to the maintenance management terminals sequentially carry out maintenance work on the road diseases.
The road maintenance priority is generated according to the disease level of the road disease, so that road maintenance personnel can be helped to select the road disease with higher severity degree for treatment, the influence of the serious road disease on traffic safety is reduced, and the serious traffic safety problem is avoided. Meanwhile, the working efficiency of the road maintenance personnel is also improved. The road disease grade is confirmed by the user terminal, so that road maintenance procedures can be saved, procedures such as in-person confirmation of road maintenance personnel and the like are not needed, and labor waste is saved to a certain extent.
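For illustration only, dispatching road disease labels by priority could be as simple as the following; the report structure, the numeric 'level' field and the send callback are hypothetical.

```python
def dispatch_in_priority(reports: list, send) -> None:
    """Send road disease reports to the maintenance management terminal in
    descending disease level, so the most severe diseases are handled first.

    Each report is a dict such as {"label": "pit", "level": 3}; `send(report)`
    stands for the push to the maintenance management terminal."""
    for report in sorted(reports, key=lambda r: r["level"], reverse=True):
        send(report)
```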
Step 105, the server generates road maintenance information based on the road disease label, the tracing information and the comparison image.
The road maintenance information comprises maintenance time and a construction mode.
The road maintenance information may be generated through the following steps:
and 301, the server determines the maintenance operation time of the road to be maintained corresponding to the road defect label according to the road defect label.
After the road damage label is determined, the server may determine, according to the road damage label, the required maintenance operation time for the road maintenance worker to treat the road damage corresponding to the road damage label through the 5G network or the pre-stored operation time comparison table.
Step 302, the server determines the road to be maintained according to the shooting position, the shooting time and the comparison image, and determines a passing frequency curve of the road section at the shooting position.
According to the shooting position and the comparison image, the road section at the shooting position, such as section i, can be determined. The predicted traffic frequency curve of section i after the shooting time can then be determined from the recorded historical traffic data of section i according to the shooting time of the road image; as shown in fig. 4, the y-axis is the traffic frequency and the x-axis is time.
Step 303, the server determines a time period in the traffic frequency curve in which the traffic frequency is smaller than a second preset threshold as the maintenance time to be determined.
According to the traffic frequency curve, the server may determine the maintenance time to be determined. As shown in fig. 4, the dotted line is the traffic frequency corresponding to the second preset threshold; the time corresponding to the part of the traffic frequency curve between the dotted line and the x-axis is the maintenance time to be determined.
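Assuming the traffic frequency curve is available as a list of (time, frequency) samples, the low-traffic periods of step 303 can be extracted as follows; the sampling representation is an assumption for the sketch.

```python
def maintenance_windows(curve, second_threshold):
    """Periods of the traffic frequency curve that lie below the second preset threshold.

    `curve` is a list of (time, frequency) samples in chronological order;
    returns (start, end) pairs of consecutive below-threshold samples."""
    windows, start, prev_t = [], None, None
    for t, freq in curve:
        if freq < second_threshold and start is None:
            start = t                               # a low-traffic window begins
        elif freq >= second_threshold and start is not None:
            windows.append((start, prev_t))         # the window ended at the previous sample
            start = None
        prev_t = t
    if start is not None:
        windows.append((start, prev_t))
    return windows
```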
Step 304, the server matches the maintenance time to be determined with the maintenance operation time to obtain the maintenance time.
The server can match the maintenance time to be determined against the maintenance operation time. For example, suppose the maintenance operation time is t1 and the maintenance time to be determined consists of several periods of different lengths, such as t2, t3 and t4, where t2 is less than t1, t3 is greater than t1 and t4 is less than t1; the server can then use the period t3 as the maintenance time.
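Matching the maintenance time to be determined against the maintenance operation time then amounts to picking a window long enough for the work, mirroring the t1/t3 example above; the window representation follows the sketch after step 303.

```python
def choose_maintenance_time(windows, operation_time):
    """Pick the first low-traffic window at least as long as the maintenance
    operation time (e.g. of t2 < t1, t3 > t1, t4 < t1, the period t3 is chosen)."""
    for start, end in windows:
        if end - start >= operation_time:
            return start, end
    return None                                     # no suitable low-traffic window found
```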
Step 305, the server generates the road maintenance information according to the maintenance time and the construction mode corresponding to the road disease label.
The server can obtain a corresponding construction mode through a 5G network according to the road disease label, and generate road maintenance information according to the maintenance time and the obtained construction mode.
Step 106, the server sends the road maintenance information to the corresponding maintenance management terminal through the 5G network.
The maintenance management terminal may correspond to a road maintenance department, and may also be a hand-held terminal of a patrol officer of the road maintenance department, or the like. The specific type of the maintenance management terminal is not specifically limited herein.
In the present application, the image data sent by the user terminal is received through the 5G network, the road image is determined, and the shooting position and shooting time of the road image are determined. The road images are spliced to obtain the image to be identified, and the road disease label corresponding to the image to be identified is then determined. Road maintenance information is generated according to the road disease label and sent to the maintenance management terminal. In this way, users can participate in road maintenance work, which reduces the workload of road maintenance departments and lowers labor costs. Receiving and sending data over the 5G network provides the information needed in road maintenance work more promptly, and avoids serious traffic safety accidents caused by road maintenance information that arrives too late.
In addition, using the first image recognition model to identify road diseases avoids the one-sidedness introduced by human judgment of the diseases and helps ensure the accuracy of road disease identification. Accurate identification of road diseases can, to a certain extent, speed up road maintenance work.
Fig. 5 is a schematic structural diagram of a 5G-based road maintenance device 500 provided in an embodiment of the present application, where the device includes:
at least one processor 501, and a memory 502 communicatively connected to the at least one processor 501, wherein the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 to enable the at least one processor 501 to:
determine tracing information corresponding to a road image, wherein the road image is obtained by extracting image data sent by a user terminal through a 5G network, and the tracing information includes the position where the road image was captured and the capturing time of the road image; splice the road images to obtain an image to be identified corresponding to the road images; determine, according to the tracing information, whether a matching comparison image exists for the image to be identified, wherein the comparison image is an image of a road section; when a matching comparison image exists for the image to be identified, input the image to be identified into a preset first image recognition model for recognition so as to determine the road disease label in the image to be identified, wherein the road disease label comprises at least one or more of the following: cracks, network cracks, pits, ruts, subsidence; generate road maintenance information based on the road disease label, the tracing information and the comparison image, wherein the road maintenance information comprises the maintenance time and the construction mode; and send the road maintenance information to the corresponding maintenance management terminal through the 5G network.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The devices and the methods provided by the embodiment of the application are in one-to-one correspondence, so the devices also have beneficial technical effects similar to the corresponding methods.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.