CN115880260A - Method, device and equipment for detecting base station construction and computer readable storage medium - Google Patents

Method, device and equipment for detecting base station construction and computer readable storage medium

Info

Publication number
CN115880260A
Authority
CN
China
Prior art keywords
image
construction
base station
detection
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211624085.1A
Other languages
Chinese (zh)
Inventor
李宁
白国涛
孙昊
李天波
赵永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202211624085.1A
Publication of CN115880260A
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a detection method, apparatus and device for base station construction, and a computer-readable storage medium. The detection method for base station construction comprises the following steps: pre-storing a trained construction result detection model; receiving a first image of a construction site of a base station; and detecting the first image based on the construction result detection model to obtain target content information in the first image, and determining, according to the target content information, whether the construction of the base station meets the construction specification. In this detection method, the construction result detection model is trained in advance based on artificial intelligence technology, and the quality of the target content information is checked by the construction result detection model, so that the accuracy of the detection result is improved, the low working efficiency of manual judgment is overcome, and the quality inspection efficiency of base station construction engineering is improved.

Description

Method, device and equipment for detecting base station construction and computer readable storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for detecting base station construction.
Background
With the large-scale construction of 4G and 5G networks, every base station supervisor needs to strictly examine the construction of 4G and 5G base stations, but it is difficult for a limited number of supervisors to perform comprehensive quality inspection on large-scale base station construction, and quality inspection of the numerous 4G and 5G antennas installed at height is especially difficult.
In the prior art, a supervisor judges by visual inspection, according to his or her own experience, whether the construction quality of the antenna installation of each base station meets the installation standard, but such manual judgment criteria cannot guarantee scientific rigor and objectivity. Moreover, the base station construction engineering system produces thousands of construction pictures to be audited by supervision personnel every day; full quality inspection is difficult to achieve manually, real-time detection is even more difficult, and as a result the feedback on overall engineering quality inspection is slow, the feedback period is long, and the working efficiency is low.
Disclosure of Invention
The embodiments of the present application provide a detection method, apparatus, and device for base station construction, and a computer-readable storage medium, which can automatically perform full quality inspection of base station construction based on a construction result detection model.
In a first aspect, an embodiment of the present application provides a detection method for base station construction, which is applied to a central processing platform, and the detection method includes: pre-storing the trained construction result detection model; receiving a first image of a construction site of a base station; and detecting the first image based on the construction result detection model to obtain target content information in the first image, and determining whether the construction of the base station meets the construction specification or not according to the target content information.
According to an embodiment of the first aspect of the present application, after receiving the first image of the construction site of the base station, the detection method further comprises: performing preliminary examination on the first image, wherein the preliminary examination comprises at least one of image blurring detection, image repeatability detection, image shooting angle detection and detection of whether the image is shielded; detecting the first image based on the construction result detection model to obtain target content information in the first image, which specifically includes: the construction result detection model is a machine learning model constructed based on a target detection algorithm, under the condition that the first image is qualified in preliminary examination, the construction result detection model detects a target area and a target type in the first image qualified in preliminary examination based on the target detection algorithm, and identifies target content information in the first image according to the target area and the target type.
According to any of the preceding embodiments of the first aspect of the present application, the image blur detection comprises the following steps: detecting an image edge of the first image based on a Laplace algorithm, and calculating a variance corresponding to the image edge; when the variance is greater than or equal to a first preset threshold value, determining that the blur detection result of the first image is qualified; and when the variance is smaller than the first preset threshold value, determining that the blur detection result of the first image is unqualified.
According to any one of the preceding embodiments of the first aspect of the present application, the image repeatability detection comprises the following steps: extracting corner points of each image in the first image based on an image similarity detection algorithm; for any two images, matching the corner points in the first image with the corner points in the second image based on a target matching algorithm, and recording the number of matched corner points; when the number of matched corner points is greater than or equal to a second preset threshold value, determining that the first image and the second image are duplicates; and when the number of matched corner points is smaller than the second preset threshold value, determining that the first image and the second image are not duplicates.
According to any one of the previous embodiments of the first aspect of the present application, the first image has a watermark representing watermark identification time of the first image, a location of the base station, and item information of the base station; before the first image is detected based on the construction result detection model to obtain the target content information in the first image, the detection method further comprises the following steps: detecting whether the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct or not according to the current time and the pre-stored project information; detecting the first image based on the construction result detection model to obtain target content information in the first image, which specifically includes: under the conditions that the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct and the first image is qualified in preliminary examination, the construction result detection model detects a target area and a target type in the first image qualified in preliminary examination based on a target detection algorithm and identifies target content information in the first image according to the target area and the target type.
According to any one of the foregoing embodiments of the first aspect of the present application, determining whether construction of a base station meets a construction specification according to target content information specifically includes: for the first image qualified by the preliminary examination, detecting whether the type of the base station can be identified in the first image, detecting whether the distance between the top of the holding pole and the top of the antenna box in the first image and/or the height of the antenna box in the first image is smaller than a third preset threshold value, and detecting whether the lightning rod is installed on the base station; the base station type can be identified in the first image, the distance between the top of the pole and the top of the antenna box in the first image and/or the height of the antenna box in the first image are/is smaller than a third preset threshold value, and under the condition that the lightning rod is installed in the base station, the construction of the base station is determined to be in accordance with the construction specification.
According to any one of the foregoing embodiments of the first aspect of the present application, before the trained construction result detection model is stored in advance, the detection method further includes: obtaining a sample image of a sample base station construction site; turning over the sample image; summarizing the sample image before overturning and the sample image after overturning to obtain a training data set; and training the construction result detection model based on the training data set to obtain the trained construction result detection model.
In a second aspect, an embodiment of the present application provides a detection apparatus for base station construction, which is applied to a central processing platform, and the detection apparatus includes: the storage module is used for pre-storing the trained construction result detection model; the receiving module is used for receiving a first image of a construction site of the base station; and the detection module is used for detecting the first image based on the construction result detection model to obtain target content information in the first image and determining whether the construction of the base station meets the construction specification or not according to the target content information.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes: a processor, a memory and a computer program stored on the memory and being executable on the processor, the computer program, when executed by the processor, implementing the steps of the detection method of base station construction as provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the detection method for base station construction provided in the first aspect.
The detection method, apparatus, device and computer-readable storage medium for base station construction of the embodiments of the present application pre-store a trained construction result detection model; receive a first image of a construction site of a base station; and detect the first image based on the construction result detection model to obtain target content information in the first image, determining, according to the target content information, whether the construction of the base station meets the construction specification. The construction result detection model is trained in advance based on artificial intelligence technology, the target content information requiring quality inspection is determined in the first image by the construction result detection model, and all-round intelligent detection is performed on the target content information. Because the construction result detection model is trained using artificial intelligence technology, using it to check the construction quality of the base station makes the quality detection result more accurate; replacing the manual approach with artificial intelligence also prevents subjective manual judgment criteria from having a decisive influence on the detection result, further improving its accuracy. Detecting the construction quality of the base station based on the construction result detection model effectively solves the problem of low efficiency of manual judgment and improves the quality inspection efficiency of base station construction engineering, thereby further achieving full quality inspection of base station construction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a detection method for base station construction according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a convolutional neural network provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of another detection method for base station construction according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a detection apparatus for base station construction according to an embodiment of the present disclosure;
fig. 5 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application cover the modifications and variations of this application provided they come within the scope of the corresponding claims (the claimed technology) and their equivalents. It should be noted that the embodiments provided in the embodiments of the present application can be combined with each other without contradiction.
Before explaining the technical solutions provided in the embodiments of the present application, in order to facilitate understanding of the embodiments of the present application, the present application first specifically describes the problems existing in the related art:
as described above, the inventor of the present application finds that in the related art, quality detection of base station construction requires that a constructor take a picture of construction completion on site, and upload the picture to a management department periodically, and the management department audits the picture by means of manual sampling and feeds back the audit result to the constructor. However, the number of the supervision personnel is limited, and it is difficult to check the actual situation of each picture of the construction service site working point of each base station environment and the actual situation of construction. Firstly, because the photos uploaded by constructors have the problems of unclear photo materials, repeated photos and no watermark identification time, position, affiliated project and other information of the photo materials, the auditing difficulty of supervision personnel is increased; secondly, because the auditing workload of the supervision personnel is large, six to eight photos can be reserved for each construction of each construction business site, and more than thousands of photos can be reserved for the existing scene every day; thirdly, because the auditing time of the proctoring personnel is long, multiple detections are needed for each photo, the time consumption for detecting a single photo is long, and the repeated detection of the photos cannot be realized by manpower; and fourthly, because the auditing proportion of the supervision personnel is low, because the number of the photos is too large, sampling detection can only be carried out by manpower, and problems existing in the construction business field can be missed.
In addition, after the supervision personnel feed the audit result back to the constructor, the constructor determines whether the site needs rectification according to the audit result. If rectification is needed, the constructor must return to the site a second time and upload the rectified photos to the management department again, after which the supervision personnel audit them once more by manual sampling and feed the result back again. The whole auditing process is excessively drawn out and repetitive, the feedback speed is slow, the period is long, and the working efficiency is low, which cannot meet the demand of the rapidly growing base station installation business.
In view of the above research by the inventors, embodiments of the present application provide a method, an apparatus, a device, and a computer-readable storage medium for detecting base station construction, which can solve the technical problems that manual review cannot achieve full quality inspection of base station construction, feedback speed is slow, feedback period is long, and working efficiency is low in the related art.
In order to solve the problem of the prior art, embodiments of the present application provide a method, an apparatus, a device, and a computer-readable storage medium for detecting base station construction.
First, a method for detecting base station construction provided in the embodiment of the present application is described below.
Fig. 1 is a schematic flowchart of a detection method for base station construction according to an embodiment of the present disclosure. As shown in fig. 1, the method may include the following steps S101 to S103.
And S101, pre-storing the trained construction result detection model.
The base station construction quality detection system establishes a construction result detection model in advance based on an artificial intelligence technology, trains the model, and stores the trained model on an artificial intelligence central processing platform.
S102, receiving a first image of a construction site of the base station.
A constructor photographs the completed construction on site with a terminal device at the construction site of the base station. Built-in software of the terminal device automatically adds a watermark to the photo identifying information such as the time, the position of the base station, and the project information of the base station. The constructor uploads the watermarked photo to the artificial intelligence central processing platform, and the base station construction quality detection system receives the watermarked photo through the artificial intelligence central processing platform and uses it as the first image of the construction site of the base station.
S103, detecting the first image based on the construction result detection model to obtain target content information in the first image, and determining whether the construction of the base station meets the construction specification according to the target content information.
The base station construction quality detection system detects the first image based on the construction result detection model, identifies the target content information to be checked in the first image, detects whether the target content information meets the construction specification, and returns a detection result at the millisecond level.
According to the detection method for base station construction, a trained construction result detection model is stored in advance; a first image of a construction site of a base station is received; and the first image is detected based on the construction result detection model to obtain target content information in the first image, and whether the construction of the base station meets the construction specification is determined according to the target content information. The construction result detection model is trained in advance based on artificial intelligence technology, the target content information requiring quality inspection is determined in the first image by the construction result detection model, and all-round intelligent detection is performed on the target content information. Because the construction result detection model is trained using artificial intelligence technology, using it to check the construction quality of the base station makes the quality detection result more accurate; replacing the manual approach with artificial intelligence also prevents subjective manual judgment criteria from having a decisive influence on the detection result, further improving its accuracy. Detecting the construction quality of the base station based on the construction result detection model effectively solves the problem of low efficiency of manual judgment and improves the quality inspection efficiency of base station construction engineering, thereby further achieving full quality inspection of base station construction.
In some embodiments, after receiving the first image of the job site of the base station, the detection method further comprises: and performing preliminary examination on the first image, wherein the preliminary examination comprises at least one of image blurring detection, image repeatability detection, image shooting angle detection and image occlusion detection.
Illustratively, the base station construction quality detection system performs a preliminary examination of the first image based on multi-dimensional artificial intelligence picture processing technology. It judges whether the first image is blurred based on a Fast Fourier Transform (FFT) algorithm and a Laplacian algorithm; judges whether the first image is repeated based on an image similarity detection algorithm and a target matching algorithm, performing a full-library duplicate check in the live-action photo library of base station construction sites; and labels the first image with the image labeling tool labelImg, comparing the size of a specific area in the first image with the size of a reference object to judge whether the shooting angle of the first image is normal and whether the first image is occluded. Unqualified photos with problems are returned, the constructors are informed of the reasons for disqualification, and they photograph and upload again. This reduces invalid images in the full quality inspection, raises the constructors' awareness of the specifications, guarantees the upload quality of the photos, makes the auditing more targeted, and improves quality inspection efficiency.
In some embodiments, detecting the first image based on the construction result detection model to obtain the target content information in the first image specifically includes: the construction result detection model is a machine learning model constructed based on a target detection algorithm, under the condition that the first image is qualified in preliminary examination, the construction result detection model detects a target area and a target category in the first image qualified in preliminary examination based on the target detection algorithm, and identifies target content information in the first image according to the target area and the target category.
Illustratively, in terms of model algorithms, the construction result detection model combines a number of artificial intelligence target detection algorithms and involves several processing methods such as target extraction, regional target localization and photo enhancement; overall detection is achieved after integrating multi-dimensional deep learning models. The base station construction quality detection system detects a target area and a target type in the first image based on the target detection algorithm of the construction result detection model, and identifies the target content information in the first image. Target detection algorithms are mainly divided into two types, one-stage and two-stage; considering actual production and construction requirements, the embodiment of the present application selects the one-stage YOLOv3 algorithm, whose accuracy meets production and construction requirements and whose speed is better than that of two-stage algorithms.
According to the target detection algorithm, the first image is divided into an S x S grid, and the grid cell in which the center of a target falls is responsible for predicting that object. Different filters perform dot-product calculations over the image to obtain different convolution features; the deeper the convolution, the larger the receptive field of the convolution features, i.e., the more image features are captured and the richer the semantic information. Candidate boxes of different sizes are preset, and through regression calculation the candidate boxes continuously approach the target position. There are nine candidate box sizes, meeting the requirements of objects of different sizes and shapes. Downsampling and upsampling are combined to fuse feature maps of multiple scales so as to obtain different receptive field information; feature fusion at three scales provides detection capability for small, medium and large targets respectively. Finally, each grid cell predicts B bounding boxes and their categories, i.e., the target area and the target type in the first image, where a bounding box comprises the target center coordinates, width, height and confidence. The target detection algorithm performs detection and identification in the same pass, achieving an end-to-end effect.
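As a minimal illustration of the grid assignment described above (a sketch, not the patent's implementation; the grid size and image dimensions below are assumed values), the cell responsible for a target can be computed as follows:

def assign_grid_cell(center_x, center_y, img_w, img_h, s=13):
    """Return the (row, col) of the S x S grid cell whose area contains the target center."""
    col = min(int(center_x / img_w * s), s - 1)
    row = min(int(center_y / img_h * s), s - 1)
    return row, col

# Example: a target centered at (400, 300) in a 416 x 416 image.
print(assign_grid_cell(400, 300, 416, 416))  # -> (9, 12)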
In order to measure how close a predicted box is to the real target, the IOU (Intersection over Union) algorithm is used to compute the ratio of the intersection to the union of the predicted box and the real box: the closer the predicted box is to the real box, the larger this ratio. Suppose the top-left corner of the real box is (x_a1, y_a1) and its bottom-right corner is (x_a2, y_a2), and the top-left corner of the predicted box is (x_b1, y_b1) and its bottom-right corner is (x_b2, y_b2). The IOU calculation proceeds as follows:
The top-left corner of the intersection is (x_1, y_1), as shown in equation (1):
x_1 = max(x_a1, x_b1), y_1 = max(y_a1, y_b1)    (1)
The bottom-right corner of the intersection is (x_2, y_2), as shown in equation (2):
x_2 = min(x_a2, x_b2), y_2 = min(y_a2, y_b2)    (2)
The intersection area is intersection, as shown in equation (3):
intersection = max(x_2 - x_1, 0) * max(y_2 - y_1, 0)    (3)
The union area is union, as shown in equation (4):
Area_a = (x_a2 - x_a1) * (y_a2 - y_a1)
Area_b = (x_b2 - x_b1) * (y_b2 - y_b1)
union = Area_a + Area_b - intersection    (4)
where Area_a is the area of the real box and Area_b is the area of the predicted box.
The ratio of the intersection to the union of the predicted box and the real box is the IOU, as shown in equation (5):
IOU = intersection / union    (5)
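A minimal sketch of equations (1) to (5) as a standalone function (an illustration under the same box-coordinate convention, not code taken from the patent):

def iou(box_a, box_b):
    """Each box is (x1, y1, x2, y2) with (x1, y1) the top-left and (x2, y2) the bottom-right corner."""
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection: max of the top-left corners, min of the bottom-right corners.
    x1, y1 = max(xa1, xb1), max(ya1, yb1)
    x2, y2 = min(xa2, xb2), min(ya2, yb2)
    intersection = max(x2 - x1, 0) * max(y2 - y1, 0)
    area_a = (xa2 - xa1) * (ya2 - ya1)
    area_b = (xb2 - xb1) * (yb2 - yb1)
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0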
and under the condition that the first image is qualified in the preliminary examination, further detecting a target area and a target type in the first image which is qualified in the preliminary examination based on a target detection algorithm, and identifying target content information which needs to be detected in the first image according to the target area and the target type, thereby detecting whether the target content information meets the construction specification.
In some embodiments, the image blur detection comprises the following steps: detecting an image edge of the first image based on a Laplace algorithm, and calculating a variance corresponding to the image edge; when the variance is greater than or equal to a first preset threshold value, determining that the blur detection result of the first image is qualified; and when the variance is smaller than the first preset threshold value, determining that the blur detection result of the first image is unqualified.
Illustratively, a Convolutional Neural Network (CNN) is widely used in the image field; it can automatically learn features of an image, such as color and texture, without destroying the spatial information of the image. Convolution is an operation in which a filter slides over the image, treated as a matrix, and performs element-wise multiplication and summation. Image blur includes cases such as foreground blur, background blur, local blur and global blur, and an accurate classification into clear and blurred images can be achieved using the self-learning capability of the convolutional neural network. As shown in fig. 2, the convolutional neural network is divided into two convolution steps: the first step performs a 1 x 1 convolution over the channels, and the second step obtains several feature maps with 3 x 3 convolution kernels and then merges them. The structure of the convolutional neural network shown in fig. 2 reduces the number of parameters while enriching the image features.
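A rough sketch of a block consistent with this description (a 1 x 1 channel-reduction convolution followed by 3 x 3 convolutions whose feature maps are merged); since fig. 2 is not reproduced here, the structure and channel counts below are assumptions, not the patent's exact network:

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch=64, squeeze_ch=16, out_ch=64):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)  # step 1: 1x1 convolution over channels
        self.expand1 = nn.Conv2d(squeeze_ch, out_ch // 2, kernel_size=3, padding=1)
        self.expand2 = nn.Conv2d(squeeze_ch, out_ch // 2, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # step 2: several 3x3 feature maps, merged along the channel dimension
        return torch.cat([self.relu(self.expand1(x)), self.relu(self.expand2(x))], dim=1)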
The base station construction quality detection system in the embodiment of the present application can perform image blur detection on the first image based on a fast Fourier transform algorithm or a Laplace algorithm. When detection is performed based on the fast Fourier transform algorithm, thresholds for how much low-frequency and high-frequency content indicates blur must be chosen to distinguish the degree of blurring of the image, which is cumbersome. When detection is performed based on the Laplace algorithm, a single floating-point number representing the degree of blur of the image can be output by taking a single (approximately gray) channel of the image and the 3 x 3 convolution kernel in fig. 2.
Taking image blur detection with the Laplace algorithm as an example, the Laplacian operator is a second-derivative measure of an image that emphasizes regions of rapidly changing intensity, i.e., boundaries, so it is commonly used for boundary detection. A normal image has clear boundaries and therefore a large variance, while a blurred image contains little boundary information and therefore a small variance, so the degree of blur can be judged from the size of the variance. A suitable variance threshold therefore needs to be determined in advance to judge image blur, and the threshold may be adjusted step by step according to the recognition results on photos of other business scenarios, which is not specifically limited in this embodiment. Since the Laplacian is the simplest isotropic differential operator and has rotational invariance, the Laplacian transform of a two-dimensional image function is the isotropic second derivative. The edge of the image is detected with the Laplace algorithm and the variance is calculated; for example, with the first preset threshold set to 40, when the variance is greater than or equal to 40, the detection result is qualified, i.e., the first image is clear; when the variance is less than 40, the detection result is unqualified, i.e., the first image is blurred.
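A minimal sketch of this check, assuming OpenCV; the threshold of 40 follows the example above and would normally be tuned for the business scenario:

import cv2

def is_sharp(image_path, threshold=40.0):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # single (approximately gray) channel
    variance = cv2.Laplacian(gray, cv2.CV_64F).var()      # variance of the Laplacian edge response
    return variance >= threshold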
In some embodiments, the image repeatability detection comprises the steps of: extracting angular points of each image in the first image based on an image similarity detection algorithm; for any two images, matching the corner points in the first image with the corner points in the second image based on a target matching algorithm, and recording the number of the matched corner points; when the number of the matched corner points is larger than or equal to a second preset threshold value, determining that the first image and the second image are repeated; and when the number of the matched corner points is less than a second preset threshold value, determining that the first image and the second image are not repeated.
Illustratively, the corner points of each of the first images may be extracted by the ORB (Oriented FAST and Rotated BRIEF) algorithm. ORB is an algorithm for fast feature point extraction and description; it builds an image pyramid from the picture, where the pyramid consists of versions of the original image at a series of different resolutions. Each level of the pyramid is a downsampled version of the image at the previous level, which reduces the resolution. Corner information is extracted from images at different levels: a corner is a relatively salient point in the image, and taking a point as reference, it is compared and classified against its surrounding pixels; if the point is not in the same class as most of its surrounding points, it is a key point. After finding key points at all levels of the pyramid, the ORB algorithm assigns each key point a direction, e.g., left or right, depending on how the intensity around the key point varies. The pyramid and the key-point directions give the image scale invariance and rotation invariance, i.e., the size and orientation of an object in the image do not affect the detection result.
After the corner information of different images is extracted through the ORB algorithm, the distance between different corners can be calculated through a K-nearest neighbor (KNN) classification algorithm, the distance boundary value of the matched corners is set, matching between the corners of different images is carried out through a Scale-Invariant Feature Transform (SIFT) algorithm, the number of the matched corners is recorded, and whether different images are repeated or not is judged according to the number of the matched corners. Setting a threshold value of the number of corner points for judging the repeatability of the images as a second preset threshold value, and when the number of the corner points matched with the two images is greater than or equal to the second preset threshold value, showing that the two images are repeated; and when the number of the corner points matched by the two images is less than a second preset threshold value, the two images are not repeated.
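One common OpenCV realization of this repeatability check is sketched below (ORB descriptors with KNN matching and a ratio test); the patent also mentions SIFT for matching, and the match-count threshold and ratio here are assumed values:

import cv2

def is_duplicate(path_a, path_b, match_threshold=80, ratio=0.75):
    orb = cv2.ORB_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        # keep a match only when it is clearly better than the second-best candidate
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good) >= match_threshold   # second preset threshold on matched corner points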
In some embodiments, the first image has a watermark thereon that characterizes watermark identification time of the first image, location of the base station, and item information of the base station; before the first image is detected based on the construction result detection model to obtain the target content information in the first image, the detection method further includes: detecting whether the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct or not according to the current time and the pre-stored project information of the engineering; detecting the first image based on the construction result detection model to obtain target content information in the first image, which specifically includes: and under the conditions that the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct and the first image is qualified in preliminary examination, the construction result detection model detects the target area and the target type in the first image qualified in preliminary examination based on a target detection algorithm and identifies the target content information in the first image according to the target area and the target type.
Illustratively, a first image uploaded to the artificial intelligence central processing platform by a constructor is provided with a watermark representing the watermark identification time of the first image, the position of a base station and the project information of the base station, and a base station construction quality detection system detects whether the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct or not according to the current time of the system and the pre-stored project information after receiving the first image with the watermark identification through the artificial intelligence central processing platform. And under the condition that all information is correct and the primary audit of the first image is qualified, the base station construction quality detection system detects the target area and the target type in the first image based on the construction result detection model and identifies the target content information in the first image according to the target area and the target type.
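An illustrative sketch of this watermark check (the field names, record layout and freshness window below are assumptions for illustration, not taken from the patent):

from datetime import datetime, timedelta

def watermark_is_valid(watermark, project_records, max_age_hours=24):
    """watermark: dict with 'time' (datetime), 'site_location' and 'project_id' parsed from the photo."""
    record = project_records.get(watermark["project_id"])
    if record is None:
        return False                                      # unknown project information
    fresh = datetime.now() - watermark["time"] <= timedelta(hours=max_age_hours)
    return fresh and watermark["site_location"] == record["site_location"]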
In some embodiments, determining whether the construction of the base station meets the construction specification according to the target content information specifically includes: for the first image qualified by the preliminary examination, detecting whether the base station type can be identified in the first image, detecting whether the distance between the top of the holding pole and the top of the antenna box and/or the height of the antenna box is smaller than a third preset threshold value, and detecting whether a lightning rod is installed; the base station type can be identified in the first image, the distance between the top of the pole and the top of the antenna box in the first image and/or the height of the antenna box in the first image are/is smaller than a third preset threshold value, and under the condition that the lightning rod is installed on the base station, the construction of the base station is determined to be in accordance with the construction specification.
Illustratively, after the base station construction quality detection system identifies the target content information in the first image, the construction result detection model on the artificial intelligence central processing platform identifies the base station type (4G or 5G), the holding pole, the antenna box, the lightning rod and so on in the first image. If the lightning rod and the base station type in the first image can be identified, and the distance between the top of the holding pole and the top of the antenna box and/or the height of the antenna box is detected to be smaller than a third preset threshold, it is determined that the construction of the base station meets the construction specification; the base station type may be a 4G base station or a 5G base station, and the third preset threshold may be 0.3 m, which is not specifically limited in this embodiment. If the lightning rod, the base station type or the whole holding pole cannot be identified; or the holding pole is identified but the top of the antenna box is higher than the top of the holding pole; or the distance from the antenna box to the top of the holding pole cannot be identified because the photo is incomplete and leaves no space between the pole top and the image edge, or because of an abnormal shooting angle looking straight down or straight up, then it is determined that the construction of the base station does not meet the construction specification.
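A simplified sketch of this specification check; the 0.3 m example threshold mirrors the text, but the structure of the detection result and its field names are assumptions:

def meets_construction_spec(detections, max_gap_m=0.3):
    """detections: dict of targets recognized by the detection model (assumed layout)."""
    has_type = detections.get("base_station_type") in ("4G", "5G")
    has_rod = detections.get("lightning_rod", False)
    gap = detections.get("pole_top_to_antenna_top_m")     # distance from pole top to antenna box top
    gap_ok = gap is not None and gap < max_gap_m
    return has_type and has_rod and gap_ok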
Fig. 3 is a schematic flow chart of another detection method for base station construction according to an embodiment of the present application. As shown in fig. 3, according to some embodiments of the present application, optionally, before storing the trained construction result detection model in S101 in advance, the method for detecting base station construction provided in the embodiments of the present application may further include the following steps S301 to S304.
S301, obtaining a sample image of a sample base station construction site.
And acquiring a sample image of a sample base station construction site through terminal equipment of a constructor, wherein the sample image is derived from a live-action photo shot by the constructor at the base station construction site, screening the live-action photo, and labeling the sample image by using an image labeling tool, namely, labelImg, so as to be used for model training and parameter adjustment.
And S302, turning over the sample image.
Flipping the existing sample images produces more sample images; since image data are important for training a neural network, abundant image data help improve the accuracy and generalization ability of the neural network. The structure of the neural network may include, but is not limited to, a pyramid, skip connections (shortcuts), and attention.
And S303, summarizing the sample image before the turning and the sample image after the turning to obtain a training data set.
The sample images before flipping and the sample images after flipping are gathered together, and the augmented images are labeled to form a training data set; the sample images are split into a training set and a test set in a chosen proportion. For image blur detection, image repeatability detection, image shooting angle detection and image occlusion detection, a training data set needs to be produced for model training.
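A minimal sketch of the flip augmentation, assuming OpenCV and labelImg-style (x1, y1, x2, y2) box annotations (the export format is an assumption here):

import cv2

def flip_sample(image, boxes):
    """Flip an image left-right and mirror its annotation boxes accordingly."""
    h, w = image.shape[:2]
    flipped = cv2.flip(image, 1)                          # 1 = horizontal flip
    flipped_boxes = [(w - x2, y1, w - x1, y2) for (x1, y1, x2, y2) in boxes]
    return flipped, flipped_boxes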
S304, training the construction result detection model based on the training data set to obtain the trained construction result detection model.
Firstly, according to the over-fitting or under-fitting state, the number of layers and nodes of the neural network is adjusted, or dropout layers are added as appropriate; a Graphics Processing Unit (GPU) is used for deep learning model training, the accuracy, loss and other metrics of the training set and the test set are calculated and recorded, and the neural network parameters are adjusted accordingly so as to optimize the detection model.
Secondly, the construction result detection model is trained with the one-stage YOLOv3 algorithm using transfer learning; an optimizer such as Stochastic Gradient Descent (SGD), Momentum gradient descent, Root Mean Square Propagation (RMSProp) or Adaptive Moment Estimation (Adam) is selected, and the hyperparameters associated with loss functions such as the cross-entropy loss, the squared loss and the logarithmic loss are optimized, where the hyperparameters may include, but are not limited to, the learning rate, the batch size and the number of training epochs.
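An illustrative sketch of choosing among the optimizers and hyperparameters mentioned above, in PyTorch; the concrete values and the `model` object are assumptions rather than settings given in the patent:

import torch

def build_optimizer(model, name="adam", lr=1e-3, momentum=0.9):
    if name == "sgd":
        return torch.optim.SGD(model.parameters(), lr=lr)
    if name == "momentum":
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    if name == "rmsprop":
        return torch.optim.RMSprop(model.parameters(), lr=lr)
    return torch.optim.Adam(model.parameters(), lr=lr)

# Typical hyperparameters tuned during training (illustrative values):
BATCH_SIZE = 16
EPOCHS = 100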
And finally forming a construction result detection model group, and storing the construction result detection model group on an artificial intelligent central processing platform. When the detection model is trained, the accelerated training of the model can be realized through GPU resources, and after the model training is finished, the model group can fully utilize the bearing resources of the artificial intelligent central processing platform.
In some embodiments, after determining whether the construction of the base station meets the construction specification according to the target content information, the detection method further includes: sending a detection result of whether the construction of the base station meets the construction specification to terminal equipment used by constructors; and under the condition that the construction of the base station does not accord with the construction specification, sending the reason which does not accord with the construction specification to the terminal equipment used by the constructor.
Illustratively, after the base station construction quality detection system checks the target content information, the detection result of whether the base station construction meets the construction specification is fed back in real time to the constructor's terminal device through the artificial intelligence central processing platform. If the construction of the base station does not meet the construction specification, the constructor is informed of the reason, so that the constructor can rectify the construction according to the detection result, photograph again after rectification and upload the new photo to the artificial intelligence central processing platform; the base station construction quality detection system then detects the image again based on the construction result detection model, until the construction of the base station meets the construction specification.
In some embodiments, after determining whether the construction of the base station meets the construction specification according to the target content information, the detection method further includes: and under the condition that the construction of the base station conforms to the construction specification, forming an engineering project electronic archive of the base station by the first image according to the watermark identification time of the first image, the position of the base station and the project information of the base station, and storing the first image to a qualified base station construction site live-action photo library.
Illustratively, a first image which meets the construction specification is used as a material, an artificial intelligence central processing platform forms the images into an engineering project electronic file of the base station according to the watermark identification time, the position of the base station and the project information of the base station, and the base station construction quality detection system can realize online and offline examination of base station construction from multiple dimensions such as time, service scenes and units according to the engineering project electronic file. In addition, the first image is stored in a qualified base station construction site live-action photo library and can be used as a reference basis for repeated detection of subsequent images.
Based on the detection method for the base station construction provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the detection device for the base station construction. Please see the examples below.
Referring first to fig. 4, a detection apparatus 40 for base station construction provided in the embodiment of the present application includes the following modules:
the storage module 401 is used for storing the trained construction result detection model in advance;
a receiving module 402, configured to receive a first image of a construction site of a base station;
the detecting module 403 is configured to detect the first image based on the construction result detection model, obtain target content information in the first image, and determine whether the construction of the base station meets the construction specification according to the target content information.
The detection apparatus for base station construction of the embodiment of the present application pre-stores a trained construction result detection model; receives a first image of a construction site of a base station; and detects the first image based on the construction result detection model to obtain target content information in the first image, determining, according to the target content information, whether the construction of the base station meets the construction specification. The construction result detection model is trained in advance based on artificial intelligence technology, the target content information requiring quality inspection is determined in the first image by the construction result detection model, and all-round intelligent detection is performed on the target content information. Because the construction result detection model is trained using artificial intelligence technology, using it to check the construction quality of the base station makes the quality detection result more accurate; replacing the manual approach with artificial intelligence also prevents subjective manual judgment criteria from having a decisive influence on the detection result, further improving its accuracy. Detecting the construction quality of the base station based on the construction result detection model effectively solves the problem of low efficiency of manual judgment and improves the quality inspection efficiency of base station construction engineering, thereby further achieving full quality inspection of base station construction.
In some embodiments, the detecting device 40 may further include: a preliminary review module 404, configured to perform a preliminary review on the first image, where the preliminary review includes at least one of image blur detection, image repeatability detection, image shooting angle detection, and detection whether the image is blocked; detecting the first image based on the construction result detection model to obtain target content information in the first image, which specifically includes: and under the condition that the first image is qualified in preliminary examination, detecting the first image qualified in preliminary examination based on the construction result detection model to obtain target content information.
In some embodiments, the preliminary review module 404 is specifically configured to perform image blur detection, comprising the steps of: detecting an image edge of the first image based on a Laplace algorithm, and calculating a variance corresponding to the image edge; when the variance is greater than or equal to a first preset threshold value, determining that the blur detection result of the first image is qualified; and when the variance is smaller than the first preset threshold value, determining that the blur detection result of the first image is unqualified.
In some embodiments, the preliminary review module 404 may be further configured to: the image repeatability detection comprises the following steps: extracting angular points of each image in the first image based on an image similarity detection algorithm; for any two images, matching the corner points in the first image with the corner points in the second image based on a target matching algorithm, and recording the number of the matched corner points; when the number of the matched corner points is larger than or equal to a second preset threshold value, determining that the first image and the second image are repeated; and when the number of the matched corner points is less than a second preset threshold value, determining that the first image and the second image are not repeated.
In some embodiments, the detecting device 40 may further include: a watermark detection module 405, configured to detect whether the watermark identification time carried on the watermark of the first image, the position of the base station, and the project information of the base station are correct according to the current time and the pre-stored project information; detecting the first image based on the construction result detection model to obtain target content information in the first image, which specifically includes: and under the condition that the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct, detecting the first image based on a construction result detection model to obtain target content information.
In some embodiments, the detection module 403 may be further configured to: detecting whether the base station type can be identified in the first image; detecting whether the distance between the top of the holding pole and the top of the antenna box and/or the height of the antenna box is smaller than a third preset threshold value or not; and detecting whether the lightning rod is installed or not.
In some embodiments, the detection module 403 may be further configured to: the construction result detection model detects a target area and a target type in the first image based on a target detection algorithm, and identifies target content information in the first image according to the target area and the target type.
In some embodiments, the detecting device 40 may further include: a model training module 406, configured to obtain a sample image of a sample base station construction site; turning over the sample image; summarizing the sample image before overturning and the sample image after overturning to obtain a training data set; and training the construction result detection model based on the training data set to obtain the trained construction result detection model.
In some embodiments, the detection device 40 may further include a feedback module 407, configured to send the detection result of whether the construction of the base station meets the construction specification to the terminal device used by the constructor, and, when the construction of the base station does not meet the construction specification, send the reason for the non-conformance to the terminal device used by the constructor.
In some embodiments, the detection device 40 may further include an archive forming module 408, configured to, when the construction of the base station meets the construction specification, compile the first image into an engineering project electronic archive of the base station according to the watermark identification time of the first image, the position of the base station, and the project information of the base station, and store the first image in a photo library of qualified base station construction sites.
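As a non-limiting illustration, such archiving could file each passing photo under a per-project, per-station directory named from the watermark fields; the layout and field names below are assumptions, not part of the disclosure.

```python
import shutil
from pathlib import Path

def archive_qualified_photo(image_path: str, watermark: dict,
                            library_root: str = "qualified_site_photos") -> Path:
    """File a passing site photo into the project's electronic archive.
    Directory layout (project / station location / timestamped name) is illustrative."""
    dest_dir = Path(library_root) / watermark["project_info"] / watermark["base_station_location"]
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / f'{watermark["identification_time"]}_{Path(image_path).name}'
    shutil.copy2(image_path, dest)  # preserve the original file's metadata
    return dest
```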
Each module of the apparatus shown in fig. 4 has the function of implementing each step of the method for detecting base station construction provided in the foregoing method embodiments and can achieve the corresponding technical effects; for brevity, details are not repeated here.
Based on the detection method for base station construction provided in the foregoing embodiments, the present application correspondingly further provides a specific implementation of an electronic device. See the following embodiments.
Fig. 5 shows a hardware structure diagram of an electronic device provided in an embodiment of the present application.
The electronic device may comprise a processor 501 and a memory 502 in which computer program instructions are stored.
Specifically, the processor 501 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 502 may include mass storage for data or instructions. By way of example and not limitation, the memory 502 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. In one example, the memory 502 may include removable or non-removable (or fixed) media, or the memory 502 may be non-volatile solid-state memory. The memory 502 may be internal or external to the integrated gateway disaster recovery device.
In one example, the memory 502 may be read-only memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The memory 502 may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the methods according to an aspect of the present application.
The processor 501 reads and executes the computer program instructions stored in the memory 502 to implement the method/steps in the above method embodiments, and achieve the corresponding technical effects achieved by the method/steps executed by the method embodiments, which are not described herein again for brevity.
In one example, the electronic device may also include a communication interface 503 and a bus 510. As shown in fig. 5, the processor 501, the memory 502, and the communication interface 503 are connected via a bus 510 to complete communication therebetween.
The communication interface 503 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present application.
The bus 510 includes hardware, software, or both, coupling the components of the electronic device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. The bus 510 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the present application, any suitable buses or interconnects are contemplated.
In addition, in combination with the detection method for base station construction in the foregoing embodiments, the embodiments of the present application may provide a computer-readable storage medium for implementation. The computer-readable storage medium stores computer program instructions; when the computer program instructions are executed by a processor, the detection method for base station construction in any of the above embodiments is implemented. Examples of the computer-readable storage medium include non-transitory computer-readable storage media such as electronic circuits, semiconductor memory devices, ROM, random access memory, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, and hard disks.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet and intranets.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed at the same time.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present application are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (10)

1. A detection method for base station construction is characterized by being applied to a central processing platform and comprising the following steps:
pre-storing the trained construction result detection model;
receiving a first image of a construction site of the base station;
and detecting the first image based on the construction result detection model to obtain target content information in the first image, and determining whether the construction of the base station meets the construction specification according to the target content information.
2. The detection method of claim 1, wherein after the receiving of the first image of the construction site of the base station, the detection method further comprises:
performing a preliminary review on the first image, wherein the preliminary review comprises at least one of image blur detection, image repeatability detection, image shooting angle detection, and image occlusion detection;
the detecting the first image based on the construction result detection model to obtain the target content information in the first image specifically includes:
the construction result detection model is a machine learning model constructed based on a target detection algorithm, and when the first image passes the preliminary review, the construction result detection model detects a target area and a target category in the first image that passed the preliminary review based on the target detection algorithm, and identifies the target content information in the first image according to the target area and the target category.
3. The detection method according to claim 2, wherein the image blur detection comprises the steps of:
detecting an image edge of the first image based on the Laplacian operator, and calculating a variance corresponding to the image edge;
when the variance is greater than or equal to a first preset threshold, determining that the blur detection result of the first image is qualified;
and when the variance is smaller than the first preset threshold, determining that the blur detection result of the first image is unqualified.
4. The detection method of claim 2, wherein the image repeatability detection comprises the following steps:
extracting corner points from each image in the first image based on an image similarity detection algorithm;
for any two images, matching the corner points in the first image with the corner points in the second image based on a target matching algorithm, and recording the number of matched corner points;
when the number of matched corner points is greater than or equal to a second preset threshold, determining that the first image and the second image are repeated;
and when the number of matched corner points is smaller than the second preset threshold, determining that the first image and the second image are not repeated.
5. The detection method according to claim 2, wherein the first image carries a watermark representing the watermark identification time of the first image, the position of the base station, and the project information of the base station;
before the detecting the first image based on the construction result detection model to obtain the target content information in the first image, the detecting method further includes:
detecting whether the watermark identification time carried on the watermark of the first image, the position of the base station and the project information of the base station are correct or not according to the current time and pre-stored project information;
the detecting the first image based on the construction result detection model to obtain the target content information in the first image specifically includes:
and when the watermark identification time carried in the watermark of the first image, the position of the base station, and the project information of the base station are correct and the first image passes the preliminary review, the construction result detection model detects a target area and a target category in the first image that passed the preliminary review based on the target detection algorithm, and identifies the target content information in the first image according to the target area and the target category.
6. The detection method according to claim 2, wherein the determining whether the construction of the base station meets a construction specification according to the target content information specifically includes:
for the first image that passed the preliminary review, detecting whether the base station type can be identified in the first image, detecting whether the distance between the top of the holding pole and the top of the antenna box in the first image and/or the height of the antenna box in the first image is smaller than a third preset threshold, and detecting whether the lightning rod is installed on the base station;
and determining that the construction of the base station meets the construction specification when the base station type can be identified in the first image, the distance between the top of the holding pole and the top of the antenna box in the first image and/or the height of the antenna box in the first image is smaller than the third preset threshold, and the lightning rod is installed on the base station.
7. The detection method according to claim 1, wherein before the pre-storing of the trained construction result detection model, the detection method further comprises:
obtaining a sample image of a sample base station construction site;
flipping the sample image;
combining the sample image before flipping and the sample image after flipping to obtain a training data set;
and training the construction result detection model based on the training data set to obtain the trained construction result detection model.
8. A detection device for base station construction, characterized by being applied to a central processing platform, the detection device comprising:
a storage module, configured to pre-store the trained construction result detection model;
a receiving module, configured to receive a first image of a construction site of the base station;
and a detection module, configured to detect the first image based on the construction result detection model to obtain target content information in the first image, and determine whether the construction of the base station meets the construction specification according to the target content information.
9. An electronic device, characterized in that the electronic device comprises: processor, memory and computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the detection method of base station construction according to any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of a method for detecting base station construction according to any one of claims 1 to 7.
CN202211624085.1A 2022-12-15 2022-12-15 Method, device and equipment for detecting base station construction and computer readable storage medium Pending CN115880260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211624085.1A CN115880260A (en) 2022-12-15 2022-12-15 Method, device and equipment for detecting base station construction and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211624085.1A CN115880260A (en) 2022-12-15 2022-12-15 Method, device and equipment for detecting base station construction and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115880260A true CN115880260A (en) 2023-03-31

Family

ID=85755114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211624085.1A Pending CN115880260A (en) 2022-12-15 2022-12-15 Method, device and equipment for detecting base station construction and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115880260A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116205905A (en) * 2023-04-25 2023-06-02 合肥中科融道智能科技有限公司 Power distribution network construction safety and quality image detection method and system based on mobile terminal
CN117252328A (en) * 2023-07-04 2023-12-19 南通理工学院 Project integrated management method and system based on BIM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination