WO2021000948A1 - Method and system for detecting counterweight weight, acquisition method and system, and crane - Google Patents

Method and system for detecting counterweight weight, acquisition method and system, and crane

Info

Publication number
WO2021000948A1
WO2021000948A1 (PCT/CN2020/100176, CN2020100176W)
Authority
WO
WIPO (PCT)
Prior art keywords
area
counterweight
image
weight
detected
Prior art date
Application number
PCT/CN2020/100176
Other languages
English (en)
Chinese (zh)
Inventor
徐柏科
范卿
曾杨
谭智仁
雷美玲
Original Assignee
中联重科股份有限公司
Priority date
Filing date
Publication date
Application filed by 中联重科股份有限公司
Publication of WO2021000948A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66CCRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00Other constructional features or details
    • B66C13/16Applications of indicating, registering, or weighing devices
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • The invention relates to the field of counterweight identification, and in particular to a counterweight weight detection method, an acquisition method, a detection system, an acquisition system and a crane.
  • At present, the weight of a crane counterweight is identified mainly by people: workers use video devices or direct visual inspection to read the weight marked on each counterweight, calculate the total counterweight weight, and match it against the crane. This manual procedure is error-prone; in particular, when a counterweight calculation error leads to the wrong working condition being selected, the lifting capacity table associated with that working condition is also wrong, which can easily cause accidents such as overloading or overturning of the crane.
  • The paper "Research and Design of Embedded Crane Counterweight Automatic Identification System" discloses the following: the position and size of the white paper and the sign attached to the counterweight are detected first, and the white paper and sign are then used as references to locate and identify the counterweight weight.
  • However, because the white paper and the signs on the counterweight wear out and fall off easily during long-term use, a detection approach that relies on them lacks reliability and is not practical.
  • The purpose of the present invention is to provide a counterweight weight detection method, acquisition method, detection system, acquisition system and crane that can quickly lock onto and extract the area where the counterweight weight is located, with good reliability and robustness, so as to realize automatic identification and high-precision detection of the counterweight weight.
  • the present invention provides a method for detecting the weight of a counterweight.
  • The detection method includes: acquiring the area to be detected in the image of the counterweight based on the structural features and color features in that image; binarizing the area to be detected; extracting the quasi-target area from the binarized area to be detected; and processing the extracted quasi-target area with a trained classifier to detect the weight of the counterweight.
  • The detection method further includes, before the step of binarizing the area to be detected: calculating the average gray value of the area to be detected; and, when the average gray value of the area to be detected is less than a preset average value, performing image texture enhancement on the area to be detected.
  • Performing image texture enhancement on the area to be detected includes: using a first structural element to perform opening and closing operations on the area to be detected; obtaining a first image based on the area to be detected and the image after the opening operation; obtaining a second image based on the area to be detected and the image after the closing operation; and obtaining a fused image corresponding to the area to be detected based on the first image and the second image.
  • Obtaining the fused image corresponding to the area to be detected based on the first image and the second image includes: calculating the edge information entropy of the first image and of the second image respectively; and performing weighted fusion of the first image and the second image according to their edge information entropies to obtain the fused image corresponding to the area to be detected.
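  • As an illustration of the enhancement described above, the following sketch (Python with OpenCV and NumPy, which the patent does not prescribe) opens and closes the area to be detected with a small structuring element, forms the two difference images, and fuses them with weights proportional to their edge information entropies. The exact difference and fusion formulas, the Sobel-based entropy and the kernel size are assumptions, since the patent does not spell them out.

```python
import cv2
import numpy as np

def edge_entropy(img):
    """Shannon entropy of the Sobel gradient-magnitude histogram (one plausible
    reading of 'edge information entropy')."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    hist, _ = np.histogram(cv2.magnitude(gx, gy), bins=256)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def enhance_texture(roi, ksize=3):
    """Open and close the area to be detected with the (first) structuring
    element, build the two difference images, and fuse them back onto the ROI
    with weights proportional to their edge entropies."""
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    opened = cv2.morphologyEx(roi, cv2.MORPH_OPEN, se)
    closed = cv2.morphologyEx(roi, cv2.MORPH_CLOSE, se)
    first = cv2.subtract(roi, opened)    # bright detail removed by opening
    second = cv2.subtract(closed, roi)   # dark detail filled by closing
    e1, e2 = edge_entropy(first), edge_entropy(second)
    w1, w2 = e1 / (e1 + e2 + 1e-6), e2 / (e1 + e2 + 1e-6)
    fused = roi.astype(np.float32) + w1 * first - w2 * second
    return np.clip(fused, 0, 255).astype(np.uint8)
```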
  • The detection method further includes: before the step of calculating the average gray value of the area to be detected, using a second structural element to perform opening and closing operations on the area to be detected so as to filter out noise.
  • Extracting the quasi-target area from the binarized area to be detected includes: using image processing to obtain the connected areas in the binarized area to be detected; and extracting the quasi-target area based on the location information of those connected areas.
  • The detection method further includes, before the step of extracting the quasi-target area based on the position information of the connected area: dividing the connected area based on its concave-convex curvature to remove interference points; estimating the area and height-to-width ratio of each sub-connected area obtained by dividing the connected area; and removing a specific sub-connected area when its area and height-to-width ratio satisfy any of the following removal conditions: the area of the specific sub-connected area is smaller than a first preset area; the area of the specific sub-connected area is larger than a second preset area; or the height-to-width ratio of the specific sub-connected area is greater than a preset ratio, wherein the first preset area is smaller than the second preset area.
  • Obtaining the area to be detected in the image of the counterweight based on the structural and color features of that image includes: obtaining, based on the structural features of the image, the part of the image that contains the area to be detected; and, based on the color features of that partial image, performing row and column cutting of the partial image according to the magnitude of the horizontal gray gradient complexity mutation and of the vertical gray gradient complexity mutation, so as to obtain the area to be detected.
  • Performing the row and column cutting of the partial image includes: calculating the horizontal gray gradient complexity and the vertical gray gradient complexity based on the color features of the partial image; obtaining, from the horizontal gray gradient complexity and the vertical gray gradient complexity respectively, the maximum and minimum values of the horizontal gray gradient complexity mutation and the maximum and minimum values of the vertical gray gradient complexity mutation; and cutting the partial image along the columns and rows corresponding to those values.
  • Through the above technical solution, the present invention creatively obtains the area to be detected, which contains the counterweight weight, based on the structural and color features of the counterweight image; extracts the quasi-target area containing the counterweight weight from the binarized area to be detected; and finally processes the extracted quasi-target area with a pre-trained classifier to detect the weight of the counterweight. The method can quickly lock onto and extract the area where the counterweight weight is located and has good reliability and robustness, thereby realizing automatic identification and high-precision detection of the counterweight weight.
  • Correspondingly, the present invention also provides a method for obtaining the total counterweight weight. The obtaining method includes: detecting the counterweight weight of a first counterweight according to the above detection method; detecting the counterweight weight of a second counterweight according to the same detection method; and obtaining the total counterweight weight based on the counterweight weights of the first counterweight and the second counterweight.
  • The acquisition method further includes: acquiring images of the first counterweight and the second counterweight; and, after the step of detecting the counterweight weight of the first counterweight has been performed and the acquired image shows that the second counterweight is installed on the positioning pins, assigning a value of 0 to the pixels of the image of the first counterweight based on the column corresponding to the maximum value of the vertical gradient mutation and the row corresponding to the maximum value of the horizontal gradient mutation of that image.
  • Through the above technical solution, the present invention creatively detects the counterweight weights of the first and second counterweights with the above detection method and obtains the total counterweight weight from them, so that the total counterweight weight can be identified effectively and with high accuracy, and the total counterweight can be identified automatically during counterweight assembly.
  • the present invention also provides a detection system for the weight of a counterweight.
  • The detection system includes: an area-to-be-detected acquisition device for obtaining the area to be detected in the image of the counterweight based on the structural and color features of that image; a binarization processing device for binarizing the area to be detected; a quasi-target area extraction device for extracting the quasi-target area from the binarized area to be detected; and a detection device for processing the extracted quasi-target area with a trained classifier to detect the weight of the counterweight.
  • The present invention also provides a system for obtaining the total counterweight weight. The obtaining system includes: the counterweight weight detection system described above, for detecting the counterweight weights of a first counterweight and a second counterweight; and a total counterweight weight obtaining device for obtaining the total counterweight weight based on the counterweight weights of the first counterweight and the second counterweight.
  • The acquisition system further includes: a collection device for collecting images of the first counterweight and the second counterweight; and an assignment device for assigning a value of 0 to the pixels of the image of the first counterweight, based on the column corresponding to the maximum value of the vertical gradient mutation and the row corresponding to the maximum value of the horizontal gradient mutation of that image, after the detection system has detected the counterweight weight of the first counterweight and the image collected by the collection device shows that the second counterweight is installed on the positioning pins.
  • The collection device includes: a camera for collecting images of the first counterweight and the second counterweight; and a telescopic control module for controlling the extension and/or rotation of the camera so that the camera's field of view covers at least the area where the first counterweight and the second counterweight are located.
  • the present invention also provides a crane, which is configured with the aforementioned system for obtaining the weight of the counterweight.
  • the present invention also provides a machine-readable storage medium having instructions stored on the machine-readable storage medium for causing a machine to execute the aforementioned method for detecting the weight of a counterweight or the aforementioned method for obtaining the weight of a counterweight .
  • FIG. 1 is a flowchart of a method for detecting the weight of a counterweight provided by an embodiment of the present invention
  • FIG. 2 is a flowchart of obtaining a region to be detected according to an embodiment of the present invention
  • Figure 3 is a schematic structural diagram of a counterweight provided by an embodiment of the present invention.
  • FIG. 5 is a flow chart of removing non-counterweight weight areas in the process of extracting quasi-target areas according to an embodiment of the present invention
  • FIG. 6 is a flowchart of a method for detecting the weight of a counterweight provided by an embodiment of the present invention
  • Figure 7 is a structural diagram of a system for detecting the weight of a counterweight provided by an embodiment of the present invention.
  • Figure 8 is a structural diagram of a system for obtaining a counterweight weight provided by an embodiment of the present invention.
  • FIG. 9 is a flowchart of a method for obtaining a weight of a counterweight provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of the installation position of the camera and the counterweight provided by the embodiment of the present invention.
  • Fig. 1 is a flowchart of a method for detecting the weight of a counterweight provided by an embodiment of the present invention.
  • The detection method may include the following steps: step S101, obtaining the area to be detected in the image of the counterweight based on the structural and color features of that image; step S102, binarizing the area to be detected; step S103, extracting the quasi-target area from the binarized area to be detected; and step S104, processing the extracted quasi-target area with a trained classifier to detect the weight of the counterweight.
  • Optionally, the image of the counterweight may be scaled and converted to grayscale before step S101 is executed.
  • the above detection method can be executed by a weight detection system, and the detection system can be an image analysis processor 800, as shown in FIG. 8.
  • The detection system may further include a vehicle-mounted display 801 for real-time display of the counterweight weight, as shown in FIG. 8.
  • Specifically, step S101 may include the following steps: based on the structural features of the counterweight image, acquiring the part of the image that contains the area to be detected; and, based on the color features of that partial image, performing row and column cutting of the partial image according to the magnitude of the horizontal and vertical gray gradient complexity mutations to obtain the area to be detected.
  • The process of acquiring the part of the image that contains the area to be detected is as follows: the area where the counterweight weight is located is generally at the buckle rigging (the recess), and the counterweight blocks (counterweight block 1 and counterweight block 2) are symmetric, so the left half or the right half of the counterweight image is selected as the object of study (i.e., the partial image).
  • The embodiment of the present invention mainly, but not exclusively, takes the image to the left of the column center line of the image (i.e., the left half) as the object of study (the partial image).
  • the process of performing row and column cutting on this part of the image may include the following steps, as shown in Fig. 2:
  • Step S201: based on the color features of the partial image, calculate the horizontal gray gradient complexity and the vertical gray gradient complexity respectively.
  • The vertical and horizontal gradient complexity are defined over the grayscale counterweight image I(x, y), with row index i = 1, 2, ..., m and column index j = 1, 2, ..., n; C1 and C2 denote the vertical and horizontal gradient complexity of the image, respectively.
  • Step S202: based on the horizontal gray gradient complexity and the vertical gray gradient complexity, obtain the maximum and minimum values of the horizontal gray gradient complexity mutation and the maximum and minimum values of the vertical gray gradient complexity mutation.
  • Specifically, the maximum and minimum values of the horizontal gray gradient complexity mutation, Hori_grad(max j1, min j2), are filtered out and correspond to column numbers j1 and j2; similarly, the maximum and minimum values of the vertical gray gradient complexity mutation, Verti_grad(max i1, min i2), correspond to row numbers i1 and i2.
  • Step S203: cut the partial image along the columns corresponding to the maximum and minimum values of the horizontal gray gradient complexity mutation and the rows corresponding to the maximum and minimum values of the vertical gray gradient complexity mutation, to obtain the area to be detected.
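  • A minimal sketch of steps S201 to S203 follows. The patent's defining formulas for C1 and C2 are not reproduced here, so the complexity is assumed to be the per-row and per-column sum of absolute gray differences, and the "mutation" is taken as the first difference of the complexity curve; function and variable names are illustrative.

```python
import numpy as np

def gradient_complexity(gray):
    """Assumed reading of C1/C2: C1 (vertical complexity) is the per-row sum of
    absolute vertical gray differences of I(x, y); C2 (horizontal complexity) is
    the per-column sum of absolute horizontal differences."""
    I = gray.astype(np.float32)
    c1 = np.abs(np.diff(I, axis=0)).sum(axis=1)   # one value per row i
    c2 = np.abs(np.diff(I, axis=1)).sum(axis=0)   # one value per column j
    return c1, c2

def cut_roi(partial_img):
    """Steps S201-S203: take the largest/smallest complexity 'mutations' (here,
    first differences of C1 and C2) and cut the partial image along the
    corresponding rows i1, i2 and columns j1, j2."""
    c1, c2 = gradient_complexity(partial_img)
    i1, i2 = int(np.argmax(np.diff(c1))), int(np.argmin(np.diff(c1)))
    j1, j2 = int(np.argmax(np.diff(c2))), int(np.argmin(np.diff(c2)))
    top, bottom = sorted((i1, i2))
    left, right = sorted((j1, j2))
    return partial_img[top:bottom + 1, left:right + 1]
```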
  • In step S102, after the area to be detected has been acquired, it can be binarized by analyzing its gray distribution: for example, a pixel whose gray level is greater than a preset gray level is assigned the value 0, and a pixel whose gray level is less than or equal to the preset gray level is assigned the value 1.
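  • A one-line version of this inverse binarization, with an illustrative preset gray level (the patent derives the threshold from the gray distribution of the area to be detected):

```python
import numpy as np

def binarize(roi, preset_gray=128):
    """Step S102: pixels brighter than the preset gray level become 0, the rest 1.
    preset_gray = 128 is only illustrative."""
    return (roi <= preset_gray).astype(np.uint8)
```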
  • the step S103 may include the following steps:
  • Step S401: use image processing to obtain the connected areas in the binarized area to be detected.
  • Step S402 Extract the quasi-target area based on the location information of the connected area.
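  • The connected areas and their location information (step S401) can be obtained, for example, with OpenCV's connected-component analysis; the helper below is a sketch and its output format is illustrative.

```python
import cv2
import numpy as np

def connected_regions(binary_roi):
    """Step S401: label the 8-connected areas of the binarized ROI and return
    their location information (bounding box, area, centroid)."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary_roi.astype(np.uint8), connectivity=8)
    regions = []
    for lbl in range(1, num):                      # label 0 is the background
        x, y, w, h, area = stats[lbl]
        regions.append({"label": lbl, "bbox": (int(x), int(y), int(w), int(h)),
                        "area": int(area), "centroid": tuple(centroids[lbl])})
    return labels, regions
```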
  • In practice, interference points such as dense impurities or stains (i.e., non-digital connected areas) may merge into the quasi-target area, enlarging the identified quasi-target area and ultimately degrading the accuracy and timeliness of the weight detection. Therefore, in order to eliminate the adverse effects of these interference points, preferably, before performing step S402, the coordinate points of the convex hull of the connected area can be calculated and the connected area divided into several sub-connected areas based on those points. Finally, by analyzing the specific conditions of the sub-connected areas, a large number of non-digital connected areas are eliminated, so that the quasi-target area is accurately identified, laying a solid foundation for quickly and accurately identifying the counterweight weight.
  • the above process may include the following steps:
  • Step S501: divide the connected area based on its concave-convex curvature to remove interference points. The concave-convex curvature of the connected area is analyzed, peak points (for example, the points corresponding to the maxima and minima) are extracted, and the connected domains are segmented at the coordinates of these peak points, thereby removing interference such as impurities and stains. At the same time, the connected area is divided into sub-connected areas, as sketched below.
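  • The following sketch approximates step S501 using convexity defects of the connected area's outline as the "peak points"; the patent's exact concave-convex curvature analysis is not given, so the defect-depth criterion and the column-wise split are assumptions.

```python
import cv2
import numpy as np

def split_by_hull_peaks(region_mask, min_defect_depth=3.0):
    """Step S501 approximation: treat deep convexity defects of the connected
    area's outline as the 'peak points' and split the mask at their columns."""
    contours, _ = cv2.findContours(region_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return [region_mask]
    cnt = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    cuts = []
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth / 256.0 >= min_defect_depth:     # depth is fixed-point (1/256 px)
                cuts.append(int(cnt[far][0][0]))      # x-coordinate of the peak point
    bounds = [0] + sorted(set(cuts)) + [region_mask.shape[1]]
    return [region_mask[:, a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]
```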
  • Step S502 estimating the area and height-to-width ratio of each sub-connected area divided in the connected area.
  • Step S503 In the case where the area and the height-to-width ratio of the specific sub-connected area in each of the sub-connected areas meet any of the following elimination conditions, the specific sub-connected area is eliminated.
  • The elimination condition may be that the area of the specific sub-connected area is smaller than the first preset area, that the area of the specific sub-connected area is greater than the second preset area, or that the height-to-width ratio of the specific sub-connected area is greater than the preset ratio, wherein the first preset area is smaller than the second preset area.
  • The area of the region where the counterweight weight marking (such as 8t) is located usually meets certain size specifications; for example:
  • the first preset area may be 150
  • the second preset area may be 3000
  • the preset ratio may be 1.5.
  • the first preset area, the second preset area, and the preset ratio in the embodiment of the present invention are not limited to the aforementioned values, and any other values within a reasonable range are feasible.
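  • With the example values above, the elimination test of step S503 reduces to a few comparisons (the thresholds are the illustrative values quoted in the description, not fixed requirements):

```python
FIRST_PRESET_AREA = 150    # illustrative value from the description
SECOND_PRESET_AREA = 3000  # illustrative value from the description
PRESET_RATIO = 1.5         # illustrative value from the description

def keep_region(area, height, width):
    """Step S503: keep a sub-connected area only if none of the elimination
    conditions hold."""
    if area < FIRST_PRESET_AREA or area > SECOND_PRESET_AREA:
        return False
    if width > 0 and height / width > PRESET_RATIO:
        return False
    return True

# Example with the connected_regions() helper sketched earlier:
#   candidates = [r for r in regions
#                 if keep_region(r["area"], r["bbox"][3], r["bbox"][2])]
```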
  • If a sub-connected area meets an elimination condition, it is de-binarized to eliminate it: for example, if the pixels of the sub-connected area have the value 1, they are inverted to 0, so that the sub-connected area takes the same value as the background (the non-counterweight-weight area).
  • A certain number of positive and negative samples of the counterweight weight area (i.e., the digital area) are collected in advance: the positive samples are areas where the counterweight weight is located (i.e., target areas) and the negative samples are areas where it is not (i.e., non-target areas). The positive and negative samples are used to train a classifier (for example, a support vector machine, SVM), and the trained classifier then processes the quasi-target area extracted in step S103, so as to realize real-time detection of the counterweight weight, as sketched below.
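  • A possible training sketch is shown below using HOG features and an SVM from scikit-learn; the feature choice, window size and SVM parameters are assumptions, since the description only states that a classifier such as an SVM is trained on positive and negative samples.

```python
import cv2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Window size, HOG layout and SVM parameters below are illustrative choices.
HOG = cv2.HOGDescriptor((48, 24), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patch):
    """Fixed-size HOG descriptor of a candidate digit area (grayscale uint8)."""
    return HOG.compute(cv2.resize(patch, (48, 24))).ravel()

def train_weight_classifier(positive_patches, negative_patches):
    """Train an SVM on positive samples (areas containing the counterweight
    weight text) and negative samples (non-target areas)."""
    X = np.array([hog_features(p) for p in positive_patches + negative_patches])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    return clf
```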
  • In addition, a thresholding decision can be used to decide whether to enhance the texture of the counterweight weight text, which effectively highlights the texture change characteristics of the weight text and makes it easier to detect.
  • Specifically, obtaining the fused image corresponding to the area to be detected based on the first image and the second image may include: calculating the edge information entropy of the first image and of the second image respectively; and performing weighted fusion of the two images according to their edge information entropies to obtain the fused image corresponding to the area to be detected.
  • The gray mean grayMean of the area to be detected is calculated and compared against the preset average value (the gray value threshold grayValue_thred) to decide whether image texture (detail) enhancement should be performed: if the average gray value is less than the gray threshold, texture detail enhancement is performed; otherwise it is skipped.
  • Preferably, before the step of calculating the average gray value of the area to be detected, the second structural element is used to perform opening and closing operations on the area to be detected so as to filter out noise.
  • the detection process of the counterweight weight is as follows:
  • step S601 the image of the weight block is scaled and grayed.
  • Step S602 Obtain the area to be detected in the image of the weight block by analyzing the gray gradient complexity of the zoomed and grayed image.
  • Step S603 Perform opening and closing operations on the area to be detected.
  • the purpose of this step is to filter and denoise the area to be detected.
  • Step S604 Obtain the average gray value of the area to be detected.
  • Step S605 It is judged whether the average gray value is greater than the preset average value, if it is greater, step S607 is executed, otherwise, step S606 is executed.
  • Step S606 Perform image texture enhancement processing on the area to be detected, and perform step S607.
  • Step S607 Binarize the area to be detected.
  • Step S608: obtain the connected areas in the binarized area to be detected and calculate the peak points of the convex hull of each connected area.
  • Step S609: divide the connected area based on the peak points of its convex hull to obtain multiple sub-connected areas.
  • Step S610 It is judged whether the area and aspect ratio of each sub-connected region in the multiple sub-connected regions meet the elimination condition, if they are satisfied, step S611 is executed; otherwise, step S612 is executed.
  • step S611 the sub-connected regions meeting the elimination condition are eliminated, and step S612 is executed.
  • Step S612: extract the roughly located area where the counterweight weight is located (the quasi-target area).
  • Step S613: use the trained classifier to process the extracted quasi-target area to detect the weight of the counterweight. A skeleton of this flow is sketched below.
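  • The skeleton below wires steps S601 to S613 together using the helper functions sketched earlier (cut_roi, enhance_texture, binarize, connected_regions, keep_region, hog_features); the image size, kernel size and gray threshold are illustrative, and the convex-hull splitting of step S609 is omitted for brevity.

```python
import cv2
import numpy as np

def detect_counterweight_weight(frame, clf, gray_value_thred=90):
    """Skeleton of steps S601-S613; gray_value_thred is an illustrative preset."""
    # S601: scale the counterweight image and convert it to grayscale.
    gray = cv2.cvtColor(cv2.resize(frame, (640, 480)), cv2.COLOR_BGR2GRAY)
    # S602: locate the area to be detected via gray-gradient-complexity analysis
    # of the left half of the image (the counterweight is symmetric).
    roi = cut_roi(gray[:, : gray.shape[1] // 2])
    # S603: opening and closing to filter and denoise the area to be detected.
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    roi = cv2.morphologyEx(cv2.morphologyEx(roi, cv2.MORPH_OPEN, se),
                           cv2.MORPH_CLOSE, se)
    # S604-S606: enhance texture only when the area is dark on average.
    if roi.mean() < gray_value_thred:
        roi = enhance_texture(roi)
    # S607: binarization.
    binary = binarize(roi)
    # S608-S611: connected areas, then drop sub-areas whose size or aspect ratio
    # cannot belong to a weight digit (hull-peak splitting omitted here).
    _, regions = connected_regions(binary)
    candidates = [r for r in regions
                  if keep_region(r["area"], r["bbox"][3], r["bbox"][2])]
    # S612-S613: classify each remaining quasi-target area.
    detections = []
    for r in candidates:
        x, y, w, h = r["bbox"]
        if clf.predict([hog_features(roi[y:y + h, x:x + w])])[0] == 1:
            detections.append((x, y, w, h))
    return detections
```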
  • In summary, the present invention creatively obtains the area to be detected, which contains the counterweight weight, based on the structural and color features of the counterweight image; extracts the quasi-target area containing the counterweight weight from the binarized area to be detected; and finally processes the extracted quasi-target area with the pre-trained classifier to detect the weight of the counterweight. The method can quickly lock onto and extract the area where the counterweight weight is located and has good reliability and robustness, thereby realizing automatic identification and high-precision detection of the counterweight weight.
  • the present invention also provides a weight detection system.
  • The detection system may include: an area-to-be-detected acquisition device 70 for obtaining the area to be detected in the image of the counterweight based on the structural and color features of that image; a binarization processing device 71 for binarizing the area to be detected; a quasi-target area extraction device 72 for extracting the quasi-target area from the binarized area to be detected; and a detection device 73 for processing the extracted quasi-target area with a trained classifier to detect the weight of the counterweight.
  • The detection system further includes: a gray mean value calculation device for calculating the average gray value of the area to be detected; and a texture enhancement device for performing image texture enhancement on the area to be detected, before the binarization processing device binarizes it, when the average gray value of the area to be detected is less than the preset average value.
  • The texture enhancement device includes: an arithmetic module configured to perform opening and closing operations on the area to be detected using the first structural element; a first image acquisition module configured to acquire the first image based on the area to be detected and the image after the opening operation; a second image acquisition module configured to acquire the second image based on the area to be detected and the image after the closing operation; and a fused image acquisition module configured to acquire the fused image corresponding to the area to be detected based on the first image and the second image.
  • The fused image acquisition module includes: an edge information entropy calculation unit for calculating the edge information entropy of the first image and of the second image respectively; and a fused image acquisition unit for performing weighted fusion of the first image and the second image according to their edge information entropies to obtain the fused image corresponding to the area to be detected.
  • The detection system further includes: an arithmetic device configured to perform opening and closing operations on the area to be detected using the second structural element, before the gray mean value calculation device calculates the average gray value of the area to be detected, so as to filter out noise.
  • The quasi-target area extraction device includes: a connected area acquisition module for acquiring the connected areas in the binarized area to be detected using image processing; and a quasi-target area extraction module for extracting the quasi-target area based on the location information of the connected areas.
  • The detection system further includes: a segmentation device configured to divide the connected area based on its concave-convex curvature to remove interference points before the quasi-target area extraction module extracts the quasi-target area based on the position information of the connected area; an estimation device for estimating the area and height-to-width ratio of each sub-connected area obtained by dividing the connected area; and a culling device for removing a specific sub-connected area when its area and height-to-width ratio meet any of the following removal conditions: the area of the specific sub-connected area is smaller than the first preset area; the area of the specific sub-connected area is greater than the second preset area; or the height-to-width ratio of the specific sub-connected area is greater than the preset ratio, wherein the first preset area is smaller than the second preset area.
  • The area-to-be-detected acquisition device includes: a partial image acquisition module for acquiring, based on the structural features of the counterweight image, the partial image that contains the area to be detected; and an area-to-be-detected acquisition module for performing row and column cutting of the partial image based on the color features of the acquired partial image, according to the magnitude of the horizontal and vertical gray gradient complexity mutations, to obtain the area to be detected.
  • The area-to-be-detected acquisition module includes: a complexity calculation unit for calculating the horizontal gray gradient complexity and the vertical gray gradient complexity based on the color features of the partial image; a gray gradient complexity mutation extremum acquisition unit for obtaining the maximum and minimum values of the horizontal and vertical gray gradient complexity mutations based on the horizontal and vertical gray gradient complexity respectively; and an area-to-be-detected acquisition unit for cutting the partial image along the columns corresponding to the maximum and minimum values of the horizontal gray gradient complexity mutation and the rows corresponding to the maximum and minimum values of the vertical gray gradient complexity mutation, to obtain the area to be detected.
  • the above process is a detection process for the weight of a single counterweight, but in reality, multiple counterweights are often required to meet engineering needs.
  • two counterweights (as shown in FIG. 8) are mainly taken as an example to describe the process of obtaining the total counterweight weight of the counterweight.
  • The method for obtaining the counterweight weight may include the following steps: step S901, detecting the counterweight weight of the first counterweight according to the above detection method; step S902, detecting the counterweight weight of the second counterweight according to the same detection method; and step S903, obtaining the total counterweight weight based on the counterweight weights of the first counterweight and the second counterweight.
  • The acquisition method may further include: acquiring images of the first counterweight and the second counterweight; and, after the step of detecting the counterweight weight of the first counterweight has been performed and the captured image shows that the second counterweight is installed on the positioning pins A1 and A2 (as shown in Fig. 10), assigning a value of 0 to the pixels of the image of the first counterweight based on the column corresponding to the maximum value of the vertical gradient mutation and the row corresponding to the maximum value of the horizontal gradient mutation of that image.
  • As shown in Fig. 8, the acquisition system may include: the counterweight weight detection system 80, which comprises an image analysis processor 800 and a vehicle-mounted display 801; and a total counterweight weight obtaining device (not shown) for obtaining the total counterweight weight based on the counterweight weight of the first counterweight 1 and the counterweight weight of the second counterweight 2.
  • The acquisition system may further include: a collection device 81 for collecting images of the first counterweight 1 and the second counterweight 2; and an assignment device (not shown) for assigning a value of 0 to the pixels of the image of the first counterweight 1, based on the column corresponding to the maximum value of the vertical gradient mutation and the row corresponding to the maximum value of the horizontal gradient mutation of that image, after the detection system has detected the counterweight weight of the first counterweight 1 and the image collected by the collection device 81 shows that the second counterweight 2 is installed on the positioning pins A1, A2 (as shown in Fig. 10).
  • The collection device 81 collects the video image of the counterweight and transmits it over wireless WiFi to the image analysis processor 800 for real-time detection; the detection result is fed back to the on-board display 801 to inform the operator of the total weight of the mounted counterweight.
  • the on-board display 801 shows that the counterweight is full, and then the counterweight cylinder of the crane is activated to mount the counterweight.
  • Specifically, the collection device 81 may include: a camera 810; and a telescopic control module (not shown) for controlling the extension and/or rotation of the camera so that the camera's field of view covers at least the area where the first counterweight 1 and the second counterweight 2 (not shown) are located.
  • The camera 810 may be a network camera.
  • The camera 810 is installed in a protective shell oriented toward the front of the vehicle.
  • a telescopic control module (not shown) can control the up and down expansion and/or rotation of the camera 810.
  • During detection, the telescopic control module (not shown) can control the camera 810 to extend and tilt upwards so that the video image of the counterweight is collected from the front; after detection is completed, the camera 810 is retracted into the protective shell, realizing both safety protection and effective operation of the camera.
  • The video image of the first counterweight 1 is collected by the collection device 81, and based on that video image the above detection method is used to detect the counterweight weight of the first counterweight 1.
  • Next, the second counterweight 2 is lifted into place. Based on the thresholds T_Horizontal_thred (associated with column j) and T_Vertical_thred (associated with row i) of the image of the first counterweight 1, the pixels of that image are assigned the value 0 as described above, and the above detection method is then used to detect the counterweight weight of the second counterweight 2.
  • Finally, the counterweight weights of the first counterweight 1 and the second counterweight 2 are accumulated to obtain the total counterweight weight.
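  • A sketch of the masking and accumulation steps is given below; the gradient definitions and the assumption that the first counterweight occupies the region up to the strongest-mutation column and row are interpretations of the description, which only names the thresholds T_Horizontal_thred and T_Vertical_thred.

```python
import numpy as np

def mask_first_counterweight(gray):
    """Once the first counterweight weight has been read and the second block is
    seen on the positioning pins, zero out the first block in the image so it is
    not detected again. The boundary column/row are taken at the strongest
    gradient mutations; which side of the boundary is zeroed is an assumption."""
    g = gray.astype(np.float32)
    col_strength = np.abs(np.diff(g, axis=1)).sum(axis=0)   # vertical edges per column
    row_strength = np.abs(np.diff(g, axis=0)).sum(axis=1)   # horizontal edges per row
    j = int(np.argmax(col_strength))
    i = int(np.argmax(row_strength))
    masked = gray.copy()
    masked[: i + 1, : j + 1] = 0    # assume the first block sits in the upper-left part
    return masked

def total_counterweight_weight(weights):
    """Step S903: accumulate the per-block weights, e.g. [8.0, 8.0] -> 16.0 t."""
    return sum(weights)
```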
  • the above-mentioned method for obtaining the weight of the counterweight based on machine vision involves relatively low computational complexity and can achieve an effective and high-precision counterweight recognition effect.
  • the present invention is not limited to obtaining the counterweight weight of two counterweights, and the process of obtaining the counterweight weight of any other multiple counterweights is similar to the above-mentioned process, and will not be repeated here.
  • the present invention creatively detects the weight of the first and second counterweights through the above-mentioned method for detecting the weight of the counterweight, and obtains the total counterweight weight of the counterweight based on the weight of the first and second counterweights. Therefore, the total weight of the counterweight can be effectively identified with high accuracy, so that the automatic identification of the total counterweight can be realized in the process of assembling the counterweight.
  • the present invention also provides a crane, which is configured with the aforementioned system for obtaining the weight of the counterweight.
  • the present invention is not limited to the above crane, and is also applicable to any other construction machinery that requires counterweight and needs to obtain the weight of the counterweight.
  • the present invention also provides a machine-readable storage medium having instructions stored on the machine-readable storage medium for causing a machine to execute the aforementioned method for detecting the weight of a counterweight or the aforementioned method for obtaining the weight of a counterweight .
  • The machine-readable storage medium includes, but is not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, and any other medium that can store program code.
  • PRAM: phase change random access memory
  • SRAM: static random access memory
  • DRAM: dynamic random access memory
  • RAM: random access memory
  • ROM: read-only memory
  • EEPROM: electrically erasable programmable read-only memory
  • flash memory or other memory technology
  • CD-ROM: compact disc read-only memory
  • DVD: digital versatile disc

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The present invention, which belongs to the field of counterweight identification, relates to a counterweight weight detection method and system, an acquisition method and system, and a crane. The counterweight weight detection method comprises: acquiring an area to be detected in an image of a counterweight block on the basis of a structural feature and a color feature in the image of the counterweight block (S101); performing binarization processing on said area (S102); extracting a quasi-target area in said area on the basis of the binarized area to be detected (S103); and processing the extracted quasi-target area using a trained classifier in order to detect the counterweight weight of the counterweight block (S104). The detection method can quickly lock onto and extract the area where the counterweight weight is located and has good reliability and robustness, thereby enabling automatic identification and high-precision detection of the counterweight weight, and further enabling automatic identification of the total counterweight weight during counterweight assembly.
PCT/CN2020/100176 2019-07-04 2020-07-03 Procédé et système de détection de poids de contrepoids, procédé et système d'acquisition, et grue WO2021000948A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910598067.2 2019-07-04
CN201910598067.2A CN110956180B (zh) 2019-07-04 2019-07-04 配重重量的检测方法与系统、获取方法与系统及起重机

Publications (1)

Publication Number Publication Date
WO2021000948A1 true WO2021000948A1 (fr) 2021-01-07

Family

ID=69976153

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/100176 WO2021000948A1 (fr) 2019-07-04 2020-07-03 Procédé et système de détection de poids de contrepoids, procédé et système d'acquisition, et grue

Country Status (2)

Country Link
CN (1) CN110956180B (fr)
WO (1) WO2021000948A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956180B (zh) * 2019-07-04 2021-04-13 中联重科股份有限公司 配重重量的检测方法与系统、获取方法与系统及起重机
CN111860166B (zh) * 2020-06-18 2024-07-12 浙江大华技术股份有限公司 图像检测的方法、装置、计算机设备和存储介质
CN112191055B (zh) * 2020-09-29 2021-12-31 武穴市东南矿业有限公司 一种矿山机械用具有空气检测结构的降尘装置
CN113901600B (zh) * 2021-09-13 2023-06-02 杭州大杰智能传动科技有限公司 智能塔吊起升负载平衡性的自动监测控制方法和系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789080A (zh) * 2010-01-21 2010-07-28 上海交通大学 车辆车牌实时定位字符分割的检测方法
US20160253550A1 (en) * 2013-11-11 2016-09-01 Beijing Techshino Technology Co., Ltd. Eye location method and device
CN109871938A (zh) * 2019-01-21 2019-06-11 重庆大学 一种基于卷积神经网络的零部件喷码检测方法
CN110956180A (zh) * 2019-07-04 2020-04-03 中联重科股份有限公司 配重重量的检测方法与系统、获取方法与系统及起重机

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101092133B1 (ko) * 2009-11-27 2011-12-12 동명대학교산학협력단 컨테이너 성량 및 거리 검출 방법
CN103613015B (zh) * 2013-11-26 2015-08-26 中联重科股份有限公司 安全吊载控制方法、装置、系统及起重机
CN104299002B (zh) * 2014-10-11 2017-06-23 嘉兴学院 一种基于监控系统的塔吊图像检测方法
CN107066933B (zh) * 2017-01-25 2020-06-05 武汉极目智能技术有限公司 一种道路标牌识别方法及系统
CN109785240B (zh) * 2017-11-13 2021-05-25 中国移动通信有限公司研究院 一种低照度图像增强方法、装置及图像处理设备
CN109816641B (zh) * 2019-01-08 2021-05-14 西安电子科技大学 基于多尺度形态学融合的加权局部熵红外小目标检测方法
CN109934887B (zh) * 2019-03-11 2023-05-30 吉林大学 一种基于改进的脉冲耦合神经网络的医学图像融合方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789080A (zh) * 2010-01-21 2010-07-28 上海交通大学 车辆车牌实时定位字符分割的检测方法
US20160253550A1 (en) * 2013-11-11 2016-09-01 Beijing Techshino Technology Co., Ltd. Eye location method and device
CN109871938A (zh) * 2019-01-21 2019-06-11 重庆大学 一种基于卷积神经网络的零部件喷码检测方法
CN110956180A (zh) * 2019-07-04 2020-04-03 中联重科股份有限公司 配重重量的检测方法与系统、获取方法与系统及起重机

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUANG, WENWU: "Research Design on Embedded Automatic Weight Identification System for Crane", MECHANICAL AND ELECTRICAL INFORMATION, no. 18, 30 June 2016 (2016-06-30), XP055772596, DOI: 10.19514/j.cnki.cn32-1628/tm.2016.18.024 *
LIAN-YAN CUI, LIN XU, SHU-SHENG GU: "Eyes Location Based on the Calculation of the Complexity Extent and the Best Threshold Value", CONTROL ENGINEERING OF CHINA, vol. 15, no. 1, 31 January 2008 (2008-01-31), XP055772599 *
LIU, KUN, WEIDONG LIU, A. XUHAI K: "Detection Algorithm for Infrared Dim Small Targets Based on Weighted Fusion Feature and Ostu Segmentation", COMPUTER ENGINEERING, vol. 43, no. 7, 31 July 2017 (2017-07-31), XP055772600 *

Also Published As

Publication number Publication date
CN110956180B (zh) 2021-04-13
CN110956180A (zh) 2020-04-03

Similar Documents

Publication Publication Date Title
WO2021000948A1 (fr) Procédé et système de détection de poids de contrepoids, procédé et système d'acquisition, et grue
CN107463918B (zh) 基于激光点云与影像数据融合的车道线提取方法
CN108629775B (zh) 一种热态高速线材表面图像处理方法
CN108759973B (zh) 一种水位测量方法
US10620005B2 (en) Building height calculation method, device, and storage medium
CN106934795B (zh) 一种混凝土桥梁裂缝的自动检测方法和预测方法
CN111179243A (zh) 一种基于计算机视觉的小尺寸芯片裂纹检测方法及系统
WO2012169088A1 (fr) Appareil de traitement d'image, procédé de traitement d'image et système de traitement d'image
CN108229475B (zh) 车辆跟踪方法、系统、计算机设备及可读存储介质
CN113109368B (zh) 玻璃裂纹检测方法、装置、设备及介质
CN105718872B (zh) 两侧车道快速定位及检测车辆偏转角度的辅助方法及系统
CN109815822B (zh) 基于广义Hough变换的巡检图零部件目标识别方法
CN112686858A (zh) 一种手机充电器视觉缺陷检测方法、装置、介质及设备
JP2008286725A (ja) 人物検出装置および方法
CN105718931B (zh) 用于确定采集图像中的杂斑的系统和方法
CN111354047B (zh) 一种基于计算机视觉的摄像模组定位方法及系统
US20120207379A1 (en) Image Inspection Apparatus, Image Inspection Method, And Computer Program
CN108802051B (zh) 一种柔性ic基板直线线路气泡及折痕缺陷检测系统及方法
KR20180098945A (ko) 고정형 단일 카메라를 이용한 차량 속도 감지 방법 및 장치
CN111242888A (zh) 一种基于机器视觉的图像处理方法及系统
CN115861274A (zh) 一种融合三维点云与二维图像的裂缝检测方法
KR102242996B1 (ko) 자동차 사출제품의 비정형 불량 검출 방법
CN114049316A (zh) 一种基于金属光泽区域的钢丝绳缺陷检测方法
CN114359251A (zh) 一种混凝土表面破损的自动识别方法
JP6221283B2 (ja) 画像処理装置、画像処理方法および画像処理プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20834450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20834450

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.06.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20834450

Country of ref document: EP

Kind code of ref document: A1