CN115272302B - Method, equipment and system for detecting parts in image - Google Patents


Info

Publication number
CN115272302B
CN115272302B
Authority
CN
China
Prior art keywords
image
identification
historical
parts
comparison
Prior art date
Legal status
Active
Application number
CN202211161352.6A
Other languages
Chinese (zh)
Other versions
CN115272302A (en)
Inventor
蔡加付
邓成呈
张猛
蔡晓东
冯垚伦
马灵涛
Current Assignee
Hangzhou Shenhao Technology Co Ltd
Original Assignee
Hangzhou Shenhao Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Shenhao Technology Co Ltd
Priority to CN202211161352.6A
Publication of CN115272302A
Application granted
Publication of CN115272302B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0004 - Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06V 10/245 - Image preprocessing; aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
    • G06V 10/25 - Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/757 - Image or video pattern matching; matching configurations of points or features
    • G06T 2207/20104 - Interactive definition of region of interest [ROI]
    • G06T 2207/30164 - Subject of image: workpiece; machine component

Abstract

The application provides a method, a device, and a system for detecting parts in an image, in the field of image detection. The method comprises: acquiring an identification image, the identification image being the result of identifying parts in a captured image; performing deduplication on the identification image based on a set of historical identification images to obtain a deduplication result, the deduplication removing parts that appear repeatedly in the identification image; and obtaining a detection result based on the deduplication result. The method avoids repeated detection and double counting of the same parts, improving both detection efficiency and the accuracy of part statistics.

Description

Method, equipment and system for detecting parts in image
Technical Field
The present disclosure relates to the field of image detection, and in particular to a method for detecting parts in an image, a part detection device, a part detection system, and a computer-readable storage medium.
Background
When inspecting railway track, an inspection robot is typically used to detect railway parts and to confirm that the parts are complete in number and in good condition.
At present, part detection with an inspection robot works by capturing images that include the parts and detecting the parts in those images. However, because the robot captures images at fixed navigation waypoints, adjacent images overlap, so the same part can be detected repeatedly, which degrades both the efficiency of detecting parts in the images and the accuracy of part statistics.
Disclosure of Invention
In view of the above, the present application aims to provide a method for detecting parts in an image, a part detection device, a part detection system, and a computer-readable storage medium, so as to improve the efficiency of detecting railway track parts and the accuracy of part statistics.
In a first aspect, an embodiment of the present application provides a method for detecting parts in an image, comprising: acquiring an identification image, the identification image being the result of identifying parts in a captured image; performing deduplication on the identification image based on a historical identification image set to obtain a deduplication result, the deduplication removing parts that appear repeatedly in the identification image; and obtaining a detection result based on the deduplication result.
In this embodiment, after the identification image of the captured image is obtained, deduplication removes any parts in the identification image that also appear in the historical identification image set. Subsequent processing can then be performed on the deduplication result to obtain the detection result, avoiding the inaccuracy, repeated detection, and double counting that duplicated parts would otherwise cause, which effectively improves both detection efficiency and the accuracy of part statistics.
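As a minimal illustration of the three steps of the first aspect (not the patented method itself, which compares images geometrically, as described below), parts can be reduced to hypothetical global labels:

```python
def detect_parts(identification, history):
    """First-aspect sketch: take an identification result, remove parts
    already present in the history, and return the detection result.
    Parts are reduced to hypothetical global labels; the real method
    compares images, not labels."""
    deduped = [p for p in identification if p not in history]  # deduplication
    history.update(identification)  # this image's parts join the historical set
    return deduped  # detection result based on the deduplication result

seen = set()
first = detect_parts(["bolt_1", "clip_2"], seen)   # first image: nothing to remove
second = detect_parts(["clip_2", "bolt_3"], seen)  # clip_2 is a repeat, removed
```

Note that when the history is empty (the device has just started), the identification result passes through unchanged, matching the behaviour described for step S221 later in the description.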
In one embodiment, the historical identification image set contains historical identification images, and performing deduplication on the identification image based on that set comprises: determining one or more comparison images from the historical identification image set; comparing each comparison image with the identification image to determine the repeated parts in the identification image; and removing the repeated parts from the identification image to obtain the deduplication result.
In this embodiment, because the comparison images are selected from the historical identification image set, the identification image does not have to be compared against every historical identification image. This effectively speeds up the image comparison that determines repeated parts, and thus part detection overall.
In one embodiment, determining a comparison image based on the historical identification image set comprises: updating the historical identification image set based on a preset constraint threshold, and determining the comparison image from the updated set.
In this embodiment, bounding the historical identification image set by the constraint threshold prevents historical images from being stored without limit, reducing the time and computing resources needed to determine the comparison images and effectively improving detection efficiency.
In one embodiment, determining a comparison image based on the historical identification image set comprises: determining the comparison image from the set based on image capture position and image features. The image capture position includes the position of the image capture device when each historical identification image was captured and its position when the current image was captured; the image features include the features of each image in the set and the features of the captured image.
In this embodiment, a comparison image selected by capture position is strongly related to the identification image: the closer the two capture positions, the more likely the two images contain repeated parts. Selecting by image features further ensures the comparison image is relevant to the identification image. A comparison image chosen by combining capture position and image features is therefore highly likely to be relevant, avoiding comparisons against unrelated images and effectively improving detection efficiency.
In one embodiment, determining the comparison image from the historical identification image set based on capture position and image features comprises, for each historical identification image in the set: when the distance between the capture position of the historical identification image and the capture position of the current image is smaller than a first distance threshold, and the image features of the historical identification image match those of the identification image, taking that historical identification image as a comparison image.
In this embodiment, selecting as comparison images only those historical identification images whose capture position lies within the first distance threshold and whose image features match those of the identification image ensures strong relevance between comparison image and identification image, effectively improving deduplication efficiency and the accuracy of repeated-part detection.
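This selection rule can be sketched as follows, under the assumption of 2-D capture positions and set-based feature "fingerprints" standing in for real descriptor matching; the function names and the 2.0-unit default threshold are illustrative, not from the patent:

```python
import math

def select_comparison_images(history, capture_pos, features,
                             first_distance_threshold=2.0):
    """Pick historical images whose capture position is within the first
    distance threshold AND whose features match the current image's."""
    def feature_match(a, b):
        # Hypothetical stand-in: real systems match SURF descriptors;
        # here we just require some overlap between feature sets.
        return len(set(a) & set(b)) > 0

    chosen = []
    for entry in history:  # entry: {"pos": (x, y), "features": set}
        distance = math.dist(entry["pos"], capture_pos)
        if distance < first_distance_threshold and feature_match(entry["features"], features):
            chosen.append(entry)
    return chosen
```

In use, an image captured 50 units away is skipped even if its features happen to match, because distant captures cannot overlap with the current one.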
In one embodiment, comparing the comparison image with the identification image to determine the repeated parts comprises: computing perspective transformation matrices for the comparison image and the identification image to bring the target boxes of the parts in both images into the same coordinate system; matching the target boxes of the comparison image with those of the identification image to obtain matched target-box pairs; and, when the intersection-over-union of a matched target-box pair exceeds a preset intersection threshold, determining that the parts in that pair are repeated parts.
In this embodiment, transforming the identification image and the comparison image into the same coordinate system allows parts from different images to be compared directly. When the intersection-over-union of a matched target-box pair exceeds the preset intersection threshold, the parts in that pair can, with reasonable confidence, be taken to be the same part, so the pair is marked as repeated. This determines repeated parts efficiently while keeping the duplicate judgment reasonably accurate.
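The target-box comparison can be sketched as below. Estimating the perspective (homography) matrix itself, e.g. from matched feature points with RANSAC, is outside this sketch: `warp_point` only shows how a known 3x3 matrix maps points into the shared coordinate system, and the 0.5 IoU default is an illustrative stand-in for the preset intersection threshold:

```python
def warp_point(H, x, y):
    """Apply a 3x3 perspective (homography) matrix to a point."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def duplicate_boxes(current, comparison, iou_threshold=0.5):
    """Boxes in `current` overlapping a comparison box above the threshold
    are treated as repeated parts (both sets already share one frame)."""
    return [box for box in current
            if any(iou(box, ref) > iou_threshold for ref in comparison)]
```

For example, a box at (0, 0, 10, 10) and a comparison box at (1, 1, 11, 11) overlap with IoU of about 0.68, above the 0.5 threshold, so the part is flagged as repeated; a box at (20, 20, 30, 30) is not.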
In one embodiment, before acquiring the identification image, the method further comprises: capturing a positioning image with a binocular camera, and determining the image capture position from the positioning image and the binocular camera.
In this embodiment, determining the image capture position with a binocular camera avoids position drift caused by accumulated navigation error, improving the accuracy of deduplication and thus of part statistics.
In one embodiment, after determining the image capture position from the positioning image and the binocular camera, the method further comprises: calculating a movement distance based on the positioning image and a preset reference object; and, when the difference between the movement distance and a reference distance exceeds a second distance threshold, adjusting the navigation position based on the image capture position. The reference distance is the distance from the starting point to the preset reference object.
In this embodiment, computing the movement distance from the preset reference object and the positioning image makes it possible to detect navigation drift from the difference between the movement distance and the reference distance, so the navigation position can be corrected, accumulated error avoided, and the capture position obtained accurately.
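The drift check itself is a simple comparison; this sketch assumes distances in metres and an illustrative 0.5 m second distance threshold (the patent does not give concrete values):

```python
def navigation_drifted(measured_distance, reference_distance,
                       second_distance_threshold=0.5):
    """Compare the visually measured travel distance (from the positioning
    image and the preset reference object) against the known
    start-to-reference distance. A gap larger than the second distance
    threshold indicates accumulated odometry error, so the navigation
    position should be re-anchored to the image capture position."""
    return abs(measured_distance - reference_distance) > second_distance_threshold
```

A measured distance of 10.8 m against a 10.0 m reference distance exceeds the 0.5 m threshold and triggers an adjustment; 10.2 m does not.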
In a second aspect, an embodiment of the present application provides a part detection device, comprising: an image capture device for acquiring a captured image; and a processor for acquiring an identification image of the captured image, the identification image being the result of identifying parts in the captured image. The processor is further configured to perform deduplication on the identification image based on a historical identification image set to obtain a deduplication result, the deduplication removing parts that appear repeatedly in the captured image, and to obtain a detection result based on the deduplication result.
In a third aspect, an embodiment of the present application provides a part detection system, comprising: a mobile capture device for acquiring captured images while moving; and a cloud server, connected to the mobile capture device, for performing the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the method according to any one of the first aspect.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. The following drawings illustrate only some embodiments of the present application and should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a block diagram of a component detection apparatus according to an embodiment of the present application;
fig. 2 is a flowchart of a method for detecting a component in an image according to an embodiment of the present disclosure;
FIG. 3 is a difference diagram of a captured image and an identification image according to an embodiment of the present application;
FIG. 4 is a flowchart of a deduplication process provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of target box matching according to an embodiment of the present application;
fig. 6 is a block diagram of a part detection system according to an embodiment of the present application.
Reference numerals: part detection device 100; image capture device 110; processor 120; mobile capture device 310; cloud server 320.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a diagram illustrating a part detection device according to an embodiment of the present disclosure. The part detection device 100 includes an image capture device 110 and a processor 120.
The image capture device 110 acquires a captured image.
In this embodiment, the image capture device 110 includes at least a binocular camera and a capture camera for photographing the parts. The binocular camera captures positioning images used to locate the current position of the part detection device. The capture camera photographs the railway track so that the part detection device 100 can detect parts from those images; it may be any of various camera types, and the type of camera is not limited here.
The processor 120 identifies the captured image to obtain an identification image.
In this embodiment, the processor may run a preset recognition program to identify the captured image and determine all the parts it contains, thereby obtaining the identification image.
The processor 120 further performs deduplication on the identification image based on the historical identification image set to obtain a detection result; the deduplication removes parts that appear repeatedly in the identification image.
In this embodiment, a part appearing repeatedly in the captured image is one that also appears among the parts in the historical identification image set.
In one embodiment, the component detecting apparatus 100 further includes a moving part and a navigation device.
The moving part may be a movable structure such as a slide rail or wheels. With it, the part detection device can travel along the railway track so that the image capture device can photograph the entire line segment.
The navigation device locates the part detection device to determine its current position.
Together, the moving part and the navigation device allow the part detection device to move to the preset shooting waypoints and capture images there.
Next, the method by which the processor 120 detects parts in the image is described in detail.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for detecting a component in an image according to an embodiment of the present disclosure. The method for detecting the parts in the image comprises the following steps:
s210, acquiring an identification image.
The captured image may be taken while the part detection device 100 moves along a preset line, i.e., a track line on which part detection is required; the image may include the track and the facilities around it.
In some embodiments, the part detection device 100 captures an image at each preset camera waypoint. Images taken at two adjacent waypoints usually overlap, which ensures that no section of the preset line goes unphotographed.
Referring to fig. 3, fig. 3 is a diagram illustrating a difference between a captured image and an identification image according to an alternative embodiment of the present application.
The identification image is the result of identifying the parts in the captured image. Identification locates the parts present in the captured image, for example by drawing a bounding box around each part; in some embodiments, the position of a part may instead be recorded with a marker or coordinates. Some identification methods also recognize the type and state of each part. The identification image may therefore include the parts present in the captured image together with their types, states, and so on.
In one embodiment, acquiring an identification image of a captured image comprises: inputting the captured image into a pre-trained recognition model, which outputs the identification image of the captured image.
In this embodiment, the pre-trained recognition model may be obtained as follows: build a recognition model on a deep-learning object detection algorithm; annotate an existing part data set; train the model on a training image set labelled with parts and their types and states so that it learns both abstract and concrete part features; and consider training complete when the accuracy of the model's output reaches a preset threshold. Deep-learning object detection algorithms are known in the prior art and are not described further here. Using the pre-trained model to identify parts in captured images effectively improves both identification efficiency and accuracy.
And S220, carrying out duplicate removal processing on the identification image based on the historical identification image set to obtain a duplicate removal result.
In this embodiment, the deduplication process removes parts that appear repeatedly in the identification image. Here, a repeated part is one that appears both in the identification image and in an image of the historical identification image set; that is, it is duplicated because the images captured by the part detection device 100 overlap.
For example, a part serving as a reference object may appear on the right of the image captured at a first moment; as the part detection device 100 travels right, it may appear in the middle of the image captured at a second moment, and on the left of the image captured at a third moment. If every appearance in every captured image is counted, the reference object is counted 3 times even though only 1 actually exists, so the identification image must be deduplicated to obtain an accurate part count. It should be understood that this description is only an example and should not be taken as limiting the present application.
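The counting problem in this example can be shown in two lines, with hypothetical global part labels standing in for deduplicated detections:

```python
# Hypothetical sightings of one reference object across three frames:
# (frame index, global part label). Labels are what deduplication yields.
sightings = [(1, "ref_obj"), (2, "ref_obj"), (3, "ref_obj")]

naive_count = len(sightings)                      # counts every appearance
deduped_count = len({label for _, label in sightings})  # counts actual parts
```

The naive count is 3, while the deduplicated count is 1, matching the example above.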
In this embodiment, the historical identification image set is the set of identification images corresponding to the captured images taken before the current one while the part detection device 100 inspects the preset line. After each captured image is processed, its identification image is stored in the set; an identification image in the set is a historical identification image. For example, when the 5th image is captured, it is the current captured image and its identification image is the current identification image, while the identification images of the 1st through 4th captured images are historical identification images and together form the historical identification image set.
In some embodiments, the historical identification image set also stores the image features of each historical identification image, including feature points, descriptors, part identification results, and so on; the feature points and descriptors describe features in the image. After an identification image is acquired, it may be downsampled by a factor of 2, after which its feature points and descriptors are computed with the SURF operator and stored in the set. The 2x downsampling reduces the image resolution and thus effectively speeds up feature-point and descriptor extraction.
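The SURF step would use an image library (e.g. OpenCV's contrib module); the sketch below shows only the 2x downsampling on a nested-list "image", using nearest-neighbour sampling as an assumed (not patent-specified) downsampling scheme:

```python
def downsample_2x(image):
    """Keep every second row and every second column, a quarter of the
    pixels, so downstream feature extraction runs on a smaller image."""
    return [row[::2] for row in image[::2]]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
small = downsample_2x(img)  # 4x4 image reduced to 2x2
```

Real pipelines would operate on arrays rather than nested lists, but the effect on resolution is the same.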
In some embodiments, the set of historical identification images may include historical detection data including historical identification images and historical feature data including image features corresponding to the historical identification images.
Referring to fig. 4, fig. 4 is a flowchart of an alternative deduplication process according to an embodiment of the present disclosure. The deduplication process comprises the following steps:
and S221, judging whether the identification image needs to be subjected to duplicate removal processing.
S222, determining a comparison image based on the historical identification image set.
And S223, comparing the comparison image with the identification image, and determining repeated parts in the identification image.
And S224, traversing all the historical identification images to detect repeated parts.
Next, the entire process of the deduplication processing will be explained.
And S221, judging whether the identified image needs to be subjected to deduplication processing.
In one embodiment, before performing deduplication on the identification image based on the historical identification image set, it is first determined whether the identification image needs deduplication at all.
In this embodiment, when the historical identification image set is empty, the identification image is used directly as the detection result, without deduplication. When the part detection device 100 has just started working, the first captured images cannot contain repeated parts, so skipping deduplication for them saves processing resources.
In one embodiment, the deduplication process may include: determining one or more comparison images from the historical identification image set; comparing each comparison image with the identification image to determine the repeated parts in the identification image; and removing the repeated parts from the identification image to obtain the detection result.
In this embodiment, the historical identification image set contains many historical identification images, some of which cannot contain repeated parts; for example, when the captured images corresponding to two historical identification images were taken at positions far apart, the image capture device 110 cannot have captured the same features. The part detection device's computing resources are also limited. Comparison images are therefore selected from the historical identification image set, and repeated parts in the identification image are determined using only those comparison images, which effectively improves deduplication efficiency and avoids wasting computing resources.
In an alternative embodiment of the present application, the historical identification image set is updated based on a preset constraint threshold, and the comparison image is determined from the updated set.
In this embodiment, the constraint threshold may be set according to the performance of the part detection device 100, for example according to the processor's working memory or the capacity of the storage device in the part detection device 100.
In some embodiments, when the constraint threshold is set according to the size of the storage space, updating the historical identification image set based on the preset constraint threshold comprises: when the data size of the set exceeds the space constraint threshold, removing the data of the first historical identification image in the set.
In this embodiment, the data of the first history identification image in the history identification image set refers to the data of the history identification image stored first in the history identification image set, and includes the identification image and the image feature of the identification image (the image feature of the identification image may include the feature point and/or the descriptor of the identification image).
In this embodiment, upon determining that the data size of the historical identification image set is greater than the space constraint threshold, the data of the earliest historical identification image in the set is removed. Illustratively, if the space constraint threshold only allows the set to hold the data of 3 historical identification images, then the 2nd historical identification image is removed from the set when the 5th identification image undergoes deduplication. This prevents the historical identification images from exhausting memory, avoids repeatedly comparing against clearly irrelevant images, and effectively improves computational efficiency.
For example, when limited by the running memory of the processor, the historical identification image set can store only three historical identification images. While the set holds three or fewer images, all of them are retained. When the 4th historical identification image is acquired, the 1st historical identification image is deleted from the set and the 4th is stored; similarly, when the 5th historical identification image is acquired, the 2nd is deleted. It should be understood that "1st image", "2nd image", and so on refer to the acquisition order of the historical identification images, not their arrangement within the set.
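The bounded, first-in-first-out history set described above can be sketched as follows. This is a minimal illustration only — the capacity of 3 and the dictionary entry layout are assumptions for the example, not the patent's implementation:

```python
from collections import deque

# Illustrative capacity standing in for the preset space constraint threshold.
HISTORY_CAPACITY = 3

# A deque with maxlen evicts the oldest entry automatically when full.
history_set = deque(maxlen=HISTORY_CAPACITY)

def store_identification_image(entry):
    """Store an identification image together with its features;
    the earliest-stored entry is dropped once capacity is exceeded."""
    history_set.append(entry)

# Simulate acquiring identification images 1..5 in order.
for i in range(1, 6):
    store_identification_image({"id": i, "features": None})

# Images 1 and 2 (the earliest acquired) have been evicted.
print([e["id"] for e in history_set])  # → [3, 4, 5]
```

This matches the behaviour described above: storing the 4th image evicts the 1st, storing the 5th evicts the 2nd.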
It can be understood that the historical identification image set may also be left un-updated, with the comparison images determined from all historical identification images; this can be configured reasonably according to requirements.
S222, determining a comparison image based on the historical recognition image set.
In one embodiment, determining a comparison image based on the updated historical identification image set comprises: determining a comparison image from the historical identification image set based on image acquisition positions and image features. The image acquisition positions include the position of the image acquisition device when each historical identification image in the set was acquired and its position when the current acquired image was captured; the image features include the features of each historical identification image in the set and the features of the identification image.
In this embodiment, the historical identification image set includes multiple historical identification images, some of which may be unrelated to the current identification image and thus clearly cannot share repeated parts. A comparison image for comparison with the current identification image can therefore be determined from the updated historical identification image set. There may be one or more comparison images.
For ease of understanding, the comparison image, the historical image set, and the updated historical image set will be described herein.
Illustratively, when the 6th acquired image is captured, the identification images corresponding to the 1st to 5th acquired images are all historical identification images and together form the historical image set.
Because the performance of the component detection apparatus 100 limits the number of storable historical identification images to 3, the 3rd to 5th historical identification images together constitute the updated historical image set.
During comparison, one of the identification images corresponding to the 3rd to 5th acquired images is determined to be the comparison image, for example the 3rd historical identification image. After the 3rd historical identification image has been compared, the 4th and/or 5th historical identification images may in turn serve as comparison images.
It should be understood that the above description is only exemplary and should not be taken as limiting the present application.
In this embodiment, the image acquisition position is the position of the image acquisition device when the acquired image was captured. When the component detection apparatus 100 captures an image, it can simultaneously record its own position; that is, every image used for component detection has a corresponding image acquisition position. Since the component detection apparatus 100 carries the capturing camera, the position of the apparatus serves as the position of the image acquisition device 110. In some embodiments, a more accurate position of the image acquisition device 110 can be derived from the positional relationship of the capturing camera within the apparatus. When the component detection apparatus 100 shoots at a preset photo point, that preset point can be used as the position of the image acquisition device.
In one embodiment, the process of determining a contrast image from a historical recognition image set based on image acquisition location and image features may comprise: for any history identification image in the history identification image set: and when the distance between the image acquisition position of the historical recognition image and the image acquisition position of the acquired image is smaller than a first distance threshold value and the image characteristics of the historical recognition image are matched with the image characteristics of the recognition image, determining the historical recognition image as a comparison image.
In this embodiment, the first distance threshold may be calculated from the maximum working field of view of the capturing camera used to capture the part images. In some embodiments the camera is mounted on a mechanical arm, and the first distance threshold may be calculated from the arm's farthest horizontal reach. The distance between the image acquisition position of the historical identification image and that of the acquired image is computed and compared with the preset first distance threshold: when the distance is smaller than the threshold, the identification image corresponding to the acquired image and the historical identification image may contain repeated parts; conversely, when the distance is greater than the threshold, they are unlikely to. Intuitively, the component detection apparatus 100 travels in one direction while the image acquisition device 110 captures images; since the field of view of the capturing camera is fixed, the closer two acquisition positions are, the more likely the corresponding images overlap.
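The distance check above can be sketched as follows, assuming planar acquisition positions; the threshold value here is illustrative, standing in for one derived from the camera's maximum working field of view:

```python
import math

# Illustrative first distance threshold; not a value from the patent.
FIRST_DISTANCE_THRESHOLD = 2.0

def may_share_parts(pos_history, pos_current, threshold=FIRST_DISTANCE_THRESHOLD):
    """True when two image acquisition positions are closer than the
    threshold, i.e. the two fields of view may overlap."""
    return math.dist(pos_history, pos_current) < threshold

print(may_share_parts((0.0, 0.0), (1.5, 0.0)))  # True:  1.5 < 2.0
print(may_share_parts((0.0, 0.0), (3.0, 0.0)))  # False: 3.0 >= 2.0
```

Only historical identification images passing this check proceed to the feature-matching step.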
In this embodiment, the above determination may be performed on all the history recognition images in the history recognition image set to determine all the comparison images. For example, when the above determination is performed on all the history recognition images in the history recognition image set, one history recognition image may be sequentially acquired from the history recognition image set for determination until all the history recognition images in the history recognition image set are determined.
In this embodiment, when the distance between the image acquisition position of the historical identification image and that of the acquired image is determined to be smaller than the first distance threshold, the historical identification image is matched against the identification image to determine whether a repeated region or part may exist. The feature points and descriptors of the two images may be matched; when they match, the historical identification image is determined to be a comparison image. Otherwise, the next historical identification image is taken from the set for distance judgment and feature matching.
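The feature-matching half of this check can be sketched as follows, assuming binary descriptors compared by Hamming distance; the greedy matching strategy and the thresholds are illustrative assumptions, not the patent's implementation:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def images_match(descs_a, descs_b, max_dist=2, min_matches=2):
    """Greedy nearest-neighbour matching: each descriptor of image A claims
    its closest unused descriptor of image B; the images 'match' when
    enough pairs fall under the distance threshold."""
    matched = 0
    used = set()
    for da in descs_a:
        best_dist, best_j = None, None
        for j, db in enumerate(descs_b):
            if j in used:
                continue
            d = hamming(da, db)
            if best_dist is None or d < best_dist:
                best_dist, best_j = d, j
        if best_dist is not None and best_dist <= max_dist:
            used.add(best_j)
            matched += 1
    return matched >= min_matches

descs_a = [[0, 1, 1, 0, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1, 0, 0]]
descs_b = [[0, 1, 1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 1, 1, 0, 1], [0] * 8]
print(images_match(descs_a, descs_b))  # True: both descriptors match at distance 1
```

In practice binary descriptors such as ORB's are matched this way; a library matcher would replace the hand-rolled loop.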
S223, comparing the comparison image with the identification image, and determining repeated parts in the identification image.
In one embodiment, after the comparison image is determined, the comparison image is compared with the identification image to determine the repeated parts in the identification image. The comparison process comprises the following steps: respectively calculating perspective transformation matrixes of the contrast image and the identification image to obtain target frames of parts in the respective images of the contrast image and the identification image; matching the target frame of the comparison image with the target frame of the identification image under the same coordinate system to obtain a matching target frame pair; and when the intersection ratio of the matching target frame pair is larger than a preset intersection threshold value, determining the parts in the matching target frame as repeated parts.
Referring to fig. 5, fig. 5 is a schematic diagram of target box matching according to an embodiment of the present disclosure.
In this embodiment, the target frames of the components may be converted to the same coordinate system using the perspective transformation matrix, so that the components of different images may be matched, and whether the components are duplicated may be determined using the target frames of the components.
First, the perspective transformation matrices of the comparison image and the identification image are calculated respectively, from which the target frames of the parts in each image are obtained. An image may contain multiple target frames; the number of target frames corresponds to the number of parts in the image.
Then, when the target frame of the comparison image is matched with the target frame of the identification image, the coordinates of the comparison image and the coordinates of the identification image may be unified and then matched, for example, the coordinates of the identification image may be converted into the coordinates of the comparison image, or the coordinates of the comparison image may be converted into the coordinates of the identification image, so that the parts of different images may be matched in the same coordinate system to determine the matching target frame pair.
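Unifying coordinates amounts to mapping box corners through a 3x3 perspective (homography) matrix; a minimal sketch follows, where the matrix is a pure translation chosen purely for illustration:

```python
def apply_perspective(H, point):
    """Map a 2-D point through a 3x3 perspective transformation matrix
    (promote to homogeneous coordinates, then divide by w)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Illustrative matrix: a pure translation by (20, 0) pixels, as if the
# comparison image were shifted 20 px relative to the identification image.
H = [[1.0, 0.0, 20.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

print(apply_perspective(H, (5.0, 5.0)))  # → (25.0, 5.0)
```

Mapping all four corners of a target frame this way places both images' frames in one coordinate system for matching.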
Finally, the intersection-over-union of the matched target frame pair is calculated and compared with the preset intersection threshold; if the ratio is greater than the threshold, the parts in the matched target frame pair are repeated.
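The intersection-over-union comparison can be sketched as follows; the axis-aligned box format and the 0.5 threshold are illustrative assumptions:

```python
IOU_THRESHOLD = 0.5  # illustrative preset intersection threshold

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_repeated_part(box_a, box_b):
    """A matched target frame pair marks a repeated part when IoU exceeds the threshold."""
    return iou(box_a, box_b) > IOU_THRESHOLD

print(is_repeated_part((0, 0, 10, 10), (1, 0, 11, 10)))  # True  (IoU ≈ 0.82)
print(is_repeated_part((0, 0, 10, 10), (5, 0, 15, 10)))  # False (IoU ≈ 0.33)
```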
For calculating the perspective transformation matrix, the coordinate transformation, the target frame matching, and the intersection-to-parallel ratio, reference may be made to the prior art, which is not described herein again.
S224, traversing all the historical identification images to detect repeated parts.
In this embodiment, after traversing all the target frames in the identification image, the next historical identification image may be taken as a comparison image, and the above repeated process of determining the parts is performed again until all the historical identification images that can be used as comparison images are traversed, and the duplicate removal result is output.
In order to avoid repeated detection, after determining that the intersection-over-union of a matched target frame pair is greater than the preset intersection threshold, the index of the corresponding part in the identification image may be recorded, where the index is the identifier of each part in the image. When the parts of the identification image are later traversed, parts whose indexes have been recorded are no longer detected.
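The index-recording scheme can be sketched with a simple set; the names are illustrative:

```python
duplicate_indexes = set()  # indexes of parts already judged repeated

def record_duplicate(part_index):
    """Record the index of a part whose matched pair exceeded the IoU threshold."""
    duplicate_indexes.add(part_index)

def needs_detection(part_index):
    """A part is skipped in later traversals once its index is recorded."""
    return part_index not in duplicate_indexes

record_duplicate(2)
parts = [0, 1, 2, 3]
print([p for p in parts if needs_detection(p)])  # → [0, 1, 3]
```

When every comparison image has been traversed, removing the recorded indexes from the identification image yields the deduplication result.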
S230, obtaining a detection result based on the duplicate removal result.
In this embodiment, after the duplicate removal result is obtained, subsequent processing processes such as detection, analysis, statistics, and the like may be performed on the parts in the duplicate removal result, so as to obtain a detection result.
In the embodiment of the application, after the identification image of the acquired image is obtained, the identification image undergoes deduplication processing to remove the parts that, compared against the historical identification image set, appear repeatedly. Subsequent repeated detection and statistics of those parts are thus avoided, improving both the detection efficiency and the statistical accuracy of the parts.
To facilitate the understanding of the present disclosure by those skilled in the art, the present disclosure is described by way of example only, and it should be understood that the following examples are not intended to limit the present disclosure.
In an optional embodiment, the implementation of the method for detecting parts in an image includes the following steps. First, an image captured by the image acquisition device of the component detection apparatus is acquired and sent to a processor, which identifies the acquired image, determines the parts it contains, and obtains an identification image.
Then, the processor performs deduplication processing on the identification image, and the specific process comprises the following steps:
the identification image is subjected to 2-fold down-sampling, characteristic points and descriptors in the identification image are extracted, and the identification image, the characteristic points of the identification image, the descriptors and the like are stored in a history identification image set.
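The 2-fold down-sampling step can be sketched as follows (naive decimation on a toy image; real pipelines typically low-pass filter before decimating, and feature extraction would follow with a library such as OpenCV):

```python
def downsample_2x(image):
    """Naive 2x down-sampling: keep every other row and every other column."""
    return [row[::2] for row in image[::2]]

# 4x4 toy "image" with pixel values 0..15
image = [[r * 4 + c for c in range(4)] for r in range(4)]
print(downsample_2x(image))  # → [[0, 2], [8, 10]]
```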
Next, whether any historical identification image exists in the historical identification image set is judged; when none exists, the identification image is taken directly as the deduplication result.
When historical identification images exist in the set, the comparison image set to be compared with the identification image can be determined according to the size of the memory space. When the historical identification image set is smaller than the preset space constraint threshold, the comparison image is determined directly from it. When it is larger than the threshold, the first data in the set — i.e., the identification image corresponding to the earliest acquired image — is removed, and the comparison image is determined from the remaining historical identification images in the set.
Then, a comparison image to be compared with the identification image is determined from the historical identification image set, by matching image acquisition positions and image features: when a historical identification image in the comparison image set satisfies the requirements on both the image acquisition position and the image features, it is determined to be a comparison image and used to judge whether parts are repeated.
Then, by calculating the perspective transformation matrices of the comparison image and the identification image, the target frames of the parts in each image are obtained; the target frames of both images are converted to the same coordinate system; the intersection-over-union of each positionally matched pair of target frames is calculated to judge whether the corresponding parts are repeated; and the indexes of repeated parts are recorded. After all part target frames in the identification image have been compared and traversed, the next comparison image is used to detect repeated parts, until all comparison images in the comparison image set have been traversed. The indexed parts are then removed and the deduplication result is output.
Finally, subsequent detection and statistical processing are performed on the deduplication result to obtain the detection result.
In an optional implementation, the component detection device 100 may capture images at preset shooting points, in which case the shooting point can serve as the position of the image acquisition device. During actual movement of the component detection device 100, however, accumulated error may cause the actual image acquisition position to deviate from the preset shooting point; since the deduplication process uses the image acquisition position, this deviation can affect the efficiency and accuracy of deduplication.
Therefore, before capturing an image, the image acquisition position may also be determined based on a binocular camera, including: acquiring a positioning image shot by the binocular camera; and determining the image acquisition position according to the positioning image and the parameters of the binocular camera.
In this embodiment, the binocular camera is configured in advance, including its predetermined intrinsic and extrinsic parameters and its predetermined positional relationship to the component detection apparatus 100. For example, the binocular camera is calibrated beforehand: image enhancement and filtering are configured, epipolar rectification is performed, and camera, world, image, and pixel coordinate systems are constructed to obtain the camera's intrinsic and extrinsic parameters; the distance and orientation between the binocular camera and the component detection apparatus 100 are also measured. The specific implementation can refer to the prior art.
After the binocular camera is set up, the specific position of the component detection apparatus 100 can be determined with it, as follows: the binocular camera captures positioning images; since a binocular camera produces two positioning images, the pose of the camera can be determined jointly from the feature points in the two images and the camera's configured parameters; the current position of the component detection apparatus 100 is then determined from positioning images taken at several points, yielding the image acquisition position. Determining the pose from the feature points of the two positioning images and the camera parameters, and determining the image acquisition position from the positioning images of multiple points, can be achieved by combining methods such as the ORB algorithm, the gray-scale centroid method, the FAST corner detection algorithm, rotation matrices, Hamming distance computation, the Reynolds averaging algorithm, and disparity computation.
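The disparity computation that a rectified binocular pair enables can be illustrated with the classic pinhole stereo relation depth = f·B/d; all numeric values below are illustrative, not from the patent:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation for a rectified binocular pair:
    depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 14 px disparity.
print(depth_from_disparity(700.0, 0.12, 14.0))  # ≈ 6.0 metres
```

Ranging feature points this way in both views is one ingredient of recovering the camera pose and, from it, the image acquisition position.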
In another embodiment, a navigation point can also be used as the position of the image acquisition device. Due to accumulated error, however, the navigation position of the component detection device 100 may deviate, so the navigation positioning can be corrected and the corrected navigation point used as the position of the image acquisition device. During correction, the navigation positioning is corrected according to the positioning image, the parameters of the binocular camera, and a preset reference object.
The navigation positioning correction process includes: acquiring a positioning image with the binocular camera and determining an image acquisition position based on the positioning image and the camera's parameters; calculating the movement distance of the component acquisition device based on the positioning image and a preset reference object; and, upon determining that the difference between the movement distance and a reference distance is greater than a second distance threshold, adjusting the navigation positioning based on the image acquisition position determined by the binocular camera. The reference distance is the distance between the starting point and the preset reference object.
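The correction trigger can be sketched as follows; the names and the threshold value are illustrative:

```python
SECOND_DISTANCE_THRESHOLD = 0.5  # illustrative second distance threshold, in metres

def navigation_needs_correction(moved_distance, reference_distance,
                                threshold=SECOND_DISTANCE_THRESHOLD):
    """True when the measured travel deviates from the known distance to the
    preset reference object by more than the second distance threshold."""
    return abs(moved_distance - reference_distance) > threshold

print(navigation_needs_correction(10.8, 10.0))  # True:  |10.8 - 10.0| > 0.5
print(navigation_needs_correction(10.2, 10.0))  # False: |10.2 - 10.0| <= 0.5
```

When the trigger fires, the navigation position is replaced by the binocular-camera-derived image acquisition position, preventing error from accumulating.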
In this embodiment, the image acquisition position can be computed from the positioning image captured by the binocular camera and the camera's parameters. Preset reference-object information is stored in the component detection apparatus 100, including the distance between each reference object and the starting point. From this information and the image acquisition position, the deviation between the image acquisition position and the navigation point can be determined, allowing the component detection apparatus 100 to adjust its navigation positioning in time and avoid accumulated error.
The above implementations — determining the image acquisition position with the binocular camera and correcting the navigation positioning with a preset reference object — can refer to the prior art and are not repeated here.
In this embodiment, through the binocular camera, the image acquisition position of the part detection apparatus 100 can be determined, so that the part detection apparatus 100 can perform deduplication processing using the image acquisition position, and the efficiency of part detection is improved. The positioning error is corrected by using the preset reference object, so that accumulated errors are avoided, and the efficiency and the accuracy of part detection are improved.
Referring to fig. 6, fig. 6 is a diagram illustrating a component detecting system according to an embodiment of the present disclosure. The part inspection system includes: a mobile acquisition device 310 and a cloud server 320.
The mobile acquisition device 310 acquires images during movement. It includes a moving apparatus, which moves the device within a working area, and a camera, which captures images containing parts. The working area may be any scene containing parts, such as a railway track.
The mobile capturing device 310 further has a communication module, communicatively connected to the cloud server, for transmitting the captured image to the cloud server.
And the cloud server 320 is configured to receive the acquired image, identify the acquired image, and obtain an identification image, where the identification image is an identification result obtained by identifying a part in the acquired image.
The cloud server 320 is further configured to perform deduplication processing on the identification image based on the historical identification image set to obtain a deduplication result, where the deduplication processing removes parts that appear repeatedly in the identification image.
The cloud server 320 is further configured to obtain a detection result based on the deduplication result.
The historical identification image set comprises historical identification images, and the cloud server 320 is further configured to determine a comparison image based on the historical identification image set, the comparison image being one or more historical identification images in the set; compare the comparison image with the identification image to determine the repeated parts in the identification image; and remove the repeated parts from the identification image to obtain the deduplication result.
The cloud server 320 is further used for updating the historical identification image set based on a preset constraint threshold; determining the comparison image based on the updated historical recognition image set.
The cloud server 320 is further configured to determine the comparison image from the historical identification image set based on image acquisition positions and image features; the image acquisition positions include the position of the image acquisition device when each historical identification image in the set was acquired and its position when the acquired image was captured, and the image features include the features of each image in the set and the features of the acquired image.
The cloud server 320 is further configured to determine, for any history identification image in the history identification image set, the history identification image as a comparison image when a distance between an image acquisition position of the history identification image and an image acquisition position of an acquired image is smaller than a first distance threshold and an image feature of the history identification image matches with an image feature of the identification image.
The cloud server 320 is further configured to calculate perspective transformation matrices of the comparison image and the identification image respectively, obtain target frames of the parts in respective images of the comparison image and the identification image, and match the target frames of the comparison image and the target frames of the identification image in the same coordinate system to obtain a pair of matched target frames; and when the intersection ratio of the matching target frame pair is larger than a preset intersection threshold value, determining the parts in the matching target frame as repeated parts.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a computer, the method for detecting a part in an image in the foregoing embodiment is performed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some communication interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A method for detecting parts in an image is characterized by comprising the following steps:
acquiring an identification image, wherein the identification image is an identification result obtained by identifying parts in an acquired image;
carrying out duplicate removal processing on the identification image based on a historical identification image set to obtain a duplicate removal result; wherein the deduplication process is used to remove parts that appear repeatedly in the recognition image; the historical identification image set comprises a plurality of historical identification images;
obtaining a detection result based on the duplicate removal result;
the performing of deduplication processing on the identification image based on the historical identification image set to obtain a deduplication result comprises the following steps: determining a comparison image based on the historical recognition image set; the comparison image is one or more historical identification images in the historical identification image set; comparing the comparison image with the identification image to determine repeated parts in the identification image; removing the repeated parts in the identification image to obtain the duplicate removal result;
the determining of a contrast image based on the set of historical recognition images comprises: determining the comparison image from the historical set of identification images based on image acquisition location and image features; the image acquisition position comprises the position of image acquisition equipment when each historical identification image in the historical identification image set is acquired and the position of the image acquisition equipment when the acquired image is acquired, and the image characteristics comprise the characteristics of each image in the historical identification image set and the characteristics of the acquired image;
the determining the comparison image from the historical identification image set based on the image acquisition position and image features comprises: for any historical identification image in the historical identification image set, when the distance between the image acquisition position of the historical identification image and the image acquisition position of the acquired image is smaller than a first distance threshold and the image features of the historical identification image match the image features of the identification image, determining the historical identification image as the comparison image, the comparison image being one or more historical identification images in the historical identification image set.
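As an illustrative aid (not part of the claims), the comparison-image selection rule of claim 1 can be sketched in Python. The record layout of `history`, the user-supplied `match_fn` feature matcher, and the threshold value are hypothetical assumptions, not details from the patent:

```python
import math

def select_comparison_images(history, capture_pos, capture_features,
                             first_distance_threshold, match_fn):
    """Pick historical identification images whose acquisition position is
    within the first distance threshold of the new image's position AND
    whose image features match (claim 1's two-part test).

    history: list of dicts with hypothetical keys 'position' (x, y)
             and 'features'.
    match_fn: caller-supplied predicate deciding feature match.
    """
    comparison = []
    for item in history:
        dx = item["position"][0] - capture_pos[0]
        dy = item["position"][1] - capture_pos[1]
        close_enough = math.hypot(dx, dy) < first_distance_threshold
        if close_enough and match_fn(item["features"], capture_features):
            comparison.append(item)
    return comparison
```

Both conditions must hold, so a nearby image with non-matching features is rejected, as is a feature-matching image captured far away.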
2. The method of claim 1, wherein the determining a comparison image based on the historical identification image set comprises:
updating the historical identification image set based on a preset constraint threshold;
determining the comparison image based on the updated historical identification image set.
3. The method of claim 1, wherein the comparing the comparison image with the identification image to determine repeated parts in the identification image comprises:
calculating perspective transformation matrices for the comparison image and the identification image respectively, to obtain target frames of the parts in each of the comparison image and the identification image;
matching the target frame of the comparison image with the target frame of the identification image under the same coordinate system to obtain a matched target frame pair;
and when the intersection-over-union ratio of a matched target frame pair is larger than a preset intersection threshold, determining the parts in the matched target frame pair as repeated parts.
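The intersection-over-union test of claim 3 can be sketched as follows (an illustrative Python sketch, not part of the claims; target frames are assumed to be axis-aligned `(x1, y1, x2, y2)` tuples already projected into a common coordinate system by the perspective transformation step):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes
    given as (x1, y1, x2, y2) in the same coordinate system."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def duplicate_parts(pairs, iou_threshold):
    """Keep only matched frame pairs whose IoU exceeds the preset
    intersection threshold; these are flagged as repeated parts."""
    return [pair for pair in pairs if iou(*pair) > iou_threshold]
```

In practice the projection into a common frame could be done with a homography (e.g. OpenCV's `cv2.findHomography` and `cv2.perspectiveTransform`); that step is omitted here.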
4. The method of claim 1 or 3, wherein prior to the acquiring of the identification image, the method further comprises:
shooting a positioning image based on a binocular camera;
and determining an image acquisition position according to the positioning image and the binocular camera.
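A binocular (stereo) camera recovers distance from disparity, which is the physical basis for the positioning step in claim 4. A minimal sketch under a pinhole-camera assumption, with focal length in pixels and baseline in metres (the function name and parameters are hypothetical, not from the patent):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified binocular pair: Z = f * B / d,
    where f is the focal length in pixels, B the baseline in
    metres, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With the depth of a known reference point, the image acquisition position can be located relative to the scene.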
5. The method of claim 4, wherein after the determining of the image acquisition position according to the positioning image and the binocular camera, the method further comprises:
calculating a moving distance based on the positioning image and a preset reference object;
and adjusting navigation positioning based on the image acquisition position when it is determined that the difference between the moving distance and a reference distance is greater than a second distance threshold; the reference distance is the distance between a starting point and the preset reference object.
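The drift check of claim 5 reduces to a single threshold comparison; a minimal sketch (function and parameter names are hypothetical, not from the patent):

```python
def needs_repositioning(moving_distance, reference_distance,
                        second_distance_threshold):
    """Flag navigation drift when the vision-measured moving distance
    differs from the known reference distance by more than the
    second distance threshold (claim 5's condition)."""
    return abs(moving_distance - reference_distance) > second_distance_threshold
```

When the flag is raised, the navigation positioning is corrected using the image acquisition position instead of the drifted odometry estimate.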
6. A component detecting apparatus, comprising:
the image acquisition equipment is used for acquiring an acquired image;
the processor is used for identifying the collected image to obtain an identification image; the identification image is an identification result obtained by identifying the parts in the acquired image;
the processor is further configured to perform duplicate removal processing on the identification image based on a historical identification image set to obtain a duplicate removal result; wherein the duplicate removal processing is used for removing parts that appear repeatedly in the identification image;
the processor is further used for obtaining a detection result based on the duplicate removal result;
the processor is further configured to determine a comparison image based on the historical identification image set; the comparison image is one or more historical identification images in the historical identification image set; compare the comparison image with the identification image to determine repeated parts in the identification image; and remove the repeated parts from the identification image to obtain the duplicate removal result;
the processor is further configured to determine the comparison image from the historical identification image set based on an image acquisition position and image features; the image acquisition position comprises the position of the image acquisition equipment when each historical identification image in the historical identification image set was acquired and the position of the image acquisition equipment when the acquired image was acquired, and the image features comprise the features of each image in the historical identification image set and the features of the acquired image;
the processor is further configured to determine, for any historical identification image in the historical identification image set, the historical identification image as the comparison image when the distance between the image acquisition position of the historical identification image and the image acquisition position of the acquired image is smaller than a first distance threshold and the image features of the historical identification image match the image features of the identification image, the comparison image being one or more historical identification images in the historical identification image set.
7. A component inspection system, comprising:
the mobile acquisition equipment is used for acquiring an acquired image in the moving process;
and a cloud server, connected with the mobile acquisition equipment, configured to perform the method of claim 1 or 2.
8. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1 to 5.
CN202211161352.6A 2022-09-23 2022-09-23 Method, equipment and system for detecting parts in image Active CN115272302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211161352.6A CN115272302B (en) 2022-09-23 2022-09-23 Method, equipment and system for detecting parts in image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211161352.6A CN115272302B (en) 2022-09-23 2022-09-23 Method, equipment and system for detecting parts in image

Publications (2)

Publication Number Publication Date
CN115272302A CN115272302A (en) 2022-11-01
CN115272302B true CN115272302B (en) 2023-03-14

Family

ID=83755880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211161352.6A Active CN115272302B (en) 2022-09-23 2022-09-23 Method, equipment and system for detecting parts in image

Country Status (1)

Country Link
CN (1) CN115272302B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426485A (en) * 2015-11-20 2016-03-23 小米科技有限责任公司 Image combination method and device, intelligent terminal and server
CN110188722A (en) * 2019-06-05 2019-08-30 福建深视智能科技有限公司 A kind of method and terminal of local recognition of face image duplicate removal
CN111784663A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Method and device for detecting parts, electronic equipment and storage medium
CN112070837A (en) * 2020-08-31 2020-12-11 浙江省机电设计研究院有限公司 Part positioning and grabbing method and system based on visual analysis
CN113781371A (en) * 2021-08-23 2021-12-10 南京掌控网络科技有限公司 Method and equipment for removing duplication and splicing identification results of shelf pictures

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016149944A1 (en) * 2015-03-26 2016-09-29 北京旷视科技有限公司 Face recognition method and system, and computer program product
CN107832338B (en) * 2017-10-12 2020-02-07 北京京东尚科信息技术有限公司 Method and system for recognizing core product words
CN110660102B (en) * 2019-06-17 2020-10-27 腾讯科技(深圳)有限公司 Speaker recognition method, device and system based on artificial intelligence

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A High-precision Duplicate Image Deduplication Approach; Ming Chen; JOURNAL OF COMPUTERS; 2013-11-30; pp. 2768-2775 *
Research on image-set classification by hybrid application of local multi-kernel metric learning in global multi-order statistics; Li Na; Manager (《经营管理者》); 2016-12-25 (No. 36); p. 336 *
Part classification and localization algorithm based on image edge features; Bu Wei et al.; Metrology & Measurement Technique (《计量与测试技术》); 2018-09-30 (No. 09); pp. 59-62 *

Also Published As

Publication number Publication date
CN115272302A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN112476434A (en) Visual 3D pick-and-place method and system based on cooperative robot
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
CN111027381A (en) Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN111832760B (en) Automatic inspection method for well lid based on visual algorithm
CN114972490B (en) Automatic data labeling method, device, equipment and storage medium
CN113688817A (en) Instrument identification method and system for automatic inspection
CN114494161A (en) Pantograph foreign matter detection method and device based on image contrast and storage medium
CN115131268A (en) Automatic welding system based on image feature extraction and three-dimensional model matching
CN112836683A (en) License plate recognition method, device, equipment and medium for portable camera equipment
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
CN113188509B (en) Distance measurement method and device, electronic equipment and storage medium
CN111476062A (en) Lane line detection method and device, electronic equipment and driving system
CN115272302B (en) Method, equipment and system for detecting parts in image
CN115115768A (en) Object coordinate recognition system, method, device and medium based on stereoscopic vision
CN112598743A (en) Pose estimation method of monocular visual image and related device
CN116524382A (en) Bridge swivel closure accuracy inspection method system and equipment
CN115546223A (en) Method and system for detecting loss of fastening bolt of equipment under train
CN112802112B (en) Visual positioning method, device, server and storage medium
CN111126286A (en) Vehicle dynamic detection method and device, computer equipment and storage medium
CN114037834B (en) Semantic segmentation method and device based on fusion of vibration signal and RGB image
CN115511834A (en) Battery cell alignment degree detection method, controller, detection system and storage medium
CN112270357A (en) VIO vision system and method
CN113639685A (en) Displacement detection method, device, equipment and storage medium
CN113810591B (en) High-precision map operation system and cloud platform
CN109034097B (en) Image-based switch equipment inspection positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method, equipment, and system for detecting components in images

Granted publication date: 20230314

Pledgee: Guotou Taikang Trust Co.,Ltd.

Pledgor: Hangzhou Shenhao Technology Co.,Ltd.

Registration number: Y2024980011357