CN113269236A - Assembly body change detection method, device and medium based on multi-model integration - Google Patents

Assembly body change detection method, device and medium based on multi-model integration

Info

Publication number
CN113269236A
CN113269236A (application number CN202110507269.9A)
Authority
CN
China
Prior art keywords
assembly
change
image
model
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110507269.9A
Other languages
Chinese (zh)
Other versions
CN113269236B (en)
Inventor
陈成军
李长治
史宏思
李东年
洪军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University of Technology
Original Assignee
Qingdao University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University of Technology filed Critical Qingdao University of Technology
Priority to CN202110507269.9A priority Critical patent/CN113269236B/en
Publication of CN113269236A publication Critical patent/CN113269236A/en
Application granted granted Critical
Publication of CN113269236B publication Critical patent/CN113269236B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 — Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 — Matching configurations of points or features
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G06F18/243 — Classification techniques relating to the number of classes
    • G06F18/2431 — Multiple classes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention relates to an assembly body change detection method based on multi-model integration, which comprises the following steps: establishing a three-dimensional model of an assembly body, adding a label to each part in the three-dimensional model, setting a plurality of assembly nodes, acquiring depth images of the three-dimensional model at each assembly node from different viewing angles, and acquiring a change-category label for the part newly added at each assembly node; selecting two depth images taken from different viewing angles at two successive moments as training samples; training a detection model by sequentially performing feature extraction, target candidate region selection, feature information comparison, classification prediction and semantic segmentation on the training samples, saving the model parameters with the best similarity during training, and completing training; and acquiring depth images of two successive assembly nodes during the assembly of the assembly body to be detected, inputting them into the trained detection model, and outputting the category and image of the changed part in the assembly process.

Description

Assembly body change detection method, device and medium based on multi-model integration
Technical Field
The invention relates to an assembly body change detection method, device and medium based on multi-model integration, and belongs to the technical field of computer vision and intelligent manufacturing.
Background
In the assembly process of a mechanical assembly body, assembly must be carried out sequentially according to given assembly steps. Detecting the part newly added at each assembly step from multiple viewing angles makes it possible to acquire information about the mechanical assembly process, find errors in time and locate them quickly, improving the production efficiency of mechanical products and guaranteeing their quality level; this has important research value for intelligent monitoring of the assembly process of mechanical assembly bodies.
An image change detection method judges the difference in state between two images taken at different times and from different viewing angles. At present, image change detection is mainly applied to satellite and aerial imagery; in the mechanical field, and especially for the assembly process of mechanical assembly bodies, applied research on multi-view change detection is scarce. The main reasons are that mechanical parts have complex structures and severe occlusion, the color and texture information of parts is monotonous, change detection during assembly is difficult, and corresponding data sets are lacking.
The image change detection method is mainly divided into three types from the detection level: pixel level, feature level, and target level.
(1) Pixel-level change detection: each pixel of the two images is processed and compared, and changes are detected from the per-pixel comparison results. This approach effectively preserves as much of the original information as possible and retains detail that other methods lack, but it is inefficient and ignores spatial and other feature attributes.
(2) Feature-level change detection: an algorithm extracts feature information (corners, shapes, contours) from the original images, and the features are then analyzed comprehensively to detect changed regions. This approach is computationally efficient, judges feature attributes with higher reliability and accuracy, and reduces the interference of external factors on the result to some extent. However, some information is lost during feature extraction, and because the result depends on the extracted features, fine detail is difficult to provide.
(3) Target-level change detection: this approach mainly detects specific objects (such as roads, houses and other targets with clear meaning). It is change detection based on image understanding and image recognition, a high-level analysis built on a target model. Its results can be applied directly, but extracting the target from the image is itself difficult.
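For concreteness, the pixel-level approach in (1) can be sketched as a per-pixel differencing and thresholding step. This is a minimal illustration in NumPy; the function name and threshold value are assumptions for the example, not anything specified in this disclosure:

```python
import numpy as np

def pixel_level_change_mask(img_a: np.ndarray, img_b: np.ndarray,
                            threshold: float = 25.0) -> np.ndarray:
    """Return a boolean mask marking pixels whose absolute difference
    exceeds a threshold. img_a and img_b are grayscale or depth images
    of identical shape."""
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return diff > threshold

# toy example: two 4x4 "depth images" differing at one pixel
a = np.zeros((4, 4))
b = a.copy()
b[0, 0] = 100.0
mask = pixel_level_change_mask(a, b)
```

As the background notes, this preserves per-pixel detail but considers each pixel in isolation, ignoring spatial context.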
Disclosure of Invention
To solve the problems of the prior art, the invention provides an assembly body change detection method based on multi-model integration. It combines the mainstream approaches of current change detection, adopting all three detection levels (pixel, feature and target) as change detection units, so that the category of the changed part can be obtained while the position of the change mask is output, yielding more information relevant to assembly monitoring.
The technical scheme of the invention is as follows:
the first technical scheme is as follows:
an assembly body change detection method based on multi-model integration comprises the following steps:
establishing a data set: establishing a three-dimensional model of an assembly body, adding labels to all parts in the three-dimensional model, determining a plurality of assembly nodes according to the given assembly steps of the assembly body, imaging the three-dimensional model at each assembly node, acquiring depth images of the model at each assembly node from different viewing angles, and acquiring change-category labels and change mask label images for the part newly added at each assembly node according to the part labels;
training a detection model: selecting two depth images of the three-dimensional model at two adjacent assembly nodes (the earlier and later moments) from different viewing angles as training samples; extracting features from the two depth images to obtain two corresponding feature images; scanning each feature image with the region proposal network to find regions of interest containing parts, and generating target candidate frames corresponding to each part in the feature image; matching the feature information within the target candidate frames of each part across the two feature images, determining a changed region from the difference of the feature information, and generating a change frame; mapping the change frame onto the depth image at the later moment to obtain a change feature map; establishing a category mapping relation library in which the feature information of each part of the assembly body corresponds one-to-one with its category, and obtaining the predicted changed-part category from the library via the feature information in the change feature map; performing semantic segmentation on the change feature map to extract the edge information of the changed part and obtain a pixel-level predicted change mask image; verifying the predicted changed-part category and the predicted change mask image against the change-category label and the change mask label image respectively, and continuously selecting training samples to iteratively update the detection model until training is complete;
detecting changes: acquiring depth images of two successive assembly nodes during the assembly of the assembly body to be detected, inputting them into the trained detection model, and outputting the category of the changed part in the assembly process.
Further, the step of scanning the feature image by using the area candidate network, searching for an area of interest in which the part exists, and generating a plurality of target candidate frames corresponding to each part in the feature image specifically includes:
predicting an interested area of a part in the characteristic image based on the area candidate network, and generating a high-quality target suggestion area by using an anchor frame;
and sequencing the obtained multiple target suggestion areas of the same target part according to scores by adopting a non-maximum suppression algorithm, and reserving the target suggestion area with the highest score as a target candidate frame.
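The non-maximum suppression step described above can be sketched as follows. The box representation, scores, and IoU threshold of 0.5 are illustrative assumptions, not values fixed by this disclosure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Sort proposals by score; keep a box only if it does not overlap an
    already-kept box by more than iou_threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

# two overlapping proposals for the same part plus one distinct box
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

Here the lower-scoring duplicate of the first part is suppressed while the distinct box survives, which is exactly the "one candidate frame per part" behavior the step aims for.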
Further, the step of corresponding the change frame to the depth image at the next moment and acquiring the change feature map specifically includes:
The position of the change frame in the feature image is converted, by bilinear interpolation, into the pixel coordinates of the corresponding region on the depth image at the later moment, so that the converted change frame is aligned in size with the corresponding region of the depth image, yielding the change feature map.
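The bilinear interpolation underlying this alignment samples the feature map at the non-integer coordinates produced by scaling the change frame. A minimal sketch (the function name and toy feature map are illustrative assumptions):

```python
import numpy as np

def bilinear_sample(feature_map: np.ndarray, x: float, y: float) -> float:
    """Sample a 2-D feature map at floating-point coordinates (x, y)
    by weighting the four surrounding integer-grid values."""
    h, w = feature_map.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    top = feature_map[y0, x0] * (1 - dx) + feature_map[y0, x1] * dx
    bottom = feature_map[y1, x0] * (1 - dx) + feature_map[y1, x1] * dx
    return top * (1 - dy) + bottom * dy

fm = np.array([[0.0, 1.0], [2.0, 3.0]])
v = bilinear_sample(fm, 0.5, 0.5)  # midpoint: average of the four values
```

Sampling at fractional coordinates rather than rounding them is what avoids the quantization error mentioned later in the description.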
Further, in the training process of the detection model, defining a loss function to calculate the similarity between the predicted change part category and the change category label, and performing iterative update on the detection model based on the similarity; the loss function is defined as follows;
[The loss function appears only as an image in the original publication (Figure BDA0003058932330000041) and cannot be reproduced here.]
where φ(A_i) is the feature information in the corresponding feature image, the symbol shown in Figure BDA0003058932330000042 is a parameter that needs to be learned, and the symbol shown in Figure BDA0003058932330000043 is the label value of the changed-part category in the category mapping relation library.
The second technical scheme is as follows:
An assembly body change detection device based on multi-model integration comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the assembly body change detection method based on multi-model integration according to any embodiment of the invention.
The third technical scheme is as follows:
an assembly change detection medium based on multi-model integration, on which a computer program is stored, wherein the computer program, when executed by a processor, implements an assembly change detection method based on multi-model integration according to any embodiment of the present invention.
The invention has the following beneficial effects:
1. The disclosed assembly body change detection method based on multi-model integration extracts a target area candidate frame for each part, matches the feature information within the candidate frames of each part across the two depth images, obtains a change frame from the difference of the feature information, and derives the change category and change mask from the change frame. The method combines the mainstream approaches of current change detection, adopting all three detection levels (pixel, feature and target) as change detection units, so that the category of the changed part is obtained while the position of the change mask is output, yielding more information relevant to assembly monitoring.
2. The method solves the problem of a single part having multiple target candidate frames by adopting a non-maximum suppression algorithm, keeping only the highest-quality candidate frame and improving detection accuracy.
3. The method aligns the feature image with the size of the initially input depth image by bilinear interpolation, avoiding errors caused by quantization of the obtained change frame boundary.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is an exemplary diagram of a detection model training process in an embodiment of the invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
The first embodiment is as follows:
referring to fig. 1, an assembly body change detection method based on multi-model integration includes the following steps:
Establishing a data set: a three-dimensional model of the mechanical assembly body is built in SolidWorks, and labels are added to the parts of the model; in this embodiment the part labels are color marks. m assembly nodes are set, with assembly carried out in m-1 steps and one part assembled per step. The three-dimensional model is then loaded into depth-image and color-image rendering software, and a virtual camera is set up to image each assembly node from different angles, acquiring depth images and color images of the model at each assembly node from different viewing angles; the color marks in the color images are used to generate a change-category label for the part newly assembled at each node. The change-category label is either a user-defined part code (e.g. part No. 1, part No. 2) or a generic part noun (e.g. gear, bearing); the change mask label image is the image corresponding to the newly added part at the assembly node.
Training a multi-model integrated detection model:
First, a depth image of the three-dimensional model at the earlier assembly node from one viewing angle and a depth image at the later assembly node from a different viewing angle are selected as a group of training samples.
referring to fig. 2, in this embodiment, the detection model integrated by multiple models includes a feature extraction network module, an RPN module, a comparison module, a RoI Align module, a classification module, and a semantic segmentation module;
The feature extraction network module: this module adopts an image feature extraction network such as AlexNet, VGG, GoogLeNet, ResNet, DenseNet, ResNeXt, EfficientNet or ShuffleNet to extract features from the input two-phase depth images (the earlier and later moments) of the assembly nodes, obtaining the corresponding feature images.
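As a hedged illustration of what such backbones compute at their core, the sketch below applies a single 2-D "convolution" (strictly, cross-correlation, as in CNN practice) to a toy depth image; the kernel and image are illustrative assumptions, not part of this disclosure:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation: the basic building block of the
    CNN feature extractors (AlexNet, ResNet, ...) named above."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# edge-detecting kernel applied to a toy depth image with a vertical step
img = np.zeros((5, 5)); img[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
feat = conv2d(img, sobel_x)
```

A real backbone stacks many such filters with learned weights and nonlinearities; this one filter already responds strongly at the depth discontinuity, which is the kind of structure the feature images encode.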
The RPN module: this module uses a region proposal network (RPN) to scan the feature image and locate the regions of interest containing parts.
The comparison module: according to the feature information in the target candidate frames generated by the RPN module, this module directly matches the feature information of corresponding candidate frames in the two-phase depth images in sequence. If the matching degree exceeds a threshold, the matched region is unchanged; if the matching degree is too low, the region is a changed region, and a change frame is obtained from the feature information of the changed region.
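The comparison step can be sketched as matching per-candidate-frame feature vectors against a similarity threshold. Cosine similarity and the threshold 0.9 are assumptions chosen for illustration; the disclosure does not specify the matching measure or threshold value:

```python
import numpy as np

def cosine_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def find_changed_regions(features_t0, features_t1, threshold=0.9):
    """Compare feature vectors of corresponding candidate frames from the
    two time phases; indices whose similarity falls below the threshold
    are flagged as changed regions."""
    changed = []
    for idx, (f0, f1) in enumerate(zip(features_t0, features_t1)):
        if cosine_similarity(f0, f1) < threshold:
            changed.append(idx)
    return changed

# frame 0 is unchanged between phases; frame 1 differs substantially
t0 = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])]
t1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
changed = find_changed_regions(t0, t1)
```

The flagged indices correspond to the change frames that are then passed on to the RoI Align module.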
The RoI Align module is a region-of-interest alignment module; it maps the obtained change frame onto the depth image at the later moment to obtain the change feature map.
The classification module: this module establishes a category mapping relation library for the parts according to the distinct feature information of each part in the assembly body, and obtains the predicted changed-part category from the library via the feature information in the change feature map.
The semantic segmentation module adopts a semantic segmentation network such as FCN, U-Net, DeepLab or PSPNet to extract features from the change feature map again, obtaining the edge information of the changed part and finally a pixel-level predicted change mask image of the changed part.
And verifying the predicted change part category and the predicted change mask image respectively through the change category label and the change mask label image, and continuously selecting a training sample to iteratively update the detection model to finish training.
Detecting changes: depth images of two successive assembly nodes during the assembly of the assembly body to be detected are acquired and input into the trained detection model, which outputs the category of the part newly added at the later assembly node relative to the earlier one.
In this embodiment, the feature information extracted from the depth images of the assembly body at the earlier and later moments is used to generate a target candidate frame for each part; the feature information in corresponding candidate frames of the two-phase depth images is matched, the change frame is obtained from the difference of the feature information, and the change category follows. This accurately and effectively detects changes during assembly, and the detected changes can be used to monitor whether parts are missing or wrongly assembled and whether the assembly steps are correct.
Example two:
in this embodiment, the specific working process of the RPN module is as follows:
predicting an interested area with parts in the characteristic image based on the RPN area candidate network, and generating a high-quality target suggestion area by using an anchor frame;
To solve the problem of duplicate proposals, a non-maximum suppression algorithm is adopted to sort the multiple target suggestion areas obtained for the same target part by score, and the highest-scoring target suggestion area is kept as the target candidate frame.
In this embodiment, the specific working process of the RoI Align module is as follows:
and converting the position information of the change frame in the feature image into the pixel coordinate position of the corresponding area on the depth image at the next moment through bilinear interpolation, so that the sizes of the converted change frame and the corresponding area of the depth image are aligned, and the change feature image is obtained. Because the size of the feature image obtained after feature extraction is a floating point number, in order to avoid generating errors due to any quantization of the obtained change frame boundary, a bilinear interpolation method is adopted to align the size of the feature image with the size of the originally input depth image.
Further, in the training process of the detection model, defining a loss function to calculate the similarity between the predicted change part category and the change category label, and performing iterative update on the detection model based on the similarity; the loss function is defined as follows;
[The loss function appears only as an image in the original publication (Figure BDA0003058932330000091) and cannot be reproduced here.]
where φ(A_i) is the feature information in the corresponding feature image, the symbol shown in Figure BDA0003058932330000092 is a parameter that needs to be learned, and the symbol shown in Figure BDA0003058932330000093 is the label value of the changed-part category in the category mapping relation library.
The smaller the loss function, the higher the similarity, and the closer the predicted changed-part category output by the detection model is to the true changed-part category label.
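Because the exact loss formula appears only as an image in the original, the following is merely a plausible stand-in reproducing the stated behavior (the loss decreases as the predicted category distribution approaches the label); cross-entropy is an assumption, not the patent's formula:

```python
import math

def cross_entropy(predicted_probs, true_class: int) -> float:
    """Negative log-probability assigned to the true class; smaller when
    the prediction is closer to the label, matching the behavior stated
    above for the patent's (unreproduced) loss."""
    return -math.log(predicted_probs[true_class])

# a confident correct prediction versus a diffuse one, true class = 1
good = cross_entropy([0.05, 0.90, 0.05], true_class=1)
bad = cross_entropy([0.60, 0.20, 0.20], true_class=1)
```

Any loss with this monotone relationship to similarity would drive the same iterative update of the detection model.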
Example three:
An assembly body change detection device based on multi-model integration comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the assembly body change detection method based on multi-model integration according to any embodiment of the invention.
Example four:
an assembly change detection medium based on multi-model integration, on which a computer program is stored, wherein the computer program, when executed by a processor, implements an assembly change detection method based on multi-model integration according to any embodiment of the present invention.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. An assembly body change detection method based on multi-model integration is characterized by comprising the following steps:
establishing a data set; establishing a three-dimensional model of an assembly body, adding labels to all parts in the three-dimensional model, determining a plurality of assembly nodes according to the assembly steps of the given assembly body, respectively imaging the three-dimensional model under each assembly node, acquiring depth images of the three-dimensional model under each assembly node under different visual angles, and acquiring change type labels and change mask label images of the newly added parts of each assembly node according to the labels of the parts;
training a detection model; selecting two depth images of the three-dimensional model of two adjacent assembly nodes at the front and rear moments under different visual angles as training samples; respectively extracting the features of the two depth images to obtain two corresponding feature images; scanning the characteristic image by using the regional candidate network, searching an interested region with parts, and generating a plurality of target candidate frames corresponding to each part in the characteristic image; matching the feature information in the target candidate frame of each part of the two feature images, determining a change area according to the difference of the feature information, and generating a change frame; the change frame corresponds to the depth image at the next moment to obtain a change characteristic diagram; establishing a category mapping relation library in which the feature information of each part of the assembly body corresponds to the categories one by one, and obtaining the categories of the predicted variable parts from the category mapping relation library through the feature information in the variable feature map; carrying out semantic segmentation on the change feature map, extracting edge information of a change part, and acquiring a pixel-level predicted change mask image; verifying the predicted change part category and the predicted change mask image respectively through the change category label and the change mask label image, and continuously selecting a training sample to iteratively update the detection model to complete training;
detecting the change; and acquiring depth images of a front assembly node and a rear assembly node in the assembly process of the assembly body to be detected, inputting the depth images into a trained detection model, and outputting the category of the variable parts in the assembly process.
2. The assembly change detection method based on multi-model integration according to claim 1, wherein the step of scanning the feature image by using the area candidate network, finding the region of interest where the part exists, and generating a plurality of target candidate frames corresponding to each part in the feature image specifically comprises:
predicting an interested area of a part in the characteristic image based on the area candidate network, and generating a high-quality target suggestion area by using an anchor frame;
and sequencing the obtained multiple target suggestion areas of the same target part according to scores by adopting a non-maximum suppression algorithm, and reserving the target suggestion area with the highest score as a target candidate frame.
3. The assembly change detection method based on multi-model integration according to claim 1, wherein the step of corresponding the change frame to the depth image at the next moment and obtaining the change feature map specifically comprises:
and converting the position information of the change frame in the feature image into the pixel coordinate position of the corresponding area on the depth image at the next moment through bilinear interpolation, so that the sizes of the converted change frame and the corresponding area of the depth image are aligned, and the change feature image is obtained.
4. The assembly change detection method based on multi-model integration according to claim 1, wherein in the training process of the detection model, a loss function is defined to calculate and predict the similarity between the changed part type and the changed type label, and the detection model is iteratively updated based on the similarity; the loss function is defined as follows;
[The loss function appears only as an image in the original publication (Figure FDA0003058932320000021) and cannot be reproduced here.]
where φ(A_i) is the feature information in the corresponding feature image, the symbol shown in Figure FDA0003058932320000031 is a parameter that needs to be learned, and the symbol shown in Figure FDA0003058932320000032 is the label value of the changed-part category in the category mapping relation library.
5. An assembly change detection device based on multi-model integration, comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the assembly change detection method based on multi-model integration as claimed in any one of claims 1 to 4 when executing the program.
6. An assembly change detection medium based on multi-model integration, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the assembly change detection method based on multi-model integration according to any one of claims 1 to 4.
CN202110507269.9A (filed 2021-05-10, priority 2021-05-10) — Assembly body change detection method, device and medium based on multi-model integration — Active — CN113269236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110507269.9A CN113269236B (en) 2021-05-10 2021-05-10 Assembly body change detection method, device and medium based on multi-model integration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110507269.9A CN113269236B (en) 2021-05-10 2021-05-10 Assembly body change detection method, device and medium based on multi-model integration

Publications (2)

Publication Number Publication Date
CN113269236A true CN113269236A (en) 2021-08-17
CN113269236B CN113269236B (en) 2022-04-01

Family

ID=77230249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110507269.9A Active CN113269236B (en) 2021-05-10 2021-05-10 Assembly body change detection method, device and medium based on multi-model integration

Country Status (1)

Country Link
CN (1) CN113269236B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743342A (en) * 2021-09-10 2021-12-03 齐鲁工业大学 Method, system, terminal and storage medium for assembly process detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103458261A (en) * 2013-09-08 2013-12-18 华东电网有限公司 Video scene variation detection method based on stereoscopic vision
CN107977992A (en) * 2017-12-05 2018-05-01 深圳大学 A kind of building change detecting method and device based on unmanned plane laser radar
CN108491776A (en) * 2018-03-12 2018-09-04 青岛理工大学 Assembly Parts Recognition method, apparatus based on pixel classifications and monitoring system
CN109559301A (en) * 2018-11-20 2019-04-02 江苏第二师范学院(江苏省教育科学研究院) A kind of EMU end to end enters the twin network method of institute's defects detection
CN112288750A (en) * 2020-11-20 2021-01-29 青岛理工大学 Mechanical assembly image segmentation method and device based on deep learning network


Non-Patent Citations (3)

Title
ALI CAN KARACA ET AL.: "Hyperspectral change detection with stereo disparity information enhancement", 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)
冯文卿: "Remote sensing image change detection combining pixel-level and object-level analysis", Acta Geodaetica et Cartographica Sinica (《测绘学报》)
田中可: "Part recognition and assembly monitoring based on depth images", Computer Integrated Manufacturing Systems (《计算机集成制造系统》)

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN113743342A (en) * 2021-09-10 2021-12-03 齐鲁工业大学 Method, system, terminal and storage medium for assembly process detection
CN113743342B (en) * 2021-09-10 2023-08-15 齐鲁工业大学 Method, system, terminal and storage medium for detecting assembly flow

Also Published As

Publication number Publication date
CN113269236B (en) 2022-04-01

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
Lee et al. Simultaneous traffic sign detection and boundary estimation using convolutional neural network
CN108830285B (en) Target detection method for reinforcement learning based on fast-RCNN
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN110287826B (en) Video target detection method based on attention mechanism
US7756296B2 (en) Method for tracking objects in videos using forward and backward tracking
US8331650B2 (en) Methods, systems and apparatus for defect detection
Zhou et al. On detecting road regions in a single UAV image
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN109448015B (en) Image collaborative segmentation method based on saliency map fusion
CN106815323B (en) Cross-domain visual retrieval method based on significance detection
CN111862119A (en) Semantic information extraction method based on Mask-RCNN
CN111401293B (en) Gesture recognition method based on Head lightweight Mask scanning R-CNN
CN109035300B (en) Target tracking method based on depth feature and average peak correlation energy
CN111523463B (en) Target tracking method and training method based on matching-regression network
CN111553414A (en) In-vehicle lost object detection method based on improved Faster R-CNN
Chen et al. Dr-tanet: Dynamic receptive temporal attention network for street scene change detection
CN111862115A (en) Mask RCNN-based remote sensing image segmentation method
CN117252904B (en) Target tracking method and system based on long-range space perception and channel enhancement
CN113298146A (en) Image matching method, device, equipment and medium based on feature detection
CN113269236B (en) Assembly body change detection method, device and medium based on multi-model integration
Zhu et al. Scene text relocation with guidance
CN117011381A (en) Real-time surgical instrument pose estimation method and system based on deep learning and stereoscopic vision
CN114743045B (en) Small sample target detection method based on double-branch area suggestion network
CN116258937A (en) Small sample segmentation method, device, terminal and medium based on attention mechanism

Legal Events

Code — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant