CN117496109A - Image comparison and analysis method and device, electronic equipment and storage medium


Info

Publication number
CN117496109A
CN117496109A (application CN202311233786.7A)
Authority
CN
China
Prior art keywords
image
detection result
images
main body
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311233786.7A
Other languages
Chinese (zh)
Inventor
方晶
杨张震
谭旭星
曹佳磊
张洪岭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhongke Zhongtian Technology Co ltd
Original Assignee
Shenzhen Zhongke Zhongtian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhongke Zhongtian Technology Co ltd filed Critical Shenzhen Zhongke Zhongtian Technology Co ltd
Priority to CN202311233786.7A priority Critical patent/CN117496109A/en
Publication of CN117496109A publication Critical patent/CN117496109A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image comparison and analysis method and device, an electronic device and a storage medium, belonging to the technical field of image comparison and analysis. The method comprises the following steps: acquiring a first image and a second image to be compared, and preprocessing the images; extracting image features of a plurality of images to be identified by using a pre-trained feature extraction network model to obtain a plurality of image features; and determining whether the first image is similar to the second image according to the similarities corresponding to the plurality of first images. The method, the device, the electronic device and the storage medium are reasonably structured, improve the efficiency of determining the offset of each part of an image, realize multiple kinds of image detection, facilitate later image processing, and do not require extensive training when a new image is added, thereby reducing cost.

Description

Image comparison and analysis method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image comparison and analysis, in particular to an image comparison and analysis method, an image comparison and analysis device, electronic equipment and a storage medium.
Background
In the related art, one approach to identifying similar images is to compute graphic similarity with edit distances, such as the Hamming distance or the Levenshtein distance. During pattern recognition, the pattern is typically denoised and binarized, the binary pattern information is searched by traversal, and the current pattern is considered similar to the target pattern when the similarity computed by the similarity algorithm meets a threshold.
At present, software for accurately comparing images exists on the market, but it determines the offset of each part of an image by shifting the image pixel by pixel and continually adjusting the matching tolerance, which is inefficient. Moreover, the target detection result of each target to be detected is extracted separately, so the image analysis result is single and subsequent image processing is inconvenient, and every newly added image requires extensive training, which is costly.
Disclosure of Invention
The invention aims to provide an image comparison and analysis method and device, an electronic device and a storage medium to solve the problems raised in the background art: the target detection result of each target to be detected is extracted separately, the image analysis result is single, subsequent image processing is inconvenient, and every newly added image requires extensive training, which is costly.
In order to achieve the above purpose, the present invention provides the following technical solutions: image comparison and analysis method, device, electronic equipment and storage medium, comprising:
acquiring a first image and a second image to be compared, and preprocessing the images;
extracting image features of a plurality of images to be identified by using a pre-trained feature extraction network model to obtain a plurality of image features;
determining whether the first image is similar to the second image according to the similarities corresponding to the plurality of first images;
and merging the multiple target detection results according to a preset merging rule to obtain a target merging detection result.
Preferably, a first image size in the first image and a second image size in the second image are extracted; the first image subject and the second image subject are the same size, and the first image and the second image are each cropped according to a preset equal-division cropping rule.
Preferably, the image is divided into a plurality of mutually non-overlapping regions, in each of which a certain characteristic or feature is the same or similar, while the image features of different regions differ significantly; that is, the characteristics change gently and relatively uniformly within a region and change sharply at region boundaries. Given a certain consistency (uniformity) attribute criterion (measure) P, the process of correctly dividing the image into a set of mutually non-overlapping regions {O1, O2, …, On} is called segmentation.
Preferably, the measure used for image segmentation is not unique; it depends on the scene image and the purpose of the application. Scene image characteristic information usable for image segmentation includes luminance, color, texture, structure, temperature, spectrum, motion, shape, position, gradient, model, and the like.
Preferably, the sample pixels of an identified class are those located in the training areas: the analyst selects several training areas for each class in the image, the computer calculates statistics or other information for each training sample area, and each pixel is compared with the training samples and assigned to the most similar sample class according to different rules.
Preferably, the target detection results included in each detection result group are combined according to their positional relationships to obtain an intra-group subject detection result and an intra-group component detection result after the targets in each group are combined, and the intra-group subject detection result and the intra-group component detection result of each target subject are combined respectively to obtain the intra-group combined detection result of that target subject.
The device comprises:
an image acquisition module, configured to acquire the first image and the second image to be compared;
a feature extraction module, configured to extract the image features of the plurality of images to be identified by using the feature extraction network model;
an image detection module, configured to detect and determine whether the first image is similar to the second image;
and an image merging module, configured to merge the plurality of target detection results to obtain the target merging detection result.
The electronic device comprises:
at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can perform the method of any one of claims 1-6.
The computer instructions are used to cause the computer to perform the method of any one of claims 1-6.
Compared with the prior art, the invention has the following beneficial effects. In the image comparison and analysis method and device, electronic device and storage medium, the first image size in the first image and the second image size in the second image are extracted, and the first image and the second image are each cropped according to a preset equal-division cropping rule; the image array is decomposed into a plurality of mutually non-overlapping regions in which certain characteristics or features are the same or similar; each pixel is compared with the training samples and assigned to the most similar sample class according to different rules; and the target detection results included in each detection result group are combined to obtain, for each group, an intra-group subject detection result and an intra-group component detection result after the targets in the group are combined, from which the intra-group combined detection result of each target subject is obtained.
Drawings
FIG. 1 is a flowchart of an image comparison and analysis method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image comparison device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
referring to fig. 1, the present invention provides a technical solution: image comparison and analysis method, device, electronic equipment and storage medium, comprising:
acquiring a first image and a second image to be compared, and preprocessing the images;
extracting image features of a plurality of images to be identified by using a pre-trained feature extraction network model to obtain a plurality of image features;
determining whether the first image is similar to the second image according to the similarities corresponding to the plurality of first images;
and merging the multiple target detection results according to a preset merging rule to obtain a target merging detection result.
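As an illustrative, non-limiting example of these steps, the following minimal sketch assumes a ResNet-50 backbone from torchvision as the pre-trained feature extraction network and a cosine-similarity threshold of 0.85; both the backbone and the threshold value are assumptions made only for illustration and are not prescribed by this disclosure.

import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
# Remove the classification head so the network outputs a 2048-dimensional feature vector.
backbone = torch.nn.Sequential(*list(resnet50(weights=weights).children())[:-1]).eval()
preprocess = weights.transforms()  # resize, crop and normalize: the preprocessing step

def extract_feature(path):
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        feature = backbone(preprocess(image).unsqueeze(0))
    return feature.flatten()

def images_similar(path_a, path_b, threshold=0.85):
    feat_a, feat_b = extract_feature(path_a), extract_feature(path_b)
    similarity = torch.nn.functional.cosine_similarity(feat_a, feat_b, dim=0).item()
    return similarity >= threshold  # similar when the similarity meets the threshold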
A first image size in the first image and a second image size in the second image are extracted; the first image subject and the second image subject are the same size, and the first image and the second image are each cropped according to a preset equal-division cropping rule.
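A sketch of the equal-division cropping step follows; the 3x3 grid, the use of OpenCV and the file names are illustrative assumptions rather than requirements of the method.

import cv2

def equal_crop(image, rows=3, cols=3):
    # Split the image into rows*cols equally sized, non-overlapping tiles.
    height, width = image.shape[:2]
    tile_h, tile_w = height // rows, width // cols
    return [image[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
            for r in range(rows) for c in range(cols)]

# Both images are cut with the same grid so corresponding tiles can be compared.
first_tiles = equal_crop(cv2.imread("first.png"))
second_tiles = equal_crop(cv2.imread("second.png"))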
Image segmentation: the image array is divided into a plurality of mutually non-overlapping regions, in each of which a certain characteristic or feature is the same or similar, while the image features of different regions differ significantly; that is, the characteristics change gently and relatively uniformly within a region and change sharply at region boundaries. Given a certain consistency (uniformity) attribute criterion (measure) P, the process of correctly dividing the image into a set of mutually non-overlapping regions {O1, O2, …, On} is called segmentation.
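The following sketch illustrates the segmentation idea with one possible uniformity criterion P, namely the gray value relative to a global Otsu threshold; the choice of criterion and of the scipy/scikit-image functions is an assumption for illustration only.

from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_regions(gray):
    # Criterion P: pixels on the same side of the threshold belong to a uniform region.
    threshold = threshold_otsu(gray)
    mask = gray > threshold
    # Connected-component labeling yields mutually non-overlapping regions O1..On.
    labels, n_regions = ndimage.label(mask)
    return labels, n_regions  # labels[i, j] in {0, ..., n_regions}, 0 is background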
The measure used for image segmentation is not unique; it depends on the scene image and the purpose of the application. Scene image characteristic information usable for image segmentation includes luminance, color, texture, structure, temperature, spectrum, motion, shape, position, gradient, model, and the like.
The sample pixels of an identified class are those located in the training areas: the analyst selects several training areas for each class, the computer calculates statistics or other information for each training area, and each pixel is compared with the training samples and assigned to the most similar sample class according to different rules.
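The classification described above can be sketched as a minimum-distance classifier; the per-pixel feature array, the boolean training-area masks and the nearest-mean rule are assumptions used only to make the example concrete.

import numpy as np

def classify_pixels(features, training_masks):
    # features: (H, W, D) per-pixel feature array; training_masks: class name -> boolean (H, W) mask.
    class_names = list(training_masks)
    # Statistics of each training area: here simply the mean feature vector of its pixels.
    means = np.stack([features[training_masks[name]].mean(axis=0) for name in class_names])
    flat = features.reshape(-1, features.shape[-1])
    # Compare every pixel with every class and assign it to the most similar one.
    distances = np.linalg.norm(flat[:, None, :] - means[None, :, :], axis=2)
    return distances.argmin(axis=1).reshape(features.shape[:2])  # per-pixel class index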
The target detection results included in each detection result group are combined according to their positional relationships to obtain an intra-group subject detection result and an intra-group component detection result after the targets in each group are combined, and the intra-group subject detection result and the intra-group component detection result of each target subject are combined respectively to obtain the intra-group combined detection result of that target subject.
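A sketch of the merging rule follows; the dictionary layout of a detection and the "component centre inside subject box" positional relation are assumptions introduced only to make the example concrete.

def merge_group(detections):
    # Each detection: {"label": str, "box": (x1, y1, x2, y2), "kind": "subject" or "component"}.
    subjects = [d for d in detections if d["kind"] == "subject"]
    components = [d for d in detections if d["kind"] == "component"]
    merged = [{"subject": s, "components": []} for s in subjects]
    for comp in components:
        x1, y1, x2, y2 = comp["box"]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        for entry in merged:
            sx1, sy1, sx2, sy2 = entry["subject"]["box"]
            # Positional relation: the component centre lies inside the subject box.
            if sx1 <= cx <= sx2 and sy1 <= cy <= sy2:
                entry["components"].append(comp)
                break
    return merged  # intra-group combined detection result, one entry per target subject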
The first image size in the first image and the second image size in the second image are extracted, and the first image and the second image are each cropped according to a preset equal-division cropping rule; the image array is decomposed into a plurality of mutually non-overlapping regions in which certain characteristics or features are the same or similar; each pixel is compared with the training samples and assigned to the most similar sample class according to different rules; and the target detection results included in each detection result group are combined to obtain, for each group, an intra-group subject detection result and an intra-group component detection result, which are combined for each target subject to obtain its intra-group combined detection result. In this way, the offset of each part of the image can be determined efficiently, multiple kinds of image detection are realized, later image processing is facilitated, and extensive retraining is not required when a new image is added, which reduces cost.
Embodiment two:
referring to fig. 1-2, the present invention provides a technical solution: image comparison and analysis method, device, electronic equipment and storage medium, comprising:
acquiring a first image and a second image to be compared, and preprocessing the images;
extracting image features of a plurality of images to be identified by using a pre-trained feature extraction network model to obtain a plurality of image features;
determining whether the first image is similar to the second image according to the similarities corresponding to the plurality of first images;
and merging the multiple target detection results according to a preset merging rule to obtain a target merging detection result.
A first image size in the first image and a second image size in the second image are extracted; the first image subject and the second image subject are the same size, and the first image and the second image are each cropped according to a preset equal-division cropping rule.
Image segmentation: the image array is divided into a plurality of mutually non-overlapping regions, in each of which a certain characteristic or feature is the same or similar, while the image features of different regions differ significantly; that is, the characteristics change gently and relatively uniformly within a region and change sharply at region boundaries. Given a certain consistency (uniformity) attribute criterion (measure) P, the process of correctly dividing the image into a set of mutually non-overlapping regions {O1, O2, …, On} is called segmentation.
The measure used for image segmentation is not unique; it depends on the scene image and the purpose of the application. Scene image characteristic information usable for image segmentation includes luminance, color, texture, structure, temperature, spectrum, motion, shape, position, gradient, model, and the like.
The sample pixels of an identified class are those located in the training areas: the analyst selects several training areas for each class, the computer calculates statistics or other information for each training area, and each pixel is compared with the training samples and assigned to the most similar sample class according to different rules.
The target detection results included in each detection result group are combined according to their positional relationships to obtain an intra-group subject detection result and an intra-group component detection result after the targets in each group are combined, and the intra-group subject detection result and the intra-group component detection result of each target subject are combined respectively to obtain the intra-group combined detection result of that target subject.
The device comprises:
an image acquisition module, configured to acquire the first image and the second image to be compared;
a feature extraction module, configured to extract the image features of the plurality of images to be identified by using the feature extraction network model;
an image detection module, configured to detect and determine whether the first image is similar to the second image;
and an image merging module, configured to merge the plurality of target detection results to obtain the target merging detection result.
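The cooperation of the four modules can be sketched as follows; the class and method names are illustrative assumptions and are not prescribed by the device described above.

class ImageComparisonDevice:
    def __init__(self, acquire, extract, detect, merge):
        self.acquire = acquire    # image acquisition module
        self.extract = extract    # feature extraction module
        self.detect = detect      # image detection module
        self.merge = merge        # image merging module

    def compare(self, source_a, source_b):
        first, second = self.acquire(source_a), self.acquire(source_b)
        features = [self.extract(image) for image in (first, second)]
        detections = self.detect(*features)   # per-target detection results
        return self.merge(detections)         # target merging detection result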
The electronic device comprises:
at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can perform the method of any one of claims 1-6.
The computer instructions are used to cause the computer to perform the method of any one of claims 1-6.
The image is converted to grayscale to obtain a gray-level image; the edge position information of the primitives in the image is determined from the changes in the gray values of the pixels in the gray-level image; the primitive regions in the image are determined from the edge position information with a connected-domain labeling algorithm; the image features of the plurality of images to be identified are extracted with the pre-trained feature extraction network model to obtain the plurality of image features; and it is judged whether the total number of occurrences of an interference target and its number of consecutive occurrences in a plurality of detection results meet preset conditions, the preset conditions being that the total number of occurrences is greater than a first preset threshold or that the number of consecutive occurrences is greater than a second preset threshold, where the first preset threshold is greater than the second preset threshold.
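The following sketch illustrates this processing; the Canny edge detector, the scipy connected-domain labeling and the threshold values (5 total occurrences, 3 consecutive occurrences) are assumptions chosen only so that the first preset threshold is greater than the second.

import cv2
from scipy import ndimage

def primitive_regions(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # graying: obtain the gray-level image
    edges = cv2.Canny(gray, 50, 150)                      # edge positions from gray-value changes
    labels, n_regions = ndimage.label(edges > 0)          # connected-domain labeling of the primitives
    return labels, n_regions

def interference_confirmed(flags, total_threshold=5, consecutive_threshold=3):
    # flags[i] is True if the interference target appears in the i-th detection result.
    total = sum(flags)
    longest_run, run = 0, 0
    for appeared in flags:
        run = run + 1 if appeared else 0
        longest_run = max(longest_run, run)
    # Condition met when the total count exceeds the first threshold or the
    # consecutive count exceeds the second (smaller) threshold.
    return total > total_threshold or longest_run > consecutive_threshold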
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although the invention has been described hereinabove with reference to embodiments, various modifications thereof may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In particular, the features of the disclosed embodiments may be combined with each other in any manner so long as there is no structural conflict, and the exhaustive description of these combinations is not given in this specification merely for the sake of brevity and resource saving. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (9)

1. An image comparison and analysis method, characterized in that it comprises the following steps:
acquiring a first image and a second image to be compared, and preprocessing the images;
extracting image features of a plurality of images to be identified by using a pre-trained feature extraction network model to obtain a plurality of image features;
determining whether the first image is similar to the second image according to the similarities corresponding to the plurality of first images;
and merging the multiple target detection results according to a preset merging rule to obtain a target merging detection result.
2. An image comparison and analysis method according to claim 1, wherein: a first image size in the first image and a second image size in the second image are extracted; the first image subject and the second image subject are the same size, and the first image and the second image are each cropped according to a preset equal-division cropping rule.
3. An image comparison and analysis method according to claim 1, wherein: in image segmentation, the image array is divided into a plurality of mutually non-overlapping regions, in each of which a certain characteristic or feature is the same or similar, while the image features of different regions differ significantly; that is, the characteristics change gently and relatively uniformly within a region and change sharply at region boundaries; given a certain consistency (uniformity) attribute criterion (measure) P, the process of correctly dividing the image into a set of mutually non-overlapping regions {O1, O2, …, On} is called segmentation.
4. An image comparison and analysis method according to claim 1, wherein: the measure used for image segmentation is not unique and depends on the scene image and the purpose of the application, and scene image characteristic information usable for image segmentation includes luminance, color, texture, structure, temperature, spectrum, motion, shape, position, gradient, model, and the like.
5. An image comparison and analysis method according to claim 1, wherein: the sample pixels of an identified class are those located in the training areas; the analyst selects several training areas for each class, the computer calculates statistics or other information for each training area, and each pixel is compared with the training samples and assigned to the most similar sample class according to different rules.
6. An image comparison and analysis method according to claim 1, wherein: the target detection results included in each detection result group are combined according to their positional relationships to obtain an intra-group subject detection result and an intra-group component detection result after the targets in each group are combined, and the intra-group subject detection result and the intra-group component detection result of each target subject are combined respectively to obtain the intra-group combined detection result of that target subject.
7. An image comparison device, characterized in that it comprises:
an image acquisition module, configured to acquire the first image and the second image to be compared;
a feature extraction module, configured to extract the image features of the plurality of images to be identified by using the feature extraction network model;
an image detection module, configured to detect and determine whether the first image is similar to the second image;
and an image merging module, configured to merge the plurality of target detection results to obtain the target merging detection result.
8. An electronic device and a storage medium, characterized by comprising:
at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can perform the method of any one of claims 1-6.
9. A computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method of any one of claims 1-6.
CN202311233786.7A 2023-09-23 2023-09-23 Image comparison and analysis method and device, electronic equipment and storage medium Pending CN117496109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311233786.7A CN117496109A (en) 2023-09-23 2023-09-23 Image comparison and analysis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311233786.7A CN117496109A (en) 2023-09-23 2023-09-23 Image comparison and analysis method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117496109A true CN117496109A (en) 2024-02-02

Family

ID=89667907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311233786.7A Pending CN117496109A (en) 2023-09-23 2023-09-23 Image comparison and analysis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117496109A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935068A (en) * 2024-03-25 2024-04-26 中国平安财产保险股份有限公司四川分公司 Crop disease analysis method and analysis system
CN117935068B (en) * 2024-03-25 2024-05-24 中国平安财产保险股份有限公司四川分公司 Crop disease analysis method and analysis system

Similar Documents

Publication Publication Date Title
WO2020156361A1 (en) Training sample obtaining method and apparatus, electronic device and storage medium
Jia et al. Degraded document image binarization using structural symmetry of strokes
CN109461148A (en) Steel rail defect based on two-dimentional Otsu divides adaptive fast algorithm
CN108898132B (en) Terahertz image dangerous article identification method based on shape context description
CN110415208B (en) Self-adaptive target detection method and device, equipment and storage medium thereof
Niu et al. Research and analysis of threshold segmentation algorithms in image processing
US8983199B2 (en) Apparatus and method for generating image feature data
CN110288566B (en) Target defect extraction method
CN115393657B (en) Metal pipe production abnormity identification method based on image processing
CN117496109A (en) Image comparison and analysis method and device, electronic equipment and storage medium
Lech et al. Optimization of the fast image binarization method based on the Monte Carlo approach
Widyantara et al. Gamma correction-based image enhancement and canny edge detection for shoreline extraction from coastal imagery
CN108764343B (en) Method for positioning tracking target frame in tracking algorithm
CN110175257B (en) Method for matching line manuscript images, electronic equipment and storage medium
Tabatabaei et al. A novel method for binarization of badly illuminated document images
CN109544614B (en) Method for identifying matched image pair based on image low-frequency information similarity
Yang et al. The improvement of Bernsen binarization algorithm for QR Code image
Wang et al. SVD of shot boundary detection based on accumulative difference
CN114764810A (en) Medical image segmentation method
CN111489371A (en) Image segmentation method for scene with histogram approximate to unimodal distribution
CN107451574B (en) Motion estimation method based on Haar-like visual feature perception
Tian et al. A new algorithm for license plate localization in open environment using color pair and stroke width features of character
Zhang et al. Moving cast shadow detection based on regional growth
Qiang et al. An Infrared Small Target Fast Detection Algorithm in the Sky Based on Human Visual System
CN117037049B (en) Image content detection method and system based on YOLOv5 deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination