CN117853754A - Image processing method and device


Info

Publication number
CN117853754A
Authority
CN
China
Prior art keywords
image
texture
transformation
data
article
Prior art date
Legal status
Pending
Application number
CN202410189331.8A
Other languages
Chinese (zh)
Inventor
王萌
Current Assignee
Ant Yunchuang Digital Technology Beijing Co ltd
Original Assignee
Ant Yunchuang Digital Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Ant Yunchuang Digital Technology Beijing Co ltd
Priority to CN202410189331.8A
Publication of CN117853754A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of this specification provide an image processing method and device. The image processing method includes: in the process of detecting an article to be detected, performing image coordinate transformation on an article image of the article to obtain a transformed image; performing data merging on the image data of the image pairs formed by each preset article image and the transformed image to obtain merged image data for each image pair; determining a texture matching strategy for each image pair from the image parameters of the article image, and performing texture matching on the merged image data according to that strategy to obtain a texture matching index for each image pair; scoring the texture matching indexes according to the transformation coordinates corresponding to the image coordinate transformation, i.e. correcting the texture matching indexes; and determining the detection result of the article to be detected based on the scoring result.

Description

Image processing method and device
Technical Field
The present document relates to the field of data processing technologies, and in particular, to an image processing method and apparatus.
Background
With the continuous development of Internet technology and the rising living standard of users, users purchase a wide variety of articles online and offline. After owning an article for some time, a user may need to have it inspected, for example to check the quality of a purchased article or its authenticity. Because it is difficult for users to verify quality or authenticity with the naked eye, purchased articles can instead be authenticated in an automated manner, which in turn places higher demands on the party authenticating the articles.
Disclosure of Invention
One or more embodiments of the present specification provide an image processing method, including: performing image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquiring a plurality of preset article images; performing data merging on the image data of the image pairs formed by each preset article image and the transformed image to obtain merged image data of each image pair; determining a texture matching strategy for each image pair based on image parameters of the article image, and performing texture matching on the merged image data according to the texture matching strategy to obtain a texture matching index for each image pair; and scoring the texture matching indexes according to the transformation coordinates corresponding to the image coordinate transformation, and determining a detection result of the article to be detected based on the scoring result.
One or more embodiments of the present specification provide an image processing apparatus, including: an image transformation module configured to perform image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and to acquire a plurality of preset article images; a data merging module configured to perform data merging on the image data of the image pairs formed by each preset article image and the transformed image to obtain merged image data of each image pair; a texture matching module configured to determine a texture matching strategy for each image pair based on image parameters of the article image, and to perform texture matching on the merged image data according to the texture matching strategy to obtain a texture matching index for each image pair; and a scoring module configured to score the texture matching indexes according to the transformation coordinates corresponding to the image coordinate transformation, and to determine a detection result of the article to be detected based on the scoring result.
One or more embodiments of the present specification provide an image processing device, including: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to: perform image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquire a plurality of preset article images; perform data merging on the image data of the image pairs formed by each preset article image and the transformed image to obtain merged image data of each image pair; determine a texture matching strategy for each image pair based on image parameters of the article image, and perform texture matching on the merged image data according to the texture matching strategy to obtain a texture matching index for each image pair; and score the texture matching indexes according to the transformation coordinates corresponding to the image coordinate transformation, and determine a detection result of the article to be detected based on the scoring result.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed by a processor: perform image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquire a plurality of preset article images; perform data merging on the image data of the image pairs formed by each preset article image and the transformed image to obtain merged image data of each image pair; determine a texture matching strategy for each image pair based on image parameters of the article image, and perform texture matching on the merged image data according to the texture matching strategy to obtain a texture matching index for each image pair; and score the texture matching indexes according to the transformation coordinates corresponding to the image coordinate transformation, and determine a detection result of the article to be detected based on the scoring result.
Drawings
For a clearer description of one or more embodiments of the present specification or of prior-art solutions, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments of the present specification; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an environment in which an image processing method according to one or more embodiments of the present disclosure is implemented;
FIG. 2 is a process flow diagram of an image processing method according to one or more embodiments of the present disclosure;
FIG. 3A is a schematic illustration of an article image of an article to be inspected according to one or more embodiments of the present disclosure;
FIG. 3B is a schematic diagram of a transformed image of an article to be inspected according to one or more embodiments of the present disclosure;
FIG. 4 is a process flow diagram of an image processing method for a first image detection scene according to one or more embodiments of the present disclosure;
FIG. 5 is a process flow diagram of an image processing method for a second image detection scenario according to one or more embodiments of the present disclosure;
FIG. 6 is a schematic diagram of an embodiment of an image processing apparatus according to one or more embodiments of the present disclosure;
FIG. 7 is a schematic structural diagram of an image processing device according to one or more embodiments of the present disclosure.
Detailed Description
To enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, those technical solutions will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present specification. All other embodiments obtained from one or more embodiments of the present specification without inventive effort shall fall within the scope of protection of the present disclosure.
The image processing method provided in one or more embodiments of the present disclosure may be applied to an implementation environment of image processing, and referring to fig. 1, the implementation environment includes at least:
the server 101 for image detection of the object to be detected, and the terminal device 102 for acquiring and uploading the object image of the object to be detected.
The server 101 may be a server, or a server cluster formed by a plurality of servers, or one or more cloud servers in a cloud computing platform;
the terminal device 102 may be a mobile phone, a personal computer, a tablet computer, an electronic book reader, a wearable device, an AR (Augmented Reality) augmented Reality)/VR (Virtual Reality) -based device for information interaction, a laptop portable computer, or the like, and may be provided with an application program or a browser, through which an item image of an item to be detected is submitted to the server 101.
In this implementation environment, the server 101 performs image coordinate transformation on the article image uploaded by the terminal device 102 to obtain a transformed image, and merges the image data of each preset article image with the image data of the transformed image to obtain merged image data for each image pair formed by a preset article image and the transformed image. It then determines a texture matching policy for each image pair from the image parameters of the article image, performs texture matching on the merged image data according to that policy to obtain a texture matching index for each image pair, scores the texture matching indexes in combination with the transformation coordinates corresponding to the image coordinate transformation, and determines the detection result of the article to be detected based on the scoring result.
One or more embodiments of an image processing method provided in the present specification are as follows:
referring to fig. 2, the image processing method provided in the present embodiment specifically includes steps S202 to S208.
Step S202, performing image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquiring a plurality of preset article images.
The article to be detected in this embodiment may be anything for which there is a detection requirement, for example an object such as a ceramic product, or an organism such as a plant or an animal; it may also be a geographic marker, that is, a landmark to be detected. Detection requirements include quality detection, authenticity detection, article-category detection, factory-information detection, and the like. The article to be detected may have any shape and any color. The article image of the article to be detected includes an image captured by invoking the camera of the terminal device.
The transformed image is the image obtained after image coordinate transformation of the article image; that is, the image coordinates of the transformed image may differ from those of the article image. Optionally, the image coordinate transformation includes image angle deflection. The number of transformation coordinates corresponding to the image coordinate transformation, i.e. the number of deflection angles n, may be one or more: the article image of the article to be detected is deflected at intervals of 360/n degrees to obtain transformed images B1 to Bn with different deflection angles, i.e. different transformation coordinates. Compare the article image of the article to be detected shown in FIG. 3A with the transformed image shown in FIG. 3B; it will be appreciated that FIGS. 3A and 3B are merely illustrative.
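To make the deflection step concrete, the following sketch rotates an article image at 360/n degree intervals. It is only a minimal illustration assuming OpenCV is available; the function and variable names (and the file path) are not taken from the patent.

```python
import cv2
import numpy as np

def deflect_image(image: np.ndarray, n: int) -> list[np.ndarray]:
    """Rotate `image` at 360/n degree intervals, yielding n transformed images B1..Bn."""
    h, w = image.shape[:2]
    center = (w / 2, h / 2)
    transformed = []
    for k in range(n):
        angle = k * 360.0 / n  # deflection angle of the k-th transformed image
        rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
        transformed.append(cv2.warpAffine(image, rotation, (w, h)))
    return transformed

item_image = cv2.imread("item.jpg")      # article image of the article to be detected
b_images = deflect_image(item_image, 8)  # eight transformed images at 45-degree steps
```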
The plurality of preset article images are article images prepared in advance. Each preset article image may or may not contain the article to be detected; for example, the preset article images may cover a number of different article categories.
In practice, two images are often compared by feature alignment; in a biometric scenario, for example, key parts such as the eyes, nose tip, and mouth corners are aligned before features are compared. Feature alignment is cumbersome, however, and in some scenarios it is simply impossible: a perfectly circular article with no angular marker on its surface cannot be aligned for detection. To improve the efficiency of image comparison and simplify the comparison process, the article image of the article to be detected can instead be transformed in image coordinates to obtain transformed images, which are then compared with a plurality of preset article images; this removes the feature-alignment step and improves comparison efficiency.
In an actual application scenario, different color types in the article image and/or different complexities of the article's shape may call for different numbers of transformation coordinates. To improve both matching efficiency and matching accuracy, the number of transformation coordinates for the image coordinate transformation can therefore be determined from the color types of the article image and/or the shape of the article to be detected in the article image. In an optional implementation provided in this embodiment, before the image coordinate transformation is performed on the article image and the plurality of preset article images are acquired, the following operations are further performed:
determining the color type of the article image and/or the article shape of the article to be detected in the article image;
and determining the transformation coordinate number of the image coordinate transformation according to the color type and/or the object shape.
The color type of the article image refers to the kinds of color the image contains. The shape of the article to be detected in the article image may be circular, square, rectangular and/or irregular, among other shapes. The number of transformation coordinates of the image coordinate transformation is the number of coordinates at which the transformation is performed; for example, if the number of transformation coordinates is n, then n transformed images are produced.
Specifically, the count of color types of the article image and/or the complexity of the shape of the article to be detected can be determined, and the number of transformation coordinates derived from them; when doing so, the number of transformation coordinates mapped to that type count and/or shape complexity can be looked up in a count mapping table.
In addition, the number of transformation coordinates may be obtained by a random algorithm, for example by inputting the article image of the article to be detected into the random algorithm to calculate the number of transformation coordinates.
Moreover, before the image coordinate transformation is performed and the preset article images are acquired, the number of transformation coordinates may also be determined from the image resolution of the article image; more generally, it can be determined from the color types of the article image, the shape of the article to be detected in the article image, and/or the image resolution of the article image. Optionally, the image resolution, the count of color types, and the complexity of the article's shape are each positively correlated with the number of transformation coordinates.
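The count-mapping lookup described above might be read roughly as follows; the bucket thresholds and table values are purely illustrative assumptions.

```python
# (color-type bucket, shape-complexity bucket) -> number of transformation coordinates n
COUNT_MAPPING_TABLE = {
    ("few", "simple"): 4,
    ("few", "complex"): 8,
    ("many", "simple"): 8,
    ("many", "complex"): 16,
}

def transform_coordinate_count(num_color_types: int, shape_is_complex: bool) -> int:
    color_bucket = "many" if num_color_types > 8 else "few"  # assumed threshold
    shape_bucket = "complex" if shape_is_complex else "simple"
    return COUNT_MAPPING_TABLE[(color_bucket, shape_bucket)]
```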
Optionally, the object image of the object to be detected is acquired based on an image acquisition page; step S202, carrying out image coordinate transformation on an article image of an article to be detected to obtain a transformed image, acquiring a plurality of preset article images, and executing after detecting an acquisition instruction submitted through an image acquisition page.
On the basis of the above-mentioned determination of the number of transformation coordinates of the image coordinate transformation, in the execution process of step S202, the image coordinate transformation may be performed on the article image of the article to be detected according to the number of transformation coordinates to obtain a transformed image, and a plurality of preset article images may be obtained.
It should be noted that the image of the article to be detected may also be compared directly with the preset article images, without image coordinate transformation, to make image comparison more convenient; step S202 may accordingly be replaced by acquiring the article image of the article to be detected and a plurality of preset article images, which forms a new implementation together with the other processing steps of this embodiment.
Alternatively, step S202 may be replaced by deflecting the article image through one angle to obtain a deflected image and acquiring a plurality of preset article images, or by deflecting the article image through multiple angles to obtain a plurality of deflected images and acquiring a plurality of preset article images; either variant forms a new implementation together with the other processing steps of this embodiment.
Step S204, carrying out data merging processing on the image data of the image pairs formed by the preset object images and the transformation images to obtain merged image data of the image pairs.
In the step, in order to improve the convenience of data processing, the image data of the image pairs formed by each preset article image and the converted image are subjected to data merging processing to obtain merged image data of each image pair.
The image pair formed by each preset article image and the transformed image refers to the pair consisting of that preset article image and the transformed image. For example, if the preset article images include images A1 and A2 and the transformed image is B, the image pairs are <A1, B> and <A2, B>. The image data of an image pair includes the image data of each image in the pair; the image data of each image includes pixel coordinate data and/or channel data, and may take the form of an image matrix. For the pair <A1, B>, the image data includes the H×W×C1 data of image A1 and the H×W×C2 data of transformed image B, where H denotes height, W width, and C the number of image channels.
The merged image data of each image pair may be merged data comprising the pixel coordinate data and/or channel data of the pair. For example, with image data H×W×C1 for A1 and H×W×C2 for B, the merged image data of the pair <A1, B> is H×W×(C1+C2). Alternatively, the merged image data of each image pair may consist of the channel data of the pair alone.
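Read as channel-wise concatenation, the merging step has a direct numpy analogue; the sketch below assumes both images already share the same height and width, and the names are illustrative.

```python
import numpy as np

def merge_image_pair(preset: np.ndarray, transformed: np.ndarray) -> np.ndarray:
    """Concatenate an HxWxC1 preset image with an HxWxC2 transformed image
    along the channel axis, giving HxWx(C1+C2) merged image data."""
    assert preset.shape[:2] == transformed.shape[:2], "images must share H and W"
    return np.concatenate([preset, transformed], axis=2)

a1 = np.zeros((224, 224, 3), dtype=np.uint8)  # preset article image A1 (placeholder)
b = np.zeros((224, 224, 3), dtype=np.uint8)   # transformed image B (placeholder)
merged = merge_image_pair(a1, b)              # shape (224, 224, 6)
```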
In practical applications the channel data of each image pair may matter most. To make image detection more convenient, and to keep data other than channel data from interfering with detection and lowering its accuracy, the first channel data in the image data of each preset article image can be merged with the second channel data in the image data of the transformed image to obtain merged channel data for each image pair. In a first optional implementation provided in this embodiment, the data merging of the image data of each image pair proceeds as follows:
Extracting first channel data in the image data of each preset article image and extracting second channel data in the image data of the transformation image;
and carrying out channel data combination on the first channel data and the second channel data to obtain combined channel data of each image pair.
The first channel data comprises first channel data composed of pixel values of each pixel point in each preset article image, wherein the pixel values can be color values, such as R (Red), G (Green), B (Blue) values or gray values; the second channel data includes second channel data composed of pixel values of each pixel point in the transformed image.
Specifically, first color channel data in image data of each preset article image can be extracted, second color channel data in image data of the converted image can be extracted, and channel data combination is carried out on the first color channel data and the second color channel data, so that combined channel data of each image pair are obtained.
In addition, subsequent image detection may be performed by a model, and the model may place requirements or limits on the volume of its input data; compressing the data therefore keeps data transmission to the model convenient. In a second optional implementation provided in this embodiment, the data merging of the image data of each image pair proceeds as follows:
Compressing the image data of each preset article image to obtain first compressed data of each preset article image, and compressing the image data of the converted image to obtain second compressed data of the converted image;
and carrying out data combination on the first compressed data and the second compressed data to obtain the combined image data.
In addition, the image specifications of the preset article images and of the transformed image may differ, which affects detection efficiency. To improve detection efficiency, the data boundaries of the image data of each preset article image and of the transformed image can be adjusted according to the image specification of each preset article image and the position coordinates of the article to be detected in the transformed image, yielding adjustment data for each preset article image and for the transformed image; these adjustment data are then merged to obtain the merged image data. In a third optional implementation provided in this embodiment, the data merging of the image data of each image pair proceeds as follows:
Identifying position coordinates of the object to be detected in the transformed image;
according to the image specification and the position coordinates of each preset article image, carrying out data boundary adjustment on the image data of each preset article image and the image data of the transformation image to obtain adjustment data of each preset article image and adjustment data of the transformation image;
and carrying out data combination on the adjustment data of each preset article image and the adjustment data of the transformation image to obtain the combined image data.
The image specification of each preset article image is its image size. The data boundaries of the adjustment data of each preset article image and of the transformed image may be made the same; the adjustment data are the image data after boundary adjustment, and the data boundary of adjustment data is the boundary information of the image data at its edges.
Specifically, according to the image specification of each preset article image and the position coordinates of the article to be detected in the transformed image, boundary adaptation adjustment is applied to the data boundary of the image data of each preset article image and of the transformed image, yielding the adjustment data of each; boundary adaptation adjustment here means adjusting the data boundaries of the two images to be approximately the same.
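One plausible reading of this boundary adaptation is zero-padding both images to a common size; the sketch below follows that reading and, for brevity, ignores the position-coordinate refinement.

```python
import numpy as np

def adjust_boundaries(preset: np.ndarray, transformed: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Zero-pad both images so their data boundaries (H, W) become the same."""
    h = max(preset.shape[0], transformed.shape[0])
    w = max(preset.shape[1], transformed.shape[1])

    def pad(img: np.ndarray) -> np.ndarray:
        return np.pad(img, ((0, h - img.shape[0]), (0, w - img.shape[1]), (0, 0)))

    return pad(preset), pad(transformed)
```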
In addition, when merging the image data of the image pairs formed by the preset article images and the transformed image, the pixel coordinate data and channel data of each preset article image can be merged with the pixel coordinate data and channel data of the transformed image, giving the merged image data of each pair.
It should be noted that, optionally, in the step S204, the image data of the image pair formed by each preset article image and the transformed image is subjected to data merging processing, so as to obtain merged image data of each image pair, which may be executed after detecting the acquisition instruction submitted through the image acquisition page.
It should be noted that, in this embodiment, the data merging process may not be performed on the image data of each preset article image and the image data of the transformed image, that is, step S204 may be replaced by using the image data of each preset article image and the image data of the transformed image as the merged image data of the image pair formed by each preset article image and the transformed image, and a new implementation manner may be formed with the other processing steps provided in this embodiment.
Step S206, determining a texture matching strategy of each image pair based on the image parameters of the object image, and performing texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair.
In the step, in order to improve pertinence and flexibility of texture matching of images, a texture matching policy of each image pair can be determined based on image parameters of the object images, and then texture matching processing is performed on the combined image data according to the texture matching policy, so as to obtain texture matching indexes of each image pair.
The image parameters of the article image in this embodiment refer to image related parameters of the article image, for example, the image parameters of the article image include image resolution and/or the number of color types of the article image, and in addition, the image parameters of the article image may also be other types of image parameters. The texture matching policy of each image pair refers to a policy of performing texture matching on each image pair, for example, the texture matching policy includes a texture matching area, a texture matching position and/or a texture matching sequence, and for example, the texture matching policy includes a texture extraction mode.
The texture matching index for each image pair includes an index, such as a texture matching degree, that characterizes the matching degree of the image texture in each image pair.
In practical application, different texture matching strategies may exist for each image pair, and the texture matching flexibility of each image pair is improved through the different texture matching strategies, and the following three implementation methods for determining the texture matching strategies of each image pair are provided.
(1) Implementation one
In the implementation, in the process of determining the texture matching strategy of each image pair based on the image parameters of the object image, the texture matching area of each image pair can be calculated according to the image resolution of the object image and the conversion factor of the resolution and the area, or the texture matching position and/or the texture matching sequence of each image pair can be determined based on the object shape of the object to be detected in the transformed image; specifically, in a first optional implementation manner provided in this embodiment, in determining a texture matching policy of each image pair based on an image parameter of an object image, the following operations are performed:
determining the image resolution of the object image, and calculating the texture matching area of each image pair according to the image resolution and the conversion factors of the resolution and the area;
And determining texture matching positions and texture matching sequences of the image pairs based on the object shape of the object to be detected in the transformation image.
The conversion factor of resolution and area is a factor expressing the conversion relation between image resolution and texture matching area, for example a conversion coefficient. The texture matching area of each image pair is the size of the image block used during texture matching: if the texture matching area is s, then in the pair <A1, B> image blocks of area s are selected from A1 and B for texture matching, and likewise for <A2, B>. The texture matching position of each image pair is the position at which texture matching is performed, for example a center position or an edge position, and the texture matching order is the order in which texture matching proceeds. Optionally, the image resolution of the article image is positively correlated with the texture matching area of each image pair: the higher the resolution, the larger the matching area.
Specifically, the texture matching area of each image pair may be calculated from the image resolution of the article image and the conversion factor of resolution and area. If the shape of the article to be detected in the transformed image is the first article shape, the texture matching position of each image pair is set to its center position and the texture matching order to center-to-edge; if the shape is the second article shape, the texture matching position is set to a target edge position among the edge positions and the texture matching order to target-edge-to-center.
The first article shape may be circular, meaning the overall outline of the article to be detected is round; the circular shape includes elliptical shapes. The second article shape may be any non-circular shape, such as a parallelogram, diamond, or rectangle. The target edge position may be an edge position where there is an angle, such as one of the four corners of a parallelogram.
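A compact sketch of implementation one follows: the matching area comes from the resolution times a conversion factor, and the position/order choice depends on the shape. The factor value and the shape labels are illustrative assumptions.

```python
def texture_matching_area(resolution_px: int, conversion_factor: float = 1e-3) -> float:
    # Positive correlation: higher resolution -> larger texture matching area.
    return resolution_px * conversion_factor

def matching_position_and_order(shape: str) -> tuple[str, str]:
    if shape == "circular":              # first article shape (round, incl. elliptical)
        return ("center", "center-to-edge")
    return ("corner", "edge-to-center")  # second article shape, e.g. a parallelogram
```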
In order to improve the convenience and efficiency of texture matching of each image pair, element comparison can be performed on texture matching areas in combined image data, so that the time cost of texture matching is saved; in a first optional implementation manner provided in this embodiment, in a process of performing texture matching processing on combined image data according to a texture matching policy to obtain texture matching indexes of each image pair, the following operations are performed:
determining texture matching areas of the image pairs according to the texture matching areas and the texture matching positions;
and carrying out element comparison on the texture element sequences in the texture matching areas in the merged image data according to the texture matching sequence, and calculating the texture element matching degree of each image pair based on element comparison results.
The texture matching areas of the image pairs comprise image areas for performing texture matching on the image pairs.
Specifically, when comparing the texture element sequences within the texture matching regions of the merged image data in the texture matching order and calculating the texture element matching degree from the comparison result, the texture elements of each image pair inside the matching region can be compared element by element in the matching order. A texture element may be a pixel point, in which case pixel values are compared and the texture element matching degree is a pixel matching degree. Alternatively, the types of texture elements and the count of each type inside the matching region can be compared in the matching order, and the matching degree calculated from that comparison; texture elements here may be divided by color (e.g. white texture elements, red texture elements) or by structure (e.g. circle texture elements, bar texture elements).
For example, the image pair is < A1, B >, the texture matching region is an image region in which A1 and B are texture matched, the texture elements of A1 and B in the texture matching region in the combined image data of the image pair < A1, B > can be element-compared in the texture matching sequence, and the texture element matching degree is calculated based on the element comparison result.
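For the <A1, B> example just given, a pixel-level version of the element comparison could look like this, treating each pixel as one texture element; it is a sketch, not the patent's algorithm.

```python
import numpy as np

def texel_matching_degree(region_a: np.ndarray, region_b: np.ndarray) -> float:
    """Compare the texture elements (pixels) of A1 and B inside the texture
    matching region and return the fraction that match exactly."""
    matches = np.all(region_a == region_b, axis=-1)  # per-pixel element comparison
    return float(matches.mean())                     # matching degree in [0, 1]
```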
In addition, the first alternative embodiment of performing texture matching processing on combined image data may be performed based on the first implementation of determining texture matching policies for each image pair provided above.
(2) Implementation II
In practical application, different texture extraction modes may exist for different articles to be detected, and the diversified requirements of the articles to be detected are met through the different texture extraction modes; in a second alternative implementation manner provided in this embodiment, in determining a texture matching policy of each image pair based on image parameters of an object image, the following operations are performed:
determining candidate texture extraction modes of each image pair according to the number of color types of the object images;
and screening out texture extraction modes from the candidate texture extraction modes based on the image specification of the transformation image.
Wherein the number of color categories of the article image refers to the number of different color categories contained in the article image; the image specification of the converted image refers to the image size of the converted image. The texture extraction method herein refers to a method of extracting texture features.
In the implementation, texture feature extraction is carried out on the combined image data through different texture extraction modes, so that the flexibility of texture matching is improved, and the success rate of the texture matching is further improved; in a second optional implementation manner provided in this embodiment, in a process of performing texture matching processing on combined image data according to a texture matching policy to obtain texture matching indexes of each image pair, the following operations are performed:
extracting respective texture color features, texture structure features and/or texture element specification features of the transformed image and each preset object image from the combined image data according to the texture extraction mode;
and calculating the feature matching degree of the transformation image and each preset object image according to the texture color features, the texture structure features and/or the texture element specification features.
The texture color feature is a feature characterizing the color of the texture. The texture structure feature is a feature characterizing the structure of the texture; it includes texture shape features and the specification feature of each texture shape, and may include other types of structural feature as well. The texture element specification feature characterizes the specification of a texture element, for example the size of a white texture element.
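As one concrete stand-in for these features, a normalized color histogram compared by cosine similarity is sketched below; the actual feature definitions are not given in the text, so both choices are assumptions.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
    # A simple texture-color feature: normalized histogram of pixel values.
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def feature_matching_degree(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    # Cosine similarity as the matching degree between two feature vectors.
    denom = np.linalg.norm(feat_a) * np.linalg.norm(feat_b)
    return float(feat_a @ feat_b / denom) if denom else 0.0
```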
In addition, the second alternative embodiment of performing texture matching processing on the combined image data may be performed based on the second implementation of determining texture matching policies for each image pair provided above.
In an alternative implementation manner provided in this embodiment, in the process of determining the candidate texture extraction manner of each image pair according to the number of color types of the object image, the following operations are performed:
if the number of the color types is larger than a number threshold, determining that the candidate texture extraction mode comprises a texture color extraction mode, a texture structure extraction mode and a texture element specification extraction mode;
and if the number of the color types is smaller than or equal to the number threshold, determining that the candidate texture extraction mode comprises the texture structure extraction mode and the texture element specification extraction mode.
In the optional implementation above, the texture extraction mode is screened from the candidate texture extraction modes based on the image specification of the transformed image; in that screening process, the following operation is performed:
And if the image specification of the transformed image is smaller than a specification threshold, eliminating the texture element specification extraction mode from the candidate texture extraction modes to obtain the texture extraction mode.
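Putting the two threshold rules above together gives a small selector like the one below; the threshold values are illustrative, not taken from the patent.

```python
def candidate_extraction_modes(num_color_types: int, count_threshold: int = 4) -> list[str]:
    if num_color_types > count_threshold:
        return ["texture-color", "texture-structure", "texel-specification"]
    return ["texture-structure", "texel-specification"]

def screen_modes(candidates: list[str], image_spec: int, spec_threshold: int = 64) -> list[str]:
    # A small transformed image cannot support texel-specification extraction.
    if image_spec < spec_threshold:
        return [m for m in candidates if m != "texel-specification"]
    return candidates
```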
(3) Implementation III
In practical applications the merged image data of each image pair may contain only channel data; since no coordinate data is included, texture matching has to fit the channel data itself. In a third optional implementation provided in this embodiment, the texture matching length of each image pair may be calculated from the image resolution of the article image; specifically, in determining the texture matching policy of each image pair based on the image parameters of the article image, the following operations may be performed:
determining an image resolution of the item image;
and calculating the texture matching length of each image pair according to the image resolution.
The texture matching length of each image pair comprises a matching length for performing texture matching in the combined image data.
In order to improve the efficiency of texture matching, in a third alternative implementation manner provided in this embodiment, in a process of performing texture matching processing on the merged image data according to the texture matching policy to obtain texture matching indexes of each image pair, the following operations are performed:
Extracting key merging channel data from the merging channel data according to the texture matching length;
and calculating the texture matching degree of each image pair according to the distribution of the texture elements in the key merging channel data.
The key merging channel data refer to key merging channel data corresponding to the texture matching length in the merging channel data; the texture element distribution includes the number of identical texture elements in the key merge channel data of each preset item image and the number of identical texture elements in the key merge channel data of the transformed image, such as the number of white texture elements, the number of red texture elements in the key merge channel data of each preset item image, and the number of white texture elements, the number of red texture elements in the key merge channel data of the transformed image.
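Under the reading that the merged channel data is a flat sequence whose first half comes from the preset image and second half from the transformed image, a sketch of the length truncation and distribution comparison follows; the data layout and the factor are assumptions.

```python
import numpy as np

def texture_matching_length(resolution_px: int, factor: float = 0.01) -> int:
    return max(1, int(resolution_px * factor))  # assumed positive correlation

def distribution_matching_degree(merged_channels: np.ndarray, length: int, bins: int = 8) -> float:
    """Extract the key merged channel data (first `length` elements of each half)
    and compare the texture element distributions of the two halves."""
    half = merged_channels.shape[0] // 2
    key_a = merged_channels[:half][:length]  # key data from the preset image
    key_b = merged_channels[half:][:length]  # key data from the transformed image
    hist_a, _ = np.histogram(key_a, bins=bins, range=(0, 255))
    hist_b, _ = np.histogram(key_b, bins=bins, range=(0, 255))
    return float(np.minimum(hist_a, hist_b).sum() / max(length, 1))
```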
In addition, an optional implementation of the third texture matching process for combined image data herein may be performed based on the implementation of the third texture matching policy for each image pair provided above.
Optionally, step S206 is executed by a texture matching model, which determines the texture matching policy of each image pair based on the image parameters of the article image and performs texture matching on the merged image data according to that policy to obtain the texture matching index of each image pair. Optionally, the input of the texture matching model is the merged image data of each image pair and its output is the detection result of the article to be detected; the texture matching model is a classification model.
In a fourth alternative implementation manner provided in the present embodiment, based on the fact that step S206 may be performed by the texture matching model, the texture matching model may perform texture matching processing on the combined image data in the following manner:
decompressing the first compressed data and the second compressed data in the combined image data to obtain image data of each preset object image and image data of the converted image;
extracting texture features of the image data of each preset object image and the image data of the transformation image according to the texture matching strategy to obtain respective texture features;
and calculating the feature matching degree of the respective texture features as the texture matching index.
In practice, to improve texture matching efficiency, the article image of the article to be detected may be acquired through an image acquisition page, with the transformed image and the merged image data of each image pair prepared in advance; on this basis, step S206 may be executed once a detection instruction submitted on the image acquisition page is detected.
And step S208, scoring the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation, and determining the detection result of the object to be detected based on the scoring result.
The texture matching policy of each image pair has been determined from the image parameters of the article image, and texture matching has been performed on the merged image data according to that policy to obtain the texture matching index of each image pair. In this step, the transformation coordinates used in the image coordinate transformation may differ in how much they matter to texture matching. To improve the accuracy and usability of the texture matching indexes, and in turn the accuracy of the detection result, the texture matching index of each image pair can therefore be scored according to the transformation coordinates corresponding to the image coordinate transformation, and the detection result of the article to be detected determined from the scoring result.
The detection results of the to-be-detected object include a quality detection result and/or a traceability detection result of the to-be-detected object, where the traceability detection result refers to a result of detecting factory information of the to-be-detected object; the quality detection results comprise detection passing results and/or detection failing results, and the quality detection results comprise results of quality detection of the object to be detected and also comprise results of authenticity detection of the object to be detected. Authenticity detection refers to detecting whether an item to be detected is authentic or counterfeit.
In a specific implementation, the texture matching index can be weighted by the coordinate weight of the transformation coordinate corresponding to the image coordinate transformation to obtain a weighted matching index for each image pair. If any image pair's weighted matching index exceeds the matching index threshold, the quality detection result or authenticity detection result of the article to be detected is determined to be a pass; if no image pair's weighted matching index exceeds the threshold, the result is determined to be a fail.
In the specific implementation process, in order to improve the effectiveness of the texture matching index of each image pair, index correction can be performed on the texture matching index of each image pair according to the transformation coordinates corresponding to the image coordinate transformation; in the process of scoring the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation, the following operations may be performed:
determining the coordinate weight of each transformation coordinate in a plurality of transformation coordinates corresponding to the image coordinate transformation;
and carrying out weighted calculation on the texture matching indexes of the image pairs according to the coordinate weights, and obtaining the weighted matching indexes of the image pairs as the scoring result.
The coordinate weight of each transformation coordinate is used for representing the importance degree of each transformation coordinate in the texture matching process.
Specifically, in the process of performing weighted calculation on the texture matching index of each image pair according to the coordinate weight to obtain the weighted matching index of each image pair, the product of the coordinate weight of each transformation coordinate in the plurality of transformation coordinates and the texture matching index of the corresponding image pair can be calculated as the weighted matching index of the image pair to be used as the scoring result of the image pair.
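A direct sketch of this weighting and thresholding step follows; the function names and the list-aligned representation of pairs, weights, and indexes are assumptions for illustration.

```python
def weighted_matching_indexes(match_indexes: list[float], coord_weights: list[float]) -> list[float]:
    # Scoring result: weight of each pair's transformation coordinate times its index.
    return [w * m for w, m in zip(coord_weights, match_indexes)]

def detection_passes(scores: list[float], threshold: float) -> bool:
    # Detection passes if any image pair's weighted matching index exceeds the threshold.
    return any(s > threshold for s in scores)
```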
In addition, in an optional implementation provided in this embodiment, when scoring the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation, the following operations may be performed on the basis of the first optional implementation of the texture matching processing of merged image data provided above:
determining the coordinate weight of each transformation coordinate in a plurality of transformation coordinates corresponding to the image coordinate transformation;
and carrying out weighted calculation on the matching degree of the texture elements of each image pair according to the coordinate weights, and obtaining the weighted matching degree of each image pair as the scoring result.
In determining the coordinate weight of each transformed coordinate, in an alternative implementation manner provided in this embodiment, the coordinate weight of each transformed coordinate is obtained by:
Acquiring target transformation coordinates corresponding to a target transformation image of the historical object image;
and calculating the coordinate number of each target transformation coordinate in the target transformation coordinates, and calculating the coordinate weight of each transformation coordinate according to the coordinate number.
A historical article image is an article image obtained previously; it may depict an article of the same category as the article to be detected in this embodiment, or of a different category.
Specifically, in calculating the coordinate weight of each transformed coordinate in terms of the number of coordinates, the number of coordinates of each transformed coordinate in the number of coordinates of each target transformed coordinate may be determined, and the coordinate weight of each transformed coordinate may be calculated from the number of coordinates of each transformed coordinate.
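One hedged reading of this weighting is frequency-based: each transformation coordinate's weight is proportional to how often it appeared among the historical target transformation coordinates. The normalization scheme below is an assumption.

```python
from collections import Counter

def coordinate_weights(history_coords: list[float], current_coords: list[float]) -> list[float]:
    """Weight each current transformation coordinate by how often it occurred among
    the target transformation coordinates of historical article images."""
    counts = Counter(history_coords)
    total = sum(counts[c] for c in current_coords) or 1  # avoid division by zero
    return [counts[c] / total for c in current_coords]
```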
Note that, the coordinate weights of the transformed coordinates may be obtained by random assignment.
In order to improve the detection accuracy and detection effectiveness of the object to be detected, after the texture matching indexes of each image pair are obtained, the object image of the object to be detected can be subjected to secondary coordinate transformation, and further texture matching is performed; in determining the detection result of the article to be detected based on the scoring result, the following operations may be performed:
Determining a target image pair in each image pair according to the weighted matching index of each image pair, and performing secondary coordinate sampling based on transformation coordinates corresponding to the target image pair to obtain secondary transformation coordinates;
and carrying out secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining a detection result of the object to be detected according to a texture feature comparison result of the secondary transformation image and a preset object image in the target image pair.
Wherein, the secondary transformation coordinates can be one or more, and the secondary transformation image can be one or more. The transformation coordinates corresponding to the target image pair refer to transformation coordinates corresponding to a transformation image contained in the target image pair.
Specifically, when determining a target image pair among the image pairs according to their weighted matching indexes and performing secondary coordinate sampling based on the transformation coordinates corresponding to the target image pair, the weighted matching indexes of the image pairs may be sorted; the image pair whose weighted matching index ranks before a preset position in the sorted result is taken as the target image pair, and the transformation coordinates of the transformed image in the target image pair are subjected to secondary coordinate sampling to obtain the secondary transformation coordinates, as in the sketch below.
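A minimal sketch of this selection-and-resampling step; the step size, the neighborhood width, and the treatment of coordinates as deflection angles modulo 360 are assumptions:

```python
from typing import Dict, List

def secondary_coordinates(
    weighted_indexes: Dict[int, float],  # transformation coordinate -> weighted matching index
    preset_position: int = 1,            # keep pairs ranked before this position
    step: int = 5,                       # sampling step around each kept coordinate (assumption)
    neighborhood: int = 3,               # samples on each side (assumption)
) -> List[int]:
    # Sort image pairs by weighted matching index, keep the top-ranked ones,
    # and sample secondary transformation coordinates around their coordinates.
    ranked = sorted(weighted_indexes, key=weighted_indexes.get, reverse=True)
    sampled = set()
    for coord in ranked[:preset_position]:
        for i in range(1, neighborhood + 1):
            sampled.add((coord - i * step) % 360)
            sampled.add((coord + i * step) % 360)
    return sorted(sampled)

print(secondary_coordinates({0: 0.25, 90: 0.26, 180: 0.07, 270: 0.07}))
```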
It should be noted that, when determining the detection result of the article to be detected based on the scoring result, the following operations may also be performed on the basis of the first optional implementation of performing texture matching processing on the merged image data provided above, or on the basis of the optional implementation of scoring the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation provided above:
determining a target image pair in each image pair according to the weighted matching degree of each image pair, and performing secondary coordinate sampling based on transformation coordinates corresponding to the target image pair to obtain secondary transformation coordinates;
and carrying out secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining a detection result of the object to be detected according to a texture feature comparison result of the secondary transformation image and a preset object image in the target image pair.
In practical applications, the scoring results of the image pairs may include results greater than a scoring threshold as well as results less than or equal to that threshold; in a second optional implementation provided in this embodiment, the detection result of the article to be detected is determined based on the scoring results through the following operations:
judging whether a scoring result greater than a scoring threshold exists among the scoring results of the image pairs;
if so, determining that the detection result of the article to be detected is that detection passes.
On this basis, in an optional implementation provided in this embodiment, if the judgment determines that no scoring result greater than the scoring threshold exists among the scoring results of the image pairs, the following operations are performed:
determining a target scoring result in the scoring results of the image pairs, and determining target transformation coordinates corresponding to the transformed images in the specific image pairs corresponding to the target scoring result;
determining an intermediate value for secondary coordinate sampling according to the difference between the target scoring result and the scoring threshold, and performing secondary coordinate sampling according to the intermediate value, the number of transformation coordinates of the image coordinate transformation, and the target transformation coordinates to obtain secondary transformation coordinates;
performing secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining texture similarity according to a texture comparison result of the secondary transformation image and a preset object image in the specific image pair;
If a texture similarity greater than the similarity threshold exists, determining that the detection result of the article to be detected is that detection passes; if not, determining that the detection result of the article to be detected is that detection fails.
Specifically, when determining the target scoring result among the scoring results of the image pairs, the scoring results may be sorted, and the scoring results ranking before a preset position in the sorted result taken as target scoring results. When determining the intermediate value for secondary coordinate sampling from the difference between the target scoring result and the scoring threshold, the intermediate value mapped to that difference may be looked up in a difference mapping table. When performing secondary coordinate sampling according to the intermediate value, the number of transformations of the image coordinate transformation, and the target transformation coordinates, the sampling number of the secondary coordinate sampling may be determined from the intermediate value and the number of transformations, and the target transformation coordinates then sampled according to that sampling number to obtain the secondary transformation coordinates; the sampling number may also be obtained by inputting the intermediate value and the number of transformations into a number calculation algorithm. For example, suppose the transformed images comprise B1, B2, B3 and B4, the transformed image in the image pair corresponding to the target scoring result is B3, and M denotes the intermediate value; then 2 x (M/K-1) angles, spaced at intervals of 360/M, are taken in the neighborhood of the target deflection angle corresponding to B3 as secondary deflection angles.
For example, the number calculation algorithm may be 2 x (M/K-1), where K denotes the number of transformations of the image coordinate transformation and M denotes the intermediate value; a minimal sketch of this sampling step follows.
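The sketch treats transformation coordinates as deflection angles; the symmetric half-on-each-side placement of the sampled angles is an assumption:

```python
from typing import List

def secondary_sample_count(m: int, k: int) -> int:
    # Sampling number from the example algorithm: 2 x (M / K - 1).
    return int(2 * (m / k - 1))

def secondary_deflection_angles(target_angle: float, m: int, k: int) -> List[float]:
    # Take 2 x (M/K - 1) angles in the neighborhood of the target deflection
    # angle, spaced at intervals of 360 / M, half on each side.
    n = secondary_sample_count(m, k)
    interval = 360 / m
    half = n // 2
    return [(target_angle + i * interval) % 360
            for i in range(-half, half + 1) if i != 0]

# Hypothetical: K = 4 transformed images (B1..B4), intermediate value M = 12,
# target pair is B3 whose deflection angle is 180 degrees.
print(secondary_deflection_angles(180.0, m=12, k=4))  # -> [120.0, 150.0, 210.0, 240.0]
```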
It should be noted that this second optional implementation of determining the detection result of the article to be detected may be performed on the basis of the second optional implementation of performing texture matching processing on the merged image data described above.
Optionally, the scoring of the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation and the determination of the detection result of the article to be detected based on the scoring result, namely step S208, are executed on the basis of a texture matching model; optionally, step S208 is executed after a detection instruction submitted based on the image acquisition page is detected.
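A minimal stand-in for such a texture matching model; the threshold rule and the pre-computed per-pair scores are assumptions standing in for a trained classification model over merged image data:

```python
from typing import Sequence

class ThresholdTextureModel:
    # A stand-in for the texture matching model: a binary classifier that, in
    # a real embodiment, would consume the merged image data of each image
    # pair; here pre-computed per-pair scores stand in for that input.
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def predict(self, pair_scores: Sequence[float]) -> bool:
        return max(pair_scores) > self.threshold

model = ThresholdTextureModel()
print("pass" if model.predict([0.62, 0.88, 0.35]) else "fail")
```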
It should be noted that the optional implementations provided above for steps S202 to S208 may each be executed separately, or combined and cross-referenced with one another; this is not specifically limited here.
In summary, in one of the image processing methods provided in this embodiment, image coordinate transformation is first performed on the article image of the article to be detected to obtain a transformed image, and a plurality of preset article images are acquired; data merging is performed on the image data of the image pair formed by each preset article image and the transformed image to obtain the merged image data of each pair. The image resolution of the article image is determined, and the texture matching area of each image pair is calculated from the resolution and a resolution-to-area conversion factor; the texture matching position and texture matching order of each pair are determined from the article shape of the article to be detected in the transformed image; the texture matching region of each pair is determined from the matching area and position, element comparison is performed in the texture matching order, and the texture element matching degree is calculated from the comparison result. Next, the coordinate weight of each transformation coordinate corresponding to the image coordinate transformation is determined, and the texture matching index of each image pair is weighted by these coordinate weights to obtain its weighted matching index. Finally, a target image pair is determined from the weighted matching indexes, secondary coordinate sampling is performed on the transformation coordinates corresponding to the target pair to obtain secondary transformation coordinates, the article image undergoes a secondary coordinate transformation accordingly, and the detection result of the article to be detected is determined from the texture feature comparison of the secondary transformed image against the preset article image in the target pair;
Or, image coordinate transformation is first performed on the article image of the article to be detected to obtain a transformed image, and a plurality of preset article images are acquired; data merging is performed on the image data of the image pairs formed by the preset article images and the transformed image to obtain the merged image data of each pair. Candidate texture extraction modes of each pair are determined from the number of color types of the article image, and the texture extraction mode is screened out of the candidates based on the image specification of the transformed image; according to that mode, the texture color features, texture structure features and/or texture element specification features of the transformed image and of each preset article image are extracted from the merged image data, and the feature matching degree between the transformed image and each preset article image is calculated from those features. Next, the texture matching index of each image pair is weighted by the coordinate weights of the transformation coordinates corresponding to the image coordinate transformation, and the weighted matching index of each pair is obtained as its scoring result. It is then judged whether any scoring result exceeds the scoring threshold; if so, the detection result of the article to be detected is that detection passes. If not, a target scoring result is determined among the scoring results, and the target transformation coordinates corresponding to the transformed image in the specific image pair for that result are determined; an intermediate value for secondary coordinate sampling is derived from the difference between the target scoring result and the scoring threshold, and secondary coordinate sampling is performed according to the intermediate value, the number of transformation coordinates of the image coordinate transformation, and the target transformation coordinates to obtain secondary transformation coordinates; the article image undergoes a secondary coordinate transformation accordingly, and texture similarity is determined from the texture comparison of the secondary transformed image against the preset article image in the specific pair. If a texture similarity greater than the similarity threshold exists, the detection result of the article to be detected is that detection passes; otherwise, it is that detection fails. Through these two stages of different texture matching modes, the flexibility of texture matching is improved, its accuracy and effectiveness are improved, and the accuracy of the detection result of the article to be detected is further improved.
The following describes an example of application of the image processing method provided in the present embodiment to a first image detection scene, and referring to fig. 4, the image processing method applied to the first image detection scene specifically includes the following steps.
Step S402, performing image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquiring a plurality of preset article images.
Step S404, carrying out data merging processing on the image data of the image pairs formed by the preset object images and the transformation images to obtain merged image data of the image pairs.
Step S406, determining the image resolution of the object image, and calculating the texture matching area of each image pair according to the image resolution and the conversion factor of the resolution and the area.
Step S408, determining the texture matching position and the texture matching order of each image pair based on the article shape of the article to be detected in the transformed image.
Step S410, determining texture matching areas of the image pairs according to the texture matching areas and the texture matching positions, comparing elements of the texture element sequences in the texture matching areas in the combined image data according to the texture matching sequence, and calculating the matching degree of the texture elements of the image pairs based on the element comparison results.
Step S412, the matching degree of the texture elements of each image pair is weighted according to the coordinate weight of the transformation coordinates corresponding to the image coordinate transformation, so as to obtain the weighted matching degree of each image pair.
In step S414, a target image pair is determined in each image pair according to the weighted matching degree of each image pair, and secondary coordinate sampling is performed based on the transformed coordinates corresponding to the target image pair, so as to obtain secondary transformed coordinates.
And S416, performing secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining a detection result of the object to be detected according to a texture feature comparison result of the secondary transformation image and a preset object image in the target image pair.
It should be noted that any step or combination of steps among steps S402 to S416 may be replaced by the corresponding technical means provided in steps S202 to S208 according to deployment requirements, and details are not repeated here. A compressed sketch of the texture matching core of this scene (steps S406 to S410) follows.
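The sketch rests on illustrative assumptions: the merged data is modeled as the two images stacked along the last axis, the matching region is square, and element equality stands in for the element comparison; every name is hypothetical.

```python
import numpy as np

def texture_element_matching_degree(merged: np.ndarray, area_factor: float,
                                    position: tuple) -> float:
    # Steps S406-S410 in miniature: derive a square matching region from the
    # image resolution and a resolution-to-area conversion factor, locate it
    # at the texture matching position, then compare the texture element
    # sequences inside it element by element in row-major order.
    h, w, _ = merged.shape
    side = max(1, int(np.sqrt(h * w * area_factor)))  # matching area -> side length
    y, x = position                                   # texture matching position
    region = merged[y:y + side, x:x + side, :]
    a = region[..., 0].ravel()                        # transformed-image elements
    b = region[..., 1].ravel()                        # preset-image elements
    return float(np.mean(a == b))                     # share of matching elements

rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(64, 64))
merged = np.stack([img, img.copy()], axis=-1)         # identical pair -> degree 1.0
print(texture_element_matching_degree(merged, area_factor=0.05, position=(8, 8)))
```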
The following further describes the image processing method provided in this embodiment by taking its application to a second image detection scene as an example; referring to fig. 5, the image processing method applied to the second image detection scene specifically includes the following steps.
Step S502, performing image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and obtaining a plurality of preset article images.
Step S504, extracting first channel data from the image data of each preset article image, and extracting second channel data from the image data of the transformed image.
Step S506, the first channel data and the second channel data are combined to obtain combined channel data of each image pair.
Step S508, calculating the texture matching length of each image pair according to the image resolution of the object image.
Step S510, extracting key merging channel data from the merging channel data according to the texture matching length, and calculating the texture matching degree of each image pair according to the distribution of the texture elements in the key merging channel data.
In step S512, the texture matching degree of each image pair is weighted according to the coordinate weight of the transformation coordinate corresponding to the image coordinate transformation, and the weighted matching degree of each image pair is obtained as the scoring result.
Step S514, judging whether a scoring result greater than a scoring threshold exists among the scoring results of the image pairs;
if so, determining that the quality detection result of the article to be detected is that detection passes;
if not, performing the following steps S516 to S522.
Step S516, determining a target scoring result in the scoring results of each image pair, and determining target transformation coordinates corresponding to the transformed images in the specific image pair corresponding to the target scoring result.
Step S518, determining an intermediate value of the secondary coordinate sampling according to the difference value between the target scoring result and the scoring threshold, and performing secondary coordinate sampling according to the intermediate value, the transformation coordinate number of the image coordinate transformation and the target transformation coordinate to obtain a secondary transformation coordinate.
And step S520, performing secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining the texture similarity according to the texture comparison result of the secondary transformation image and the preset object image in the specific image pair.
In step S522, if a texture similarity greater than the similarity threshold exists, determining that the authenticity detection result of the article to be detected is that detection passes.
It should be noted that any step or combination of steps among steps S502 to S522 may be replaced by the corresponding technical means provided in steps S202 to S208 according to deployment requirements, and details are not repeated here. A compressed sketch of the channel merging and matching core of this scene (steps S504 to S510) follows.
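The sketch rests on illustrative assumptions: the first channel of each image stands in for the extracted channel data, the texture matching length is a fixed fraction of the merged length, and histogram overlap stands in for the comparison of texture element distributions; every name is hypothetical.

```python
import numpy as np

def channel_merge_matching_degree(preset_rgb: np.ndarray,
                                  transformed_rgb: np.ndarray,
                                  length_factor: float = 0.25) -> float:
    # Steps S504-S510 in miniature: take one channel from each image, merge
    # them, cut key merged channel data by a texture matching length derived
    # from the resolution, and score the texture element distributions.
    first = preset_rgb[..., 0].ravel()        # first channel data, preset image
    second = transformed_rgb[..., 0].ravel()  # second channel data, transformed image
    merged = np.stack([first, second])
    match_len = max(1, int(merged.shape[1] * length_factor))
    key = merged[:, :match_len]               # key merged channel data
    ha = np.histogram(key[0], bins=16, range=(0, 256))[0] / match_len
    hb = np.histogram(key[1], bins=16, range=(0, 256))[0] / match_len
    return float(np.minimum(ha, hb).sum())    # histogram overlap in [0, 1]

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(channel_merge_matching_degree(a, a))    # identical images -> 1.0
```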
An embodiment of an image processing apparatus provided in the present specification is as follows:
in the above-described embodiments, an image processing method and an image processing apparatus corresponding thereto are provided, and the following description is made with reference to the accompanying drawings.
Referring to fig. 6, a schematic diagram of an embodiment of an image processing apparatus provided in this embodiment is shown.
Since the apparatus embodiments correspond to the method embodiments, their description is relatively simple; for relevant parts, refer to the corresponding descriptions of the method embodiments provided above. The apparatus embodiments described below are merely illustrative.
The present embodiment provides an image processing apparatus including:
an image transformation module 602 configured to perform image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquire a plurality of preset article images;
a data merging module 604, configured to perform data merging processing on the image data of the image pairs formed by each preset article image and the transformation image, so as to obtain merged image data of each image pair;
a texture matching module 606 configured to determine a texture matching policy of each image pair based on image parameters of the object image, and perform texture matching processing on the combined image data according to the texture matching policy, so as to obtain texture matching indexes of each image pair;
And the scoring processing module 608 is configured to score the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation, and determine the detection result of the object to be detected based on the scoring result.
An embodiment of an image processing device provided in the present specification is as follows:
In correspondence with the above-described image processing method, and based on the same technical concept, one or more embodiments of the present specification further provide an image processing device for performing the image processing method provided above. Fig. 7 is a schematic structural diagram of the image processing device provided by one or more embodiments of the present specification.
An image processing apparatus provided in this embodiment includes:
As shown in fig. 7, the image processing device may differ considerably with configuration or performance, and may include one or more processors 701 and a memory 702; one or more applications or data may be stored in the memory 702. The memory 702 may be transient or persistent storage. The applications stored in the memory 702 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the image processing device. Still further, the processor 701 may be arranged to communicate with the memory 702 and execute, on the image processing device, the series of computer-executable instructions in the memory 702. The image processing device may also include one or more power supplies 703, one or more wired or wireless network interfaces 704, one or more input/output interfaces 705, one or more keyboards 706, and the like.
In a specific embodiment, an image processing device includes a memory and one or more programs, wherein the one or more programs are stored in the memory; the one or more programs may include one or more modules, each module may include a series of computer-executable instructions for the image processing device, and the one or more programs are configured to be executed by one or more processors and comprise computer-executable instructions for:
carrying out image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and obtaining a plurality of preset article images;
carrying out data merging processing on image data of image pairs formed by each preset object image and the transformation image to obtain merged image data of each image pair;
determining a texture matching strategy of each image pair based on the image parameters of the object image, and performing texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair;
and scoring the texture matching index according to transformation coordinates corresponding to the image coordinate transformation, and determining a detection result of the object to be detected based on the scoring result.
An embodiment of a storage medium provided in the present specification is as follows:
in correspondence with the above-described image processing method, one or more embodiments of the present specification further provide a storage medium based on the same technical idea.
The storage medium provided in this embodiment is configured to store computer executable instructions that, when executed by a processor, implement the following flow:
carrying out image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and obtaining a plurality of preset article images;
carrying out data merging processing on image data of image pairs formed by each preset object image and the transformation image to obtain merged image data of each image pair;
determining a texture matching strategy of each image pair based on the image parameters of the object image, and performing texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair;
and scoring the texture matching index according to transformation coordinates corresponding to the image coordinate transformation, and determining a detection result of the object to be detected based on the scoring result.
It should be noted that, in the present specification, an embodiment of a storage medium and an embodiment of an image processing method in the present specification are based on the same inventive concept, so that a specific implementation of the embodiment may refer to an implementation of the foregoing corresponding method, and a repetition is omitted.
In this specification, the embodiments are described in a progressive manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus, device, and storage medium embodiments are all similar to the method embodiments, so their descriptions are relatively brief; when reading the apparatus, device, and storage medium embodiments, refer to the corresponding parts of the method embodiments for related content.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development: the source code to be compiled must be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can easily be obtained merely by slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing the controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for realizing various functions may also be regarded as structures within the hardware component. Or even, the means for realizing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each unit may be implemented in the same piece or pieces of software and/or hardware when implementing the embodiments of the present specification.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable image processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in computer-readable media, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is by way of example only and is not intended to limit the present disclosure. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present document are intended to be included within the scope of the claims of the present document.

Claims (24)

1. An image processing method, comprising:
carrying out image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and obtaining a plurality of preset article images;
carrying out data merging processing on image data of image pairs formed by each preset object image and the transformation image to obtain merged image data of each image pair;
determining a texture matching strategy of each image pair based on the image parameters of the object image, and performing texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair;
And scoring the texture matching index according to transformation coordinates corresponding to the image coordinate transformation, and determining a detection result of the object to be detected based on the scoring result.
2. The image processing method of claim 1, the determining a texture matching policy for the image pairs based on image parameters of the item image, comprising:
determining the image resolution of the object image, and calculating the texture matching area of each image pair according to the image resolution and the conversion factors of the resolution and the area;
and determining texture matching positions and texture matching sequences of the image pairs based on the object shape of the object to be detected in the transformation image.
3. The image processing method according to claim 2, wherein the performing texture matching processing on the combined image data according to the texture matching policy to obtain texture matching indexes of the image pairs includes:
determining texture matching areas of the image pairs according to the texture matching areas and the texture matching positions;
and carrying out element comparison on the texture element sequences in the texture matching areas in the merged image data according to the texture matching sequence, and calculating the texture element matching degree of each image pair based on element comparison results.
4. The image processing method according to claim 3, wherein the scoring the texture matching index according to the transformation coordinates corresponding to the image coordinate transformation comprises:
determining the coordinate weight of each transformation coordinate in a plurality of transformation coordinates corresponding to the image coordinate transformation;
and carrying out weighted calculation on the matching degree of the texture elements of each image pair according to the coordinate weights, and obtaining the weighted matching degree of each image pair as the scoring result.
5. The image processing method according to claim 4, wherein the determining the detection result of the object to be detected based on the scoring result includes:
determining a target image pair in each image pair according to the weighted matching degree of each image pair, and performing secondary coordinate sampling based on transformation coordinates corresponding to the target image pair to obtain secondary transformation coordinates;
and carrying out secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining a detection result of the object to be detected according to a texture feature comparison result of the secondary transformation image and a preset object image in the target image pair.
6. The image processing method according to claim 4, wherein the coordinate weight of each transformed coordinate is obtained by:
Acquiring target transformation coordinates corresponding to a target transformation image of the historical object image;
and calculating the coordinate number of each target transformation coordinate in the target transformation coordinates, and calculating the coordinate weight of each transformation coordinate according to the coordinate number.
7. The image processing method of claim 1, the determining a texture matching policy for the image pairs based on image parameters of the item image, comprising:
determining candidate texture extraction modes of each image pair according to the number of color types of the object images;
and screening out texture extraction modes from the candidate texture extraction modes based on the image specification of the transformation image.
8. The image processing method according to claim 7, wherein the performing texture matching processing on the combined image data according to the texture matching policy to obtain texture matching indexes of the image pairs, includes:
extracting respective texture color features, texture structure features and/or texture element specification features of the transformed image and each preset object image from the combined image data according to the texture extraction mode;
and calculating the feature matching degree of the transformation image and each preset object image according to the texture color features, the texture structure features and/or the texture element specification features.
9. The image processing method according to claim 8, the determining the detection result of the object to be detected based on the scoring result, comprising:
judging whether a scoring result greater than a scoring threshold exists among the scoring results of the image pairs;
if so, determining that the detection result of the article to be detected is that detection passes.
10. The image processing method according to claim 9, wherein if the judging operation determines that no scoring result greater than the scoring threshold exists among the scoring results of the image pairs, the following operations are performed:
determining a target scoring result in the scoring results of the image pairs, and determining target transformation coordinates corresponding to the transformed images in the specific image pairs corresponding to the target scoring result;
determining an intermediate value for secondary coordinate sampling according to the difference between the target scoring result and the scoring threshold, and performing secondary coordinate sampling according to the intermediate value, the number of transformation coordinates of the image coordinate transformation, and the target transformation coordinates to obtain secondary transformation coordinates;
performing secondary coordinate transformation on the object image according to the secondary transformation coordinates to obtain a secondary transformation image, and determining texture similarity according to a texture comparison result of the secondary transformation image and a preset object image in the specific image pair;
If a texture similarity greater than a similarity threshold exists, determining that the detection result of the article to be detected is that detection passes.
11. The image processing method according to claim 7, wherein the determining the candidate texture extraction method for each image pair according to the number of color categories of the object image comprises:
if the number of the color types is larger than a number threshold, determining that the candidate texture extraction mode comprises a texture color extraction mode, a texture structure extraction mode and a texture element specification extraction mode;
and if the number of the color types is smaller than or equal to the number threshold, determining that the candidate texture extraction mode comprises the texture structure extraction mode and the texture element specification extraction mode.
12. The image processing method according to claim 11, wherein the selecting a texture extraction mode among the candidate texture extraction modes based on the image specification of the transformed image, comprises:
and if the image specification of the transformed image is smaller than a specification threshold, eliminating the texture element specification extraction mode from the candidate texture extraction modes to obtain the texture extraction mode.
13. The image processing method according to claim 1, wherein the performing data merging processing on the image data of the image pairs formed by the preset article images and the transformed image to obtain merged image data of the image pairs, includes:
Extracting first channel data in the image data of each preset article image and extracting second channel data in the image data of the transformation image;
and carrying out channel data combination on the first channel data and the second channel data to obtain combined channel data of each image pair.
14. The image processing method of claim 13, the determining a texture matching policy for the image pairs based on image parameters of the item image, comprising:
determining an image resolution of the item image;
and calculating the texture matching length of each image pair according to the image resolution.
15. The image processing method according to claim 14, wherein the performing texture matching processing on the combined image data according to the texture matching policy to obtain texture matching indexes of the image pairs includes:
extracting key merging channel data from the merging channel data according to the texture matching length;
and calculating the texture matching degree of each image pair according to the distribution of the texture elements in the key merging channel data.
16. The image processing method according to claim 1, wherein the performing data merging processing on the image data of the image pairs formed by the preset article images and the transformed image to obtain merged image data of the image pairs, includes:
Compressing the image data of each preset article image to obtain first compressed data of each preset article image, and compressing the image data of the transformed image to obtain second compressed data of the transformed image;
and carrying out data combination on the first compressed data and the second compressed data to obtain the combined image data.
17. The image processing method according to claim 16, wherein the steps of determining a texture matching policy of each image pair based on image parameters of the article image, performing texture matching processing on the combined image data according to the texture matching policy to obtain texture matching indexes of each image pair, scoring the texture matching indexes according to transformation coordinates corresponding to the image coordinate transformation, and determining the detection result of the article to be detected based on the scoring result are executed on the basis of a texture matching model;
the texture matching model is input as combined image data of each image pair, and output as a detection result of the object to be detected; the texture matching model is a classification model.
18. The image processing method according to claim 17, wherein performing texture matching processing on the combined image data according to the texture matching policy to obtain texture matching indexes of the image pairs, includes:
Decompressing the first compressed data and the second compressed data in the combined image data to obtain the image data of each preset article image and the image data of the transformed image;
extracting texture features of the image data of each preset object image and the image data of the transformation image according to the texture matching strategy to obtain respective texture features;
and calculating the feature matching degree of the respective texture features as the texture matching index.
19. The image processing method according to claim 1, wherein before the step of performing image coordinate transformation on the article image of the article to be detected to obtain a transformed image and acquiring a plurality of preset article images is executed, the method further comprises:
determining the color type of the article image and/or the article shape of the article to be detected in the article image;
and determining the transformation coordinate number of the image coordinate transformation according to the color type and/or the object shape.
20. The image processing method according to claim 1, wherein the performing data merging processing on the image data of the image pairs formed by the preset article images and the transformed image to obtain merged image data of the image pairs, includes:
Identifying position coordinates of the object to be detected in the transformed image;
according to the image specification and the position coordinates of each preset article image, carrying out data boundary adjustment on the image data of each preset article image and the image data of the transformation image to obtain adjustment data of each preset article image and adjustment data of the transformation image;
and carrying out data combination on the adjustment data of each preset article image and the adjustment data of the transformation image to obtain the combined image data.
21. The image processing method according to claim 1, wherein the article image of the article to be detected is acquired based on an image acquisition page;
the steps of performing image coordinate transformation on the article image of the article to be detected to obtain a transformed image and acquiring a plurality of preset article images, and of performing data merging processing on the image data of the image pairs formed by each preset article image and the transformed image to obtain combined image data of each image pair, are executed after an acquisition instruction submitted via the image acquisition page is detected;
the steps of determining a texture matching policy of each image pair based on the image parameters of the article image, performing texture matching processing on the combined image data according to the texture matching policy to obtain texture matching indexes of each image pair, scoring the texture matching indexes according to transformation coordinates corresponding to the image coordinate transformation, and determining the detection result of the article to be detected based on the scoring result, are executed after a detection instruction submitted based on the image acquisition page is detected.
22. An image processing apparatus comprising:
the image transformation module is configured to perform image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and acquire a plurality of preset article images;
the data merging module is configured to perform data merging processing on the image data of the image pairs formed by the preset object images and the transformation images to obtain merged image data of the image pairs;
the texture matching module is configured to determine a texture matching strategy of each image pair based on the image parameters of the object image, and perform texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair;
and the scoring processing module is configured to score the texture matching index according to transformation coordinates corresponding to the image coordinate transformation and determine the detection result of the object to be detected based on the scoring result.
23. An image processing apparatus comprising:
a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to:
carrying out image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and obtaining a plurality of preset article images;
Carrying out data merging processing on image data of image pairs formed by each preset object image and the transformation image to obtain merged image data of each image pair;
determining a texture matching strategy of each image pair based on the image parameters of the object image, and performing texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair;
and scoring the texture matching index according to transformation coordinates corresponding to the image coordinate transformation, and determining a detection result of the object to be detected based on the scoring result.
24. A storage medium storing computer-executable instructions that when executed by a processor implement the following:
carrying out image coordinate transformation on an article image of an article to be detected to obtain a transformed image, and obtaining a plurality of preset article images;
carrying out data merging processing on image data of image pairs formed by each preset object image and the transformation image to obtain merged image data of each image pair;
determining a texture matching strategy of each image pair based on the image parameters of the object image, and performing texture matching processing on the combined image data according to the texture matching strategy to obtain texture matching indexes of each image pair;
And scoring the texture matching index according to transformation coordinates corresponding to the image coordinate transformation, and determining a detection result of the object to be detected based on the scoring result.
CN202410189331.8A 2024-02-20 2024-02-20 Image processing method and device Pending CN117853754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410189331.8A CN117853754A (en) 2024-02-20 2024-02-20 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410189331.8A CN117853754A (en) 2024-02-20 2024-02-20 Image processing method and device

Publications (1)

Publication Number Publication Date
CN117853754A true CN117853754A (en) 2024-04-09

Family

ID=90546557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410189331.8A Pending CN117853754A (en) 2024-02-20 2024-02-20 Image processing method and device

Country Status (1)

Country Link
CN (1) CN117853754A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000027131A2 (en) * 1998-10-30 2000-05-11 C3D Limited Improved methods and apparatus for 3-d imaging
US8200020B1 (en) * 2011-11-28 2012-06-12 Google Inc. Robust image alignment using block sums
CN110097054A (en) * 2019-04-29 2019-08-06 济南浪潮高新科技投资发展有限公司 A kind of text image method for correcting error based on image projection transformation
US20220230443A1 (en) * 2021-01-19 2022-07-21 Micromax International Corp. Method and system for detecting and analyzing objects
CN114820470A (en) * 2022-04-07 2022-07-29 苏州科技大学 Plate defect detection system and detection method based on multi-feature fusion
CN114821614A (en) * 2022-04-07 2022-07-29 平安科技(深圳)有限公司 Image recognition method and device, electronic equipment and computer readable storage medium
CN115908774A (en) * 2023-01-10 2023-04-04 佰聆数据股份有限公司 Quality detection method and device of deformed material based on machine vision
CN116824339A (en) * 2023-07-13 2023-09-29 支付宝(杭州)信息技术有限公司 Image processing method and device


Similar Documents

Publication Publication Date Title
CN111476306A (en) Object detection method, device, equipment and storage medium based on artificial intelligence
CN114638980B (en) Vegetable variety identification processing method and device
CN116824339A (en) Image processing method and device
CN106033613B (en) Method for tracking target and device
CN116994007B (en) Commodity texture detection processing method and device
CN115712866A (en) Data processing method, device and equipment
CN117078962B (en) Data chaining method and device based on texture acquisition
CN116958135B (en) Texture detection processing method and device
CN117612269A (en) Biological attack detection method, device and equipment
CN115187307B (en) Advertisement putting processing method and device for virtual world
CN108536769B (en) Image analysis method, search method and device, computer device and storage medium
CN117853754A (en) Image processing method and device
CN116503357A (en) Image processing method and device
EP3940590A1 (en) Methods, apparatuses, devices, and systems for testing biometric recognition device
CN118568276A (en) Index-based virtual image data processing method and device
CN110309859A (en) A kind of image true-false detection method, device and electronic equipment
CN114638613A (en) Dish settlement processing method and device based on identity recognition
JP6175904B2 (en) Verification target extraction system, verification target extraction method, verification target extraction program
CN117156040A (en) Equipment screen adjusting method and device
CN116994248B (en) Texture detection processing method and device
CN111275445B (en) Data processing method, device and equipment
CN115905913B (en) Method and device for detecting digital collection
CN117975045A (en) Texture recognition processing method and device based on model
CN117975044A (en) Image processing method and device based on feature space
CN118038087A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination