CN111046911A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111046911A
Authority
CN
China
Prior art keywords: image, brightness, similarity, characteristic, feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911106001.3A
Other languages
Chinese (zh)
Inventor
朱兴杰 (Zhu Xingjie)
刘岩 (Liu Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd
Priority to CN201911106001.3A
Publication of CN111046911A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour

Abstract

The invention discloses an image processing method and device, relating to the field of computer technology. One embodiment of the method comprises: converting a target image into a brightness image, and calculating brightness co-occurrence matrices of the brightness image in multiple directions; splicing the brightness co-occurrence matrices in the multiple directions into a feature image, and performing feature extraction on the feature image to obtain a first feature vector; acquiring a second feature vector corresponding to at least one reference image in a reference image set, and respectively calculating the similarity between the first feature vector and each second feature vector; and determining, according to the similarity, whether a reference image identical or similar to the target image exists in the reference image set, so as to identify an abnormal target image. By calculating brightness co-occurrence matrices of the brightness image in multiple directions and then measuring image similarity based on these matrices, the method achieves automatic comparison of image similarity and improves the accuracy of image retrieval.

Description

Image processing method and device
Technical Field
The present invention relates to the field of computers, and in particular, to an image processing method and apparatus.
Background
With the popularization of terminal devices equipped with cameras, such as tablet computers and smartphones, the number of services transacted on the basis of digital image data has increased dramatically. In the insurance field, digital images contain a large amount of useful information, such as identity documents, medical bills, and medical scenes; extracting the required information from the digital images makes it convenient to handle the corresponding business.
However, identical or similar images may exist among the digital images, and such images are likely to be repeated reimbursement requests for the same case, so identical or similar images need to be identified. In the prior art, staff are usually required to check manually whether data images are identical or similar, so as to prevent a user from being reimbursed for the same case multiple times.
In the process of implementing the invention, the inventors found that the prior art has at least the following problem:
manually checking whether data images are identical or similar is slow, unreliable, and costly, and cannot meet actual business requirements.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method and apparatus, in which luminance co-occurrence matrices of a luminance image in multiple directions are calculated and similarity measurement is then performed on images based on these matrices, so that automatic comparison of image similarity is achieved and the accuracy of image retrieval is improved.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided an image processing method.
An image processing method according to an embodiment of the present invention includes: converting a target image into a brightness image, and calculating brightness co-occurrence matrices of the brightness image in multiple directions; splicing the brightness co-occurrence matrices in the multiple directions into a feature image, and performing feature extraction on the feature image to obtain a first feature vector; acquiring a second feature vector corresponding to at least one reference image in a reference image set, and respectively calculating the similarity between the first feature vector and each second feature vector; and determining, according to the similarity, whether a reference image identical or similar to the target image exists in the reference image set, so as to identify an abnormal target image.
Optionally, converting the target image into a luminance image comprises: extracting RGB values of a plurality of pixels in the target image to respectively determine the maximum value and the minimum value of the plurality of pixels in an R channel, a G channel and a B channel; and respectively calculating the sum value of the maximum value and the minimum value of the plurality of pixels, and taking half of the sum value as the brightness value of the target image in the HSL color space to obtain the brightness image of the target image.
Optionally, calculating a luminance co-occurrence matrix of one direction of the luminance image includes: selecting a first sampling point from the brightness image and a second sampling point deviating from the first sampling point; moving the first sampling point on the brightness image according to a set direction and a set step length to obtain various brightness combinations; wherein the luminance combination includes luminance values of the first sample point and luminance values of the second sample point; and respectively counting the occurrence times of the plurality of brightness combinations, and arranging the occurrence times into a square matrix according to a set brightness level so as to obtain a brightness co-occurrence matrix in the corresponding direction.
Optionally, performing feature extraction on the feature image to obtain a first feature vector includes: constructing a deep self-coding network, where the deep self-coding network comprises an encoding sub-network, a decoding sub-network, and a loss function, and the encoding sub-network and the decoding sub-network both comprise corresponding network parameters; training the deep self-coding network with a set sample set to determine the network parameters that minimize the loss function; and inputting the feature image into the trained deep self-coding network to output the first feature vector.
Optionally, the stitching the luminance co-occurrence matrices in the multiple directions into a feature image includes: and sequentially splicing the corresponding brightness co-occurrence matrixes into a square matrix according to the angle size sequence of the directions, wherein the square matrix is a characteristic image.
Optionally, obtaining a luminance image of the target image comprises: taking an image formed by the brightness values of the pixels as an initial image, and performing filtering processing on the initial image to obtain a standard image; and carrying out normalization processing on the standard image to obtain a brightness image of the target image.
Optionally, determining whether a reference image identical or similar to the target image exists in the reference image set according to the similarity includes: if the similarity between the first feature vector and the current second feature vector is greater than or equal to a set threshold, a reference image which is the same as or similar to the target image exists in the reference image set; and if the similarity of the first feature vector and all the second feature vectors is smaller than the threshold value, a reference image which is the same as or similar to the target image does not exist in the reference image set.
To achieve the above object, according to another aspect of the embodiments of the present invention, there is provided an image processing apparatus.
An image processing apparatus according to an embodiment of the present invention includes: the matrix calculation module is used for converting a target image into a brightness image and calculating a brightness co-occurrence matrix of the brightness image in multiple directions; the characteristic extraction module is used for splicing the brightness co-occurrence matrixes in the multiple directions into a characteristic image and extracting the characteristics of the characteristic image to obtain a first characteristic vector; the similarity calculation module is used for acquiring a second feature vector corresponding to at least one reference image in a reference image set and calculating the similarity between the first feature vector and the second feature vector respectively; and the image identification module is used for determining whether a reference image which is the same as or similar to the target image exists in the reference image set according to the similarity so as to identify an abnormal target image.
Optionally, the matrix calculation module is further configured to: extracting RGB values of a plurality of pixels in the target image to respectively determine the maximum value and the minimum value of the plurality of pixels in an R channel, a G channel and a B channel; and respectively calculating the sum value of the maximum value and the minimum value of the plurality of pixels, and taking half of the sum value as the brightness value of the target image in the HSL color space to obtain the brightness image of the target image.
Optionally, the matrix calculation module is further configured to: selecting a first sampling point from the brightness image and a second sampling point deviating from the first sampling point; moving the first sampling point on the brightness image according to a set direction and a set step length to obtain various brightness combinations; wherein the luminance combination includes luminance values of the first sample point and luminance values of the second sample point; and respectively counting the occurrence times of the plurality of brightness combinations, and arranging the occurrence times into a square matrix according to a set brightness level so as to obtain a brightness co-occurrence matrix in the corresponding direction.
Optionally, the feature extraction module is further configured to: constructing a deep self-coding network, where the deep self-coding network comprises an encoding sub-network, a decoding sub-network, and a loss function, and the encoding sub-network and the decoding sub-network both comprise corresponding network parameters; training the deep self-coding network with a set sample set to determine the network parameters that minimize the loss function; and inputting the feature image into the trained deep self-coding network to output the first feature vector.
Optionally, the feature extraction module is further configured to: and sequentially splicing the corresponding brightness co-occurrence matrixes into a square matrix according to the angle size sequence of the directions, wherein the square matrix is a characteristic image.
Optionally, the matrix calculation module is further configured to: taking an image formed by the brightness values of the pixels as an initial image, and performing filtering processing on the initial image to obtain a standard image; and carrying out normalization processing on the standard image to obtain a brightness image of the target image.
Optionally, the image recognition module is further configured to: if the similarity between the first feature vector and the current second feature vector is greater than or equal to a set threshold, a reference image which is the same as or similar to the target image exists in the reference image set; and if the similarity of the first feature vector and all the second feature vectors is smaller than the threshold value, a reference image which is the same as or similar to the target image does not exist in the reference image set.
To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided an electronic apparatus.
An electronic device of an embodiment of the present invention includes: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement an image processing method of an embodiment of the present invention.
To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided a computer-readable medium.
A computer-readable medium of an embodiment of the present invention has stored thereon a computer program that, when executed by a processor, implements an image processing method of an embodiment of the present invention.
One embodiment of the above invention has the following advantages or benefits: by calculating brightness co-occurrence matrices of the brightness image in multiple directions and then measuring image similarity based on these matrices, automatic comparison of image similarity is achieved and the accuracy of image retrieval is improved; converting the target image from the RGB color space to the HSL color space separates its brightness information from its chrominance information, so the brightness information can be processed on its own, reducing processing time and improving real-time performance; the brightness co-occurrence matrix reflects the distribution of brightness and the relative positions of pixels with identical or similar brightness, serving as the basis for subsequent feature extraction and improving the accuracy of the similarity measurement; feature extraction based on a deep self-coding network outputs feature vectors accurately, further improving the accuracy of the similarity measurement; splicing the brightness co-occurrence matrices in multiple directions into a feature image makes the brightness statistics comprehensive and the feature description of the image accurate, so the final similarity measurement result is more accurate; using the filtered and normalized image as the brightness image reduces image noise while lowering computational complexity, further reducing processing time; and comparing the similarity with a threshold further improves the accuracy of the similarity measurement.
Further effects of the above optional features will be described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic diagram of main steps of an image processing method according to a first embodiment of the present invention;
FIG. 2 is a schematic main flow chart of an image processing method according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of feature images generated by stitching according to a second embodiment of the present invention;
FIG. 4 is a schematic main flow chart of an image processing method according to a third embodiment of the present invention;
fig. 5 is a schematic diagram of main blocks of an image processing apparatus according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 7 is a schematic diagram of a computer system suitable for implementing an electronic device of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of main steps of an image processing method according to a first embodiment of the present invention. As shown in fig. 1, the image processing method according to the first embodiment of the present invention mainly includes the following steps:
step S101: and converting the target image into a brightness image, and calculating a brightness co-occurrence matrix of the brightness image in multiple directions. The target image is generally an RGB color image, and the RGB color image is converted into an HSL color space from an RGB color space to obtain a brightness image; and then, counting the condition that two pixels with a certain distance in the brightness image respectively have a certain brightness value to obtain brightness co-occurrence matrixes in multiple directions. RGB represents colors of three channels of Red (Red), Green (Green), and Blue (Blue), and HSL represents Hue (Hue), Saturation (Saturation), and brightness (Lightness).
Step S102: splicing the brightness co-occurrence matrices in the multiple directions into a feature image, and performing feature extraction on the feature image to obtain a first feature vector. The brightness co-occurrence matrices in the multiple directions are spliced into a feature image according to a set splicing rule; then the texture features corresponding to the brightness co-occurrence matrix in each direction, such as energy, entropy, contrast, and inverse difference moment, are extracted from the feature image; and finally the texture features calculated for each direction are combined into a comprehensive vector, which is the first feature vector. The splicing rule can be user-defined; for example, the corresponding brightness co-occurrence matrices may be spliced by rows in order of the angles of the directions, or spliced by columns in the same order.
Step S103: acquiring a second feature vector corresponding to at least one reference image in the reference image set, and respectively calculating the similarity between the first feature vector and each second feature vector. The current reference image in the reference image set is converted into a brightness image according to the processing procedure of step S101, and brightness co-occurrence matrices of that brightness image are calculated in the same directions as in step S101; then, according to the same splicing rule as in step S102, the brightness co-occurrence matrices are spliced into a feature image, whose texture features in each direction are extracted and combined into the second feature vector. In an embodiment, the second feature vectors corresponding to a plurality of reference images in the reference image set are obtained in this manner, and the similarities between the first feature vector and the plurality of second feature vectors can then be calculated respectively.
Step S104: determining, according to the similarity, whether a reference image identical or similar to the target image exists in the reference image set, so as to identify an abnormal target image. Each similarity calculated in step S103 is compared with a set threshold. If the similarity between some second feature vector and the first feature vector is greater than or equal to the threshold, the reference image corresponding to that second feature vector is identical or similar to the target image, and the target image is an abnormal image; if the similarities between all second feature vectors and the first feature vector are smaller than the threshold, no identical or similar reference image exists in the reference image set, and the target image is a normal image.
Fig. 2 is a schematic main flow chart of an image processing method according to a second embodiment of the invention. As shown in fig. 2, the image processing method according to the second embodiment of the present invention mainly includes the following steps:
step S201: and preprocessing the input target image to obtain a brightness image. And converting the target image from the RGB color space to the HSL color space to obtain an initial image. Wherein, the conversion formula of the brightness space is as follows:
L(s, t) = [max(R, G, B) + min(R, G, B)] / 2

where L(s, t) represents the initial image; max(R, G, B) represents the maximum of R, G, and B in the RGB color space; and min(R, G, B) represents the minimum of R, G, and B in the RGB color space.
In the embodiment, the initial image can be used directly as the luminance image for subsequent processing. However, the initial image contains noise; to reduce image noise, the initial image L(s, t) may be filtered to obtain a standard image, and the standard image is then used as the luminance image for subsequent processing. The filtering can be implemented by, for example, median filtering, maximum filtering, minimum filtering, or mean filtering.
In the embodiment, the original image is filtered by adopting a sliding mean filtering mode. The mean filtering is to calculate the pixel mean value of the window area, then assign the mean value to the pixel at the center point of the window, and the standard image obtained after filtering can be represented by the following formula:
F(x, y) = (1 / (m × n)) × Σ_{(s, t) ∈ Sxy} L(s, t)

where F(x, y) represents the standard image; Sxy is a filter window of size m × n centered at the point (x, y); and L(s, t) denotes the initial image.
In a preferred embodiment, in order to reduce the complexity of the calculation and reduce the processing time of the algorithm, the normalization processing may be performed on the standard image, and the normalized image is taken as the luminance image of this embodiment. The luminance image can be represented by the following formula:
Q(x, y) = INT(F(x, y) × Vn / Vm)

where Q(x, y) represents the luminance image; F(x, y) represents the standard image; INT(·) denotes rounding down; Vm is the maximum luminance value of F(x, y); and Vn is the normalized maximum brightness value.
Step S202: calculating luminance co-occurrence matrices of the luminance image in multiple directions. A luminance co-occurrence matrix is constructed as follows: select an arbitrary first sample point (x, y) in the luminance image Q(x, y) and a second sample point (x + a, y + b) offset from it; the first sample point (x, y) and the second sample point (x + a, y + b) form a sample-point pair whose luminance values are denoted (g1, g2). By moving the first sample point (x, y) over the luminance image Q(x, y), various (g1, g2) combinations are obtained; assuming k luminance levels, there are k² possible (g1, g2) combinations in total. For the luminance image Q(x, y), the number of occurrences of each (g1, g2) combination is counted, and the counts are arranged into a square matrix to obtain a k × k luminance co-occurrence matrix.
When a = 1 and b = 0, the sample-point pair is horizontal, i.e., a 0° scan; when a = 1 and b = 1, the pair is right-diagonal, i.e., a 45° scan; when a = 0 and b = 1, the pair is vertical, i.e., a 90° scan; and when a = -1 and b = -1, the pair is left-diagonal, i.e., a 135° scan. The values of (a, b) can be selected according to the periodicity of the texture distribution; for finer textures, small values such as (1, 0), (1, 1), or (2, 0) can be chosen.
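A direct (unoptimized) sketch of this construction, assuming the luminance image has already been quantized to k levels as above:

```python
import numpy as np

def cooccurrence(q, a, b, levels=64):
    """Count luminance pairs (g1, g2) at offset (a, b) into a k x k matrix.

    q: 2-D integer array of quantized luminances in [0, levels).
    (a, b) selects the scan direction, e.g. (1, 0) for a 0-degree scan.
    """
    h, w = q.shape
    mat = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            s, t = x + a, y + b                # second sample point (x+a, y+b)
            if 0 <= s < w and 0 <= t < h:      # keep the pair inside the image
                mat[q[y, x], q[t, s]] += 1     # count occurrence of (g1, g2)
    return mat
```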
Step S203: splicing the luminance co-occurrence matrices in multiple directions into a feature image, and performing feature extraction on the feature image to obtain a first feature vector. The directions for which luminance co-occurrence matrices are generated are determined by actual requirements; here the directions are 0°, 45°, 90°, and 135°, and the luminance co-occurrence matrices in these four directions are stitched to obtain the feature image.
Fig. 3 is a schematic diagram of the feature image generated by stitching in the second embodiment of the present invention. As shown in fig. 3, the luminance co-occurrence matrices corresponding to the 0°, 45°, 90°, and 135° directions are H0(k, k), H45(k, k), H90(k, k), and H135(k, k), respectively; the four luminance co-occurrence matrices are stitched in order of angle into one square matrix, and this square matrix is the feature image. It should be noted that this stitching manner is only illustrative; in a specific implementation, the luminance co-occurrence matrices may be stitched according to a stitching rule set as required to obtain the feature image.
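Under the layout of fig. 3 (angles in increasing order, arranged row-major; this layout is an assumption about the figure), stitching reduces to a 2 × 2 block matrix. The sketch below reuses the `cooccurrence` helper from the previous example:

```python
import numpy as np

def feature_image(q, levels=64):
    """Stitch the 0, 45, 90 and 135 degree matrices into a 2k x 2k image."""
    offsets = [(1, 0), (1, 1), (0, 1), (-1, -1)]   # 0, 45, 90, 135 degrees
    h0, h45, h90, h135 = (cooccurrence(q, a, b, levels) for a, b in offsets)
    return np.block([[h0, h45], [h90, h135]])      # row-major, increasing angle
```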
After the feature image is stitched, feature extraction is performed on it to obtain the first feature vector. In this embodiment, features are extracted using a deep self-coding network, implemented as follows: construct the deep self-coding network; train it with a set sample set to determine the network parameters that minimize the loss function; and input the feature image into the trained deep self-coding network to output the first feature vector.
The deep self-coding network comprises an encoding sub-network, a decoding sub-network, and a loss function, where the encoding sub-network and the decoding sub-network both comprise corresponding network parameters (namely a weight matrix and a bias vector); the input of the encoding sub-network is the feature image, and its output is the input of the decoding sub-network; the output of the decoding sub-network is the output of the entire deep self-coding network. The loss function is set according to the minimum-error principle.
In an embodiment, the features of the feature image may be extracted automatically using a sparse autoencoder. The sparse autoencoder network structure includes an input layer, a hidden layer, and an output layer. The activations of the input layer (e.g., an image) are represented by the activations of the hidden layer, and the hidden-layer information is then restored at the output layer; the information in the hidden layer is therefore a compressed, lower-entropy representation of the input layer.
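The disclosure fixes no network architecture; as one hedged example, a small PyTorch autoencoder could look like the sketch below, where the layer sizes are illustrative assumptions and an L1 penalty on the code is used as a simple stand-in for the sparsity constraint (KL-divergence penalties are also common):

```python
import torch
from torch import nn

class SparseAutoencoder(nn.Module):
    """Minimal deep self-coding network: encoder plus decoder.

    Layer sizes are illustrative assumptions, not fixed by the disclosure.
    """
    def __init__(self, in_dim, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))

    def forward(self, x):
        code = self.encoder(x)             # compressed hidden representation
        return code, self.decoder(code)    # code plus reconstruction

def train_step(model, batch, optimizer, sparsity_weight=1e-3):
    """One step: reconstruction error plus an L1 sparsity penalty on the code."""
    code, recon = model(batch)
    loss = nn.functional.mse_loss(recon, batch) + sparsity_weight * code.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, the flattened feature image is passed through the encoder to obtain the first feature vector.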
Step S204: acquiring the second feature vector corresponding to a known reference image, and calculating the similarity between the first feature vector and the second feature vector. The second feature vector corresponding to each reference image is pre-calculated and stored in a database. The second feature vector is calculated as follows: the reference image is converted into a luminance image according to the processing procedure of step S201; then luminance co-occurrence matrices of that luminance image are calculated in the same directions as in step S202; and then, according to the same stitching rule as in step S203, the luminance co-occurrence matrices are stitched into a feature image, whose texture features in each direction are extracted and combined into the second feature vector.
There are various ways to calculate the similarity between two feature vectors, such as the cosine distance, the Euclidean distance, and the Pearson correlation coefficient. The following takes the cosine distance between the two feature vectors as the similarity of the two images as an example.
cos θ = Σ_{i=1}^{n} (Pi × Qi) / ( √(Σ_{i=1}^{n} Pi²) × √(Σ_{i=1}^{n} Qi²) )

where cos θ is the cosine of the angle between the first feature vector and the second feature vector; Qi is the ith element of the second feature vector; Pi is the ith element of the first feature vector; and n is the number of elements of the first (or second) feature vector.
Since the cosine value lies in the range [-1, +1], in an alternative embodiment the value is generally normalized to [0, 1] when calculating the similarity of the two vectors. The normalization can be realized as follows:

Dis_p = 0.5 + 0.5 × cos θ  (Equation 5)

where Dis_p represents the similarity between the target image and the reference image.
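As a sketch, the cosine similarity and its normalization to Dis_p take only a few lines:

```python
import numpy as np

def similarity(p, q):
    """Dis_p = 0.5 + 0.5 * cos(theta) for feature vectors p and q."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    cos_theta = p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
    return 0.5 + 0.5 * cos_theta            # mapped from [-1, 1] into [0, 1]
```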
Step S205: judging whether the similarity is greater than or equal to a set threshold; if so, execute step S206, and if it is below the threshold, execute step S207. The similarity Dis_p is compared with the threshold T to judge whether the target image is similar to the reference image.
Step S206: outputting the recognition result that the target image is similar to the reference image. If Dis_p ≥ T, the target image is similar to the reference image; the two similar images are output, together with the percentage corresponding to the similarity. For example, if the similarity is 0.9, 90% is output.
Step S207: outputting the recognition result that the target image is not similar to the reference image. If Dis_p < T, the target image is not similar to the reference image, and a prompt that the two images are not similar is output.
The second embodiment describes how to determine the similarity between two images, which can be used for image similarity comparison and similar-image retrieval. In fields such as insurance claim settlement, clients submit claim data images, and the insurance company must check these images before settling the claim to avoid paying the same claim case twice, which would cause it financial loss. The third embodiment targets this application scenario: the similarity of claim data images uploaded by clients is compared in real time, so that risks can be investigated and avoided promptly without staff involvement, greatly saving labor costs and company operating costs. Details are described below.
Fig. 4 is a schematic main flow chart of an image processing method according to a third embodiment of the present invention. As shown in fig. 4, the image processing method according to the third embodiment of the present invention mainly includes the following steps:
step S401: and preprocessing the input claim data image to obtain a brightness image. The claim data image may be, for example, a medical invoice image, a charge list image, a statement image, a discharge image, or the like. The specific implementation process of this step is the same as step S201, and is not described here again.
Step S402: and calculating a brightness co-occurrence matrix of the brightness image in multiple directions. The specific implementation process of this step is the same as step S202, and is not described here again.
Step S403: and splicing the brightness co-occurrence matrixes in multiple directions into a characteristic image, and performing characteristic extraction on the characteristic image to obtain a first characteristic vector. The specific implementation process of this step is the same as step S203, and is not described here again.
Step S404: and acquiring a second feature vector corresponding to the current reference image in the reference image set, and calculating the similarity between the first feature vector and the second feature vector. The reference image set can be the image set of the claims data submitted by the client, or the image set of the claims data submitted by all clients purchasing insurance from insurance companies. And selecting one image from the reference image set as a current reference image, and acquiring a second feature vector corresponding to the current reference image from the database. The calculation of the second feature vector is as described above. And calculating the similarity of the first characteristic vector and the second characteristic vector by adopting a cosine distance mode.
Step S405: judging whether the similarity is greater than or equal to the set threshold; if so, execute step S406, and if it is below the threshold, execute step S407. The similarity Dis_p is compared with the threshold T to judge whether the claim data image is similar to the current reference image.
Step S406: outputting a prompt that the claim data image is an abnormal image, and ending the process. If Dis_p ≥ T, the claim data image is similar to the current reference image, i.e., an image similar to it already exists in the reference image set; the claim data image has a high probability of being a repeated submission and carries a risk of double claim payment, so the two similar images can be output together with a prompt that the claim data image is abnormal.
Step S407: judging whether a next reference image exists in the reference image set; if so, execute step S408, and if not, execute step S409. If Dis_p < T, the claim data image is not similar to the current reference image, and comparison continues with the next reference image until all reference images in the reference image set have been compared.
Step S408: step S404 is performed with the next reference image as the current reference image.
Step S409: and updating the claim data image to the reference image set, outputting prompt information that the claim data image is a normal image, and ending the process. If the claim data image is not similar to each reference image in the reference image set, it indicates that an image similar to the claim data image does not exist in the reference image set, and the probability that the claim data image is submitted for the first time is higher, so that the claim data image can be updated to the reference image set, and the similarity comparison of a new claim data image is facilitated subsequently. In addition, the prompt message that the claim data image is a normal image can be output.
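A compact sketch of this flow, reusing the `similarity` helper from above; the 0.9 threshold is an illustrative assumption, since the disclosure leaves T unspecified:

```python
def check_claim_image(first_vec, reference_vecs, threshold=0.9):
    """Flow of fig. 4: compare a new claim-image vector against the set.

    Returns (is_abnormal, index of the matching reference or None).
    """
    for i, ref_vec in enumerate(reference_vecs):
        if similarity(first_vec, ref_vec) >= threshold:
            return True, i                 # similar image already on file
    reference_vecs.append(first_vec)       # first submission: update the set
    return False, None
```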
According to the image processing method of the embodiment of the present invention, luminance co-occurrence matrices of the luminance image in multiple directions are calculated and image similarity is then measured based on these matrices, so automatic comparison of image similarity is achieved and the accuracy of image retrieval is improved; converting the target image from the RGB color space to the HSL color space separates its luminance information from its chrominance information, so the luminance information can be processed on its own, reducing processing time and improving real-time performance; and the luminance co-occurrence matrix reflects the distribution of luminance and the relative positions of pixels with identical or similar luminance, serving as the basis for subsequent feature extraction and improving the accuracy of the similarity measurement.
The image processing method of the embodiment of the present invention extracts features based on a deep self-coding network, so feature vectors can be output accurately, further improving the accuracy of the similarity measurement; stitching the luminance co-occurrence matrices in multiple directions into a feature image makes the luminance statistics comprehensive and the feature description of the image accurate, so the final similarity measurement result is more accurate; using the filtered and normalized image as the luminance image reduces image noise while lowering computational complexity, further reducing processing time; and comparing the similarity with a threshold further improves the accuracy of the similarity measurement.
Fig. 5 is a schematic diagram of main blocks of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the image processing apparatus 500 according to the embodiment of the present invention mainly includes:
a matrix calculation module 501, configured to convert the target image into a luminance image, and calculate a luminance co-occurrence matrix of the luminance image in multiple directions. The target image is generally an RGB color image, and the RGB color image is converted into an HSL color space from an RGB color space to obtain a brightness image; and then, counting the condition that two pixels with a certain distance in the brightness image respectively have a certain brightness value to obtain brightness co-occurrence matrixes in multiple directions.
The feature extraction module 502 is configured to splice the luminance co-occurrence matrices in the multiple directions into a feature image, and perform feature extraction on the feature image to obtain a first feature vector. Splicing the brightness co-occurrence matrixes in multiple directions into a characteristic image according to a set splicing rule; then extracting texture features corresponding to the brightness co-occurrence matrix in each direction in the feature image, such as energy, entropy, contrast, inverse difference moment and the like; and finally, combining the texture features calculated in each direction into a comprehensive vector, wherein the comprehensive vector is the first feature vector.
The splicing rule can be user-defined; for example, the corresponding brightness co-occurrence matrices may be spliced by rows in order of the angles of the directions, or, for another example, spliced by columns in the same order.
The similarity calculation module 503 is configured to obtain a second feature vector corresponding to at least one reference image in the reference image set, and calculate similarities between the first feature vector and the second feature vector respectively. Converting the current reference image in the reference image set into a luminance image according to the processing process of the matrix calculation module 501, and calculating a luminance co-occurrence matrix of the luminance image in the same direction as the matrix calculation module 501; then, according to the same stitching rule as the feature extraction module 502, the luminance co-occurrence matrices are stitched into a feature image, and the texture features of the feature image in each direction are extracted to combine into a second feature vector. In an embodiment, the second feature vectors corresponding to a plurality of reference images in the reference image set are obtained in the above manner, and then, the similarities between the first feature vector and the plurality of second feature vectors can be respectively calculated.
An image recognition module 504, configured to determine whether a reference image that is the same as or similar to the target image exists in the reference image set according to the similarity, so as to recognize an abnormal target image. Comparing each calculated similarity with a set threshold, and if the similarity between a certain second feature vector and the first feature vector is greater than or equal to the threshold, determining that the reference image corresponding to the second feature vector is the same as or similar to the target image, wherein the target image is an abnormal image; if the similarity of all the second feature vectors and the first feature vectors is smaller than the threshold value, the fact that the same or similar reference images do not exist in the reference image set is indicated, and the target image is a normal image.
In addition, the image processing apparatus 500 according to the embodiment of the present invention may further include: and an updating module (not shown in fig. 5) for updating the target image to the reference image set when there is no reference image in the reference image set that is the same as or similar to the target image.
From the above description, it can be seen that by calculating luminance co-occurrence matrices of the luminance image in multiple directions and then measuring image similarity based on these matrices, automatic comparison of image similarity is achieved and the accuracy of image retrieval is improved; converting the target image from the RGB color space to the HSL color space separates its luminance information from its chrominance information, so the luminance information can be processed on its own, reducing processing time and improving real-time performance; and the luminance co-occurrence matrix reflects the distribution of luminance and the relative positions of pixels with identical or similar luminance, serving as the basis for subsequent feature extraction and improving the accuracy of the similarity measurement.
Fig. 6 shows an exemplary system architecture 600 of an image processing method or an image processing apparatus to which an embodiment of the present invention can be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 serves to provide a medium for communication links between the terminal devices 601, 602, 603 and the server 605. Network 604 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. Various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, and the like, may be installed on the terminal devices 601, 602, and 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 605 may be a server that provides various services, such as a background management server that an administrator performs processing using target images transmitted by the terminal apparatuses 601, 602, 603. The background management server can convert the target image into a brightness image, calculate a brightness co-occurrence matrix, perform feature extraction, similarity calculation and other processing, and feed back a processing result (for example, the target image is an abnormal image) to the terminal device.
It should be noted that the image processing method provided in the embodiment of the present application is generally executed by the server 605, and accordingly, the image processing apparatus is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The invention also provides an electronic device and a computer readable medium according to the embodiment of the invention.
The electronic device of the present invention includes: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement an image processing method of an embodiment of the present invention.
The computer-readable medium of the present invention has stored thereon a computer program which, when executed by a processor, implements an image processing method of an embodiment of the present invention.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use with an electronic device implementing an embodiment of the present invention. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the computer system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, the processes described above with respect to the main step diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method illustrated in the main step diagram. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a matrix calculation module, a feature extraction module, a similarity calculation module, and an image recognition module. The names of these modules do not constitute a limitation to the module itself in some cases, and for example, the matrix calculation module may also be described as a module that converts a target image into a luminance image and calculates a luminance co-occurrence matrix for a plurality of directions of the luminance image.
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: converting a target image into a brightness image, and calculating a brightness co-occurrence matrix of the brightness image in multiple directions; splicing the brightness co-occurrence matrixes in the multiple directions into a characteristic image, and performing characteristic extraction on the characteristic image to obtain a first characteristic vector; acquiring a second feature vector corresponding to at least one reference image in a reference image set, and respectively calculating the similarity of the first feature vector and the second feature vector; and determining whether a reference image which is the same as or similar to the target image exists in the reference image set according to the similarity so as to identify an abnormal target image.
From the above description, it can be seen that by calculating luminance co-occurrence matrices of the luminance image in multiple directions and then measuring image similarity based on these matrices, automatic comparison of image similarity is achieved and the accuracy of image retrieval is improved; converting the target image from the RGB color space to the HSL color space separates its luminance information from its chrominance information, so the luminance information can be processed on its own, reducing processing time and improving real-time performance; and the luminance co-occurrence matrix reflects the distribution of luminance and the relative positions of pixels with identical or similar luminance, serving as the basis for subsequent feature extraction and improving the accuracy of the similarity measurement.
The product can execute the method provided by the embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image processing method, comprising:
converting a target image into a brightness image, and calculating a brightness co-occurrence matrix of the brightness image in multiple directions;
splicing the brightness co-occurrence matrixes in the multiple directions into a characteristic image, and performing feature extraction on the characteristic image to obtain a first feature vector;
acquiring a second feature vector corresponding to at least one reference image in a reference image set, and respectively calculating the similarity of the first feature vector to each second feature vector;
and determining whether a reference image which is the same as or similar to the target image exists in the reference image set according to the similarity so as to identify an abnormal target image.
2. The method of claim 1, wherein converting the target image into a brightness image comprises:
extracting the RGB values of a plurality of pixels in the target image, and determining, for each of the plurality of pixels, the maximum value and the minimum value among its R-channel, G-channel and B-channel values;
and calculating, for each of the plurality of pixels, the sum of that maximum value and minimum value, and taking half of the sum as the pixel's brightness value in the HSL color space, thereby obtaining a brightness image of the target image.
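As a minimal illustration of this conversion (not part of the claims), the following numpy sketch assumes an 8-bit RGB array; the hue and saturation components of the HSL space are simply never computed:

    import numpy as np

    def to_hsl_lightness(rgb):
        # Per-pixel maximum and minimum over the R, G and B channels.
        cmax = rgb.max(axis=2).astype(np.float64)
        cmin = rgb.min(axis=2).astype(np.float64)
        # The brightness value is half the sum of maximum and minimum.
        return (cmax + cmin) / 2.0

    img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    print(to_hsl_lightness(img))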
3. The method of claim 1, wherein calculating the brightness co-occurrence matrix of the brightness image for one direction comprises:
selecting a first sampling point in the brightness image and a second sampling point offset from the first sampling point;
moving the first sampling point across the brightness image in a set direction and with a set step length to obtain a plurality of brightness combinations, wherein each brightness combination comprises the brightness value of the first sampling point and the brightness value of the second sampling point;
and counting the number of occurrences of each of the plurality of brightness combinations, and arranging the counts into a square matrix according to a set number of brightness levels, thereby obtaining the brightness co-occurrence matrix for the corresponding direction.
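Purely as an illustration of this counting (not part of the claims), the sketch below assumes 16 brightness levels and a unit step in the 0-degree direction, so the second sampling point is one pixel to the right of the first:

    import numpy as np

    def brightness_cooccurrence(brightness, levels=16):
        # Quantize the brightness image to the set number of brightness levels.
        q = np.clip((brightness / 256.0 * levels).astype(int), 0, levels - 1)
        m = np.zeros((levels, levels), dtype=np.int64)
        # First sampling point: every pixel with a valid right neighbour;
        # second sampling point: the pixel one step to its right.
        first = q[:, :-1].ravel()
        second = q[:, 1:].ravel()
        # Count each (first, second) brightness combination and arrange the
        # counts as a levels x levels square matrix.
        np.add.at(m, (first, second), 1)
        return m

    brightness = np.random.rand(64, 64) * 255
    print(brightness_cooccurrence(brightness))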
4. The method of claim 1, wherein performing feature extraction on the characteristic image to obtain a first feature vector comprises:
constructing a deep self-coding network, wherein the deep self-coding network comprises an encoding sub-network, a decoding sub-network and a loss function, and the encoding sub-network and the decoding sub-network each have corresponding network parameters;
training the deep self-coding network with a set sample set to determine the network parameters that minimize the loss function;
and inputting the characteristic image into the trained deep self-coding network to output the first feature vector.
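As an illustrative sketch of such a network (not part of the claims; PyTorch, the layer sizes, the 32 x 32 characteristic image, and the mean-squared reconstruction loss are all assumptions), one training step and the extraction of the first feature vector might look like:

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, n_in=32 * 32, n_code=64):
            super().__init__()
            # Encoding sub-network: characteristic image -> feature vector.
            self.encoder = nn.Sequential(
                nn.Linear(n_in, 256), nn.ReLU(), nn.Linear(256, n_code))
            # Decoding sub-network: feature vector -> reconstruction.
            self.decoder = nn.Sequential(
                nn.Linear(n_code, 256), nn.ReLU(), nn.Linear(256, n_in))

        def forward(self, x):
            code = self.encoder(x)
            return code, self.decoder(code)

    model = AutoEncoder()
    loss_fn = nn.MSELoss()  # reconstruction loss to be minimized
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a batch of flattened characteristic images.
    batch = torch.rand(8, 32 * 32)
    code, recon = model(batch)
    loss = loss_fn(recon, batch)
    opt.zero_grad(); loss.backward(); opt.step()

    # After training, the encoder output is the first feature vector.
    first_vector, _ = model(torch.rand(1, 32 * 32))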
5. The method according to claim 1, wherein splicing the brightness co-occurrence matrices of the multiple directions into a characteristic image comprises:
splicing the corresponding brightness co-occurrence matrices into a square matrix in order of increasing direction angle, the square matrix being the characteristic image.
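Illustratively (not part of the claims), with four directions taken in increasing angle order and 16 x 16 matrices, a 2 x 2 block arrangement yields the square characteristic image:

    import numpy as np

    # One co-occurrence matrix per direction, in increasing angle order
    # (0, 45, 90, 135 degrees); 16 x 16 is an assumed number of levels.
    m0, m45, m90, m135 = (np.random.rand(16, 16) for _ in range(4))

    # Splice into one square matrix: the characteristic image.
    characteristic_image = np.block([[m0, m45],
                                     [m90, m135]])
    print(characteristic_image.shape)  # (32, 32)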
6. The method of claim 2, wherein obtaining a brightness image of the target image comprises:
taking the image formed by the brightness values of the pixels as an initial image, and filtering the initial image to obtain a standard image;
and normalizing the standard image to obtain the brightness image of the target image.
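The claim does not fix the type of filter. The sketch below (not part of the claims) assumes a Gaussian filter from scipy and a min-max normalization to the range [0, 1]:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def to_brightness_image(initial_image):
        # Filtering step: a Gaussian filter is one possible choice.
        standard_image = gaussian_filter(initial_image, sigma=1.0)
        # Normalization step: rescale the standard image to [0, 1].
        lo, hi = standard_image.min(), standard_image.max()
        return (standard_image - lo) / (hi - lo + 1e-12)

    initial = np.random.rand(64, 64) * 255  # per-pixel brightness values
    brightness = to_brightness_image(initial)
    print(brightness.min(), brightness.max())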
7. The method according to any one of claims 1 to 6, wherein determining, according to the similarity, whether a reference image that is the same as or similar to the target image exists in the reference image set comprises:
if the similarity between the first feature vector and a current second feature vector is greater than or equal to a set threshold, determining that a reference image that is the same as or similar to the target image exists in the reference image set;
and if the similarities between the first feature vector and all of the second feature vectors are smaller than the threshold, determining that no reference image that is the same as or similar to the target image exists in the reference image set.
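As an illustration of this decision rule (not part of the claims; cosine similarity and the 0.9 threshold are assumptions, since the claim leaves the measure and the threshold value open):

    import numpy as np

    def find_match(first, seconds, threshold=0.9):
        # Compare the first feature vector against each second feature vector.
        for i, second in enumerate(seconds):
            sim = first @ second / (np.linalg.norm(first) * np.linalg.norm(second))
            if sim >= threshold:
                return i   # a same-or-similar reference image exists
        return None        # all similarities below the threshold: no match

    first = np.random.rand(64)
    seconds = [np.random.rand(64) for _ in range(5)]
    print(find_match(first, seconds))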
8. An image processing apparatus characterized by comprising:
the matrix calculation module is used for converting a target image into a brightness image and calculating a brightness co-occurrence matrix of the brightness image in multiple directions;
the characteristic extraction module is used for splicing the brightness co-occurrence matrixes in the multiple directions into a characteristic image and extracting the characteristics of the characteristic image to obtain a first characteristic vector;
the similarity calculation module is used for acquiring a second feature vector corresponding to at least one reference image in a reference image set, and respectively calculating the similarity of the first feature vector to each second feature vector;
and the image identification module is used for determining whether a reference image which is the same as or similar to the target image exists in the reference image set according to the similarity so as to identify an abnormal target image.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201911106001.3A 2019-11-13 2019-11-13 Image processing method and device Pending CN111046911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911106001.3A CN111046911A (en) 2019-11-13 2019-11-13 Image processing method and device

Publications (1)

Publication Number Publication Date
CN111046911A (en) 2020-04-21

Family

ID=70232679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911106001.3A Pending CN111046911A (en) 2019-11-13 2019-11-13 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111046911A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007088814A (en) * 2005-09-22 2007-04-05 Casio Comput Co Ltd Imaging apparatus, image recorder and imaging control program
CN101876993A (en) * 2009-11-26 2010-11-03 中国气象科学研究院 Method for extracting and retrieving textural features from ground digital nephograms
US20140029855A1 (en) * 2012-07-26 2014-01-30 Sony Corporation Image processing apparatus, image processing method, and program
EP3255586A1 (en) * 2016-06-06 2017-12-13 Fujitsu Limited Method, program, and apparatus for comparing data graphs
CN109784357A (en) * 2018-11-19 2019-05-21 西安理工大学 A kind of image based on statistical model retakes detection method
CN110070140A (en) * 2019-04-28 2019-07-30 清华大学 Method and device is determined based on user's similitude of multi-class information
CN110177108A (en) * 2019-06-02 2019-08-27 四川虹微技术有限公司 A kind of anomaly detection method, device and verifying system
CN110222775A (en) * 2019-06-10 2019-09-10 北京字节跳动网络技术有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN110298344A (en) * 2019-07-04 2019-10-01 河海大学常州校区 A kind of positioning of instrument knob and detection method based on machine vision

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630639A (en) * 2023-07-20 2023-08-22 深圳须弥云图空间科技有限公司 Object image identification method and device
CN116630639B (en) * 2023-07-20 2023-12-12 深圳须弥云图空间科技有限公司 Object image identification method and device

Similar Documents

Publication Publication Date Title
CN102246165B (en) Method and apparatus for representing and identifying feature descriptors utilizing a compressed histogram of gradients
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN111355941B (en) Image color real-time correction method, device and system
CN109871845B (en) Certificate image extraction method and terminal equipment
CN113378911B (en) Image classification model training method, image classification method and related device
CN110599554A (en) Method and device for identifying face skin color, storage medium and electronic device
CN111259915A (en) Method, device, equipment and medium for recognizing copied image
CN111383254A (en) Depth information acquisition method and system and terminal equipment
Jha et al. l2‐norm‐based prior for haze‐removal from single image
CN111028186B (en) Image enhancement method and device
US10902590B2 (en) Recognizing pathological images captured by alternate image capturing devices
CN111046911A (en) Image processing method and device
CN111179276A (en) Image processing method and device
CN113392241A (en) Method, device, medium and electronic equipment for identifying definition of well logging image
CN110895699B (en) Method and apparatus for processing feature points of image
CN116485645A (en) Image stitching method, device, equipment and storage medium
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN113628256A (en) Data processing method and device
CN114283416A (en) Processing method and device for vehicle insurance claim settlement pictures
CN111797922A (en) Text image classification method and device
CN112016348A (en) Face authenticity identification method and device
CN116912631B (en) Target identification method, device, electronic equipment and storage medium
CN114896439A (en) Image duplicate removal method and device for equipment safety detection
CN114708625A (en) Face recognition method and device
CN112241714B (en) Method and device for identifying designated area in image, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination