CN112215302A - Image identification method and device and terminal equipment - Google Patents

Image identification method and device and terminal equipment

Info

Publication number
CN112215302A
Authority
CN
China
Prior art keywords
hash value
image
preset
processed
similarity
Prior art date
Legal status
Pending
Application number
CN202011192599.5A
Other languages
Chinese (zh)
Inventor
夏成明
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011192599.5A
Publication of CN112215302A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image identification method, which comprises the following steps: calculating a first hash value of an image to be processed; comparing the first hash value with at least two preset hash values stored in advance to obtain the hash value similarity of the first hash value for each preset hash value, wherein each preset hash value corresponds to one preset identifier, and the preset identifiers corresponding to different preset hash values are different; and determining an identifier corresponding to the image to be processed from the preset identifiers according to the hash value similarities, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed. The method can solve the problems in the prior art that, when the category of an image is identified by means of a convolutional neural network model or the like, the calculation process is complex and the performance loss of the terminal device during operation is large.

Description

Image identification method and device and terminal equipment
Technical Field
The present application belongs to the technical field of image identification, and in particular, to an image identification method, an image identification device, a terminal device, and a computer-readable storage medium.
Background
In many applications, such as games, the categories of images such as user avatars often need to be identified so as to provide personalized functionality, for example personalized recommendations or prompts, for the identified image categories.
At present, a convolutional neural network model is often used to identify the category of an image. However, a convolutional neural network usually has a complex structure and consumes a large amount of terminal device resources, so the calculation process is complex and the performance loss of the terminal device during operation is large.
Disclosure of Invention
The embodiments of the present application provide an image identification method, an image identification device, a terminal device and a computer-readable storage medium, which can solve the problems in the prior art that, when the category of an image is identified by means of a convolutional neural network model or the like, the calculation process is complex and the performance loss of the terminal device during operation is large.
In a first aspect, an embodiment of the present application provides an image identification method, including:
calculating a first hash value of an image to be processed;
comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value respectively aiming at each preset hash value, wherein each preset hash value corresponds to one preset identifier, and the preset identifiers corresponding to different preset hash values are different;
and determining an identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed.
In a second aspect, an embodiment of the present application provides an apparatus for identifying an image, including:
the calculation module is used for calculating a first hash value of the image to be processed;
the comparison module is used for comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value aiming at each preset hash value respectively, wherein each preset hash value corresponds to one preset identifier, and the preset identifiers corresponding to different preset hash values are different;
and the determining module is used for determining the identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, a display, and a computer program stored in the memory and executable on the processor, where the processor implements the image identification method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the image identification method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method for identifying an image in the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: in the embodiment of the application, a first hash value of an image to be processed is calculated, so that the image to be processed can be represented by the first hash value; then the first hash value is compared with at least two preset hash values stored in advance to obtain the hash value similarity of the first hash value for each preset hash value, wherein each preset hash value corresponds to a preset identifier, and the preset identifiers corresponding to different preset hash values are different; and the identifier corresponding to the image to be processed is determined from the preset identifiers according to the hash value similarities, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed. At this time, a preset hash value similar to the first hash value is determined from the preset hash values according to the hash value similarities, so that the identifier corresponding to the image to be processed is determined from the preset identifiers according to the preset identifier corresponding to that similar preset hash value, and the category of the image to be processed is thereby determined. Long feature extraction and data processing with a complex algorithm such as a convolutional neural network model are not needed, the calculation amount is small, and the performance loss of the terminal device during operation is also small.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic flowchart of an image identification method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another image identification method according to an embodiment of the present application;
fig. 3 is an exemplary schematic diagram of processing the second image to obtain an image to be processed according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image identification device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The image identification method provided by the embodiment of the application can be applied to terminal devices such as a server, a desktop computer, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook and a Personal Digital Assistant (PDA), and the embodiment of the application does not limit the specific type of the terminal device.
Specifically, fig. 1 shows a flowchart of an image identification method provided in an embodiment of the present application, where the image identification method may be applied to a terminal device.
As shown in fig. 1, the image identification method may include:
step S101, calculating a first hash value of the image to be processed.
In the embodiment of the application, the image to be processed may be acquired in a preset manner, and there may be various acquisition ways. For example, the image to be processed may be captured by a camera in the terminal device of the embodiment of the present application; the image to be processed may also be captured by a camera communicatively connected to the terminal device and transmitted to the terminal device; alternatively, the image to be processed may be a local image stored in the terminal device in advance, or may be obtained by the terminal device performing resizing, graying processing, or the like on a specified image. The specific acquisition way of the image to be processed is not limited herein.
In some examples, the image to be processed may be a grayscale image. In this case, graying masks differences in image style, for example the influence of filters, cool/warm tones, and the like on the image, so that substantial differences between image contents can be recognized more reliably.
The specific content of the image to be processed may vary.
For example, in a specific application scenario, the image to be processed may be a user account head portrait or an image containing a specific object selected by a user. For example, in an online game, a user may select, through a display interface, a designated object such as a specific character or a cartoon character as an avatar for subsequent game operation; the designated object selected by the user may serve as the user's avatar, or may be displayed in a designated area of the game interface. At this time, the image to be processed may be an image containing the designated object.
In this embodiment of the application, the first hash value may be calculated by using a hash algorithm such as perceptual hash (also referred to as "pHash"), average hash (also referred to as "aHash") or difference hash (also referred to as "dHash"). In this way, the image to be processed can be represented by the first hash value with a small calculation amount and without a complex operation process, so that the resource consumption of the terminal device is reduced.
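As an illustrative, non-limiting example, the first hash value calculation may be sketched in Python roughly as follows. The sketch assumes the Pillow library and uses the difference hash (dHash) variant; the 9 × 8 downscaled size, the function name dhash_bits and the file name are hypothetical choices rather than requirements of the embodiment.

    from PIL import Image

    def dhash_bits(img, hash_width=9, hash_height=8):
        # Downscale to a small grayscale image and compare horizontally adjacent
        # pixels; each comparison contributes one character ("1" or "0") to the hash.
        gray = img.convert("L").resize((hash_width, hash_height), Image.LANCZOS)
        pixels = list(gray.getdata())
        bits = []
        for row in range(hash_height):
            for col in range(hash_width - 1):
                left = pixels[row * hash_width + col]
                right = pixels[row * hash_width + col + 1]
                bits.append("1" if left > right else "0")
        return "".join(bits)  # (hash_width - 1) * hash_height characters, e.g. 64

    first_hash = dhash_bits(Image.open("image_to_be_processed.png"))  # hypothetical path

A perceptual hash or an average hash could be substituted here without changing the later comparison steps.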
Step S102, comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value for each preset hash value respectively, wherein each preset hash value corresponds to one preset identifier, and the preset identifiers corresponding to different preset hash values are different.
In the embodiment of the application, at least two preset hash values may be stored in advance in the form of a list or the like. Each preset hash value may correspond to a preset identifier, and each preset hash value may be obtained according to the pixel values of the designated image corresponding to that preset identifier; in this case, the preset hash value correspondingly represents the internal image features of the designated image. The preset identifier may be an ID of the designated image, such as a name or a number.
For example, any preset hash value in the preset list and its corresponding preset identifier may be stored in the form [preset identifier: preset hash value].
For example, in an application scenario, a user may select a specific object such as a specific character, a cartoon character, etc. as an avatar in a subsequent game operation through a display interface, and at this time, the following contents may be stored in the preset list:
136:110011011001100110010101001111010010010110011001001001001110010
The number 136 may be the number of the avatar and serve as the preset identifier of the avatar, and the subsequent bit sequence may be the preset hash value of the avatar.
Therefore, in the embodiment of the application, each designated image can be collected in advance, and the preset hash values corresponding to each designated image are calculated, so that the corresponding designated images are represented one by one through each preset hash value.
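For illustration, the preset list described above could be held as a simple mapping from preset identifier to preset hash value; the container type and the single entry (taken from the example above) are assumptions, and an actual list would hold one entry per designated image.

    # Hypothetical in-memory form of the preset list: preset identifier -> preset hash value.
    preset_hashes = {
        136: "110011011001100110010101001111010010010110011001001001001110010",
        # ... one entry per designated image (e.g. per selectable avatar)
    }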
By comparing the first hash value with the at least two preset hash values stored in advance, a preset hash value similar to the first hash value can be determined from the preset hash values, that is, a designated image similar to the image to be processed can be determined.
In the embodiment of the present application, there may be a plurality of ways to compare the first hash value with a preset hash value. For example, the proportion of identical characters between the first hash value and the preset hash value may be compared, and so on.
Step S103, determining an identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed.
In the embodiment of the present application, after a preset hash value that is similar to the first hash value is determined from the preset hash values according to the hash value similarities, the identifier corresponding to the image to be processed can be determined from the preset identifiers according to the preset identifier corresponding to that similar preset hash value. Long feature extraction and data processing with a complex algorithm such as a convolutional neural network model are not needed, so the calculation amount is small and the performance loss of the terminal device during operation is also small.
In some embodiments, the comparing the first hash value with at least two pre-stored preset hash values respectively to obtain the hash value similarity of the first hash value with respect to each of the pre-stored preset hash values respectively includes:
calculating a first Hamming distance between the first hash value and each preset hash value;
and determining the hash value similarity between the first hash value and the preset hash value according to a first Hamming distance between the first hash value and the preset hash value.
The first Hamming distance between the two character strings is the number of different characters at the corresponding positions of the two character strings. In this embodiment, the character string refers to the first hash value and the preset hash value. Therefore, through the first hamming distance, the proportion of the same character part between the first hash value and the preset hash value can be determined, so as to determine the hash value similarity between the first hash value and the preset hash value.
It should be noted that the hash value similarity may take various specific forms.
In one example, a first hamming distance between the first hash value and the preset hash value may be taken as a hash value similarity between the first hash value and the preset hash value. At this time, the smaller the value of the hash value similarity is, the more similar the first hash value and the preset hash value are correspondingly.
In addition, the hash value similarity may also be obtained by subtracting the first Hamming distance from the number of characters of the first hash value and dividing the difference by the number of characters. In this case, the greater the value of the hash value similarity, the more similar the first hash value and the preset hash value.
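The first Hamming distance and the two similarity forms discussed above can be sketched as follows; the helper names are hypothetical, and both hash values are assumed to be character strings of equal length, as produced by the earlier sketch.

    def hamming_distance(hash_a, hash_b):
        # Number of positions at which the corresponding characters differ.
        assert len(hash_a) == len(hash_b)
        return sum(1 for a, b in zip(hash_a, hash_b) if a != b)

    def similarity_as_distance(hash_a, hash_b):
        # First form: the distance itself; a smaller value means more similar.
        return hamming_distance(hash_a, hash_b)

    def similarity_as_ratio(hash_a, hash_b):
        # Second form: (character count - Hamming distance) / character count;
        # a greater value means more similar.
        n = len(hash_a)
        return (n - hamming_distance(hash_a, hash_b)) / n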
In some embodiments, the determining the hash value similarity between the first hash value and the preset hash value according to the first hamming distance between the first hash value and the preset hash value includes:
taking a first Hamming distance between the first hash value and the preset hash value as a hash value similarity between the first hash value and the preset hash value;
determining the identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, including:
and if the minimum value in the hash value similarities is smaller than a preset similarity threshold, taking a preset identifier of a preset hash value corresponding to the minimum value in the hash value similarities as the identifier corresponding to the image to be processed.
In this embodiment, the first Hamming distance between the first hash value and the preset hash value is used as the hash value similarity between them. In this case, the smaller the value of the hash value similarity, the more similar the first hash value and the preset hash value. Therefore, if the minimum value of the hash value similarities is smaller than the preset similarity threshold, it may be determined that the similarity between the first hash value and the preset hash value corresponding to that minimum value meets the preset similarity condition, and the preset identifier of the preset hash value corresponding to the minimum value of the hash value similarities may be used as the identifier corresponding to the image to be processed.
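A minimal sketch of this selection rule, reusing the hypothetical preset_hashes mapping and hamming_distance helper from the earlier sketches; the function name and the behaviour when no preset hash value is similar enough are assumptions.

    def identify(first_hash, preset_hashes, similarity_threshold):
        # Hash value similarity here is the first Hamming distance (smaller = more similar).
        similarities = {ident: hamming_distance(first_hash, preset)
                        for ident, preset in preset_hashes.items()}
        best_ident = min(similarities, key=similarities.get)
        if similarities[best_ident] < similarity_threshold:
            return best_ident   # preset identifier taken as the identifier of the image to be processed
        return None             # no preset hash value is similar enough to the first hash value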
In some embodiments, before determining, according to the hash value similarity, an identifier corresponding to the image to be processed from each preset identifier, the method further includes:
calculating a second Hamming distance between every two preset Hash values;
and determining the similarity threshold value according to the minimum value in the second Hamming distances.
In the embodiment of the present application, the similarity threshold may be determined according to the second Hamming distances. For example, in an application scenario, a designated online game may include 101 avatars for selection by the user, and each avatar may correspond to a preset hash value. In this case, 101 × 100 / 2 = 101 × 50 = 5050 second Hamming distances may be obtained between every two preset hash values; if the minimum value of these second Hamming distances is 16, the similarity threshold may be determined according to the minimum value 16. For example, in order to reduce interference from information such as image sharpness, the similarity threshold may be a value slightly higher than 16, such as 18.
Therefore, according to the embodiment of the application, the similarity threshold value can be reasonably determined by referring to each second Hamming distance, so that the accuracy of judging the similarity of the Hash value is improved, and the accuracy of determining the identifier corresponding to the image to be processed is improved.
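Following the 101-avatar example, the similarity threshold could be derived from the second Hamming distances roughly as follows, reusing the hypothetical hamming_distance helper; the margin of 2 mirrors the 16 to 18 example above and, like the function name, is an assumption.

    from itertools import combinations

    def derive_similarity_threshold(preset_hashes, margin=2):
        # Second Hamming distance between every two preset hash values:
        # 101 preset hash values give 101 * 100 / 2 = 5050 pairs.
        second_distances = [hamming_distance(a, b)
                            for a, b in combinations(preset_hashes.values(), 2)]
        return min(second_distances) + margin   # e.g. 16 + 2 = 18, as in the example above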
In some embodiments, before calculating the first hash value of the image to be processed, the method further includes:
step S201, after a preset instruction is detected, screenshot is carried out on a display interface of the terminal equipment to obtain a first image;
step S202, cutting a designated area in the first image, and taking the cut image as a second image;
step S203, performing size adjustment on the second image to obtain a second image after size adjustment, wherein the size of the second image after size adjustment is a preset size;
and step S204, carrying out graying processing on the second image after size adjustment to obtain the image to be processed.
In the embodiment of the application, through screenshot and cropping, image content such as a user head portrait or an avatar can be cut out of the display interface, that is, the second image is obtained. The second image may then be resized to avoid changes in image detail caused by differences in resolution, which would otherwise affect the identification of substantial differences when comparing image contents. In addition, because the resized second image has the preset size, the image size of the subsequent image to be processed is the preset size, so that the number of characters of the first hash value of the image to be processed is fixed, which facilitates subsequent calculation.
In addition, in the embodiment of the application, graying processing may be performed on the resized second image to obtain the image to be processed. The graying processing masks differences in image style, for example the influence of filters, cool/warm tones, and the like on the image, so that substantial differences between image contents can be recognized more reliably.
As shown in fig. 3, which is an exemplary schematic diagram of processing the second image to obtain an image to be processed.
The second image may be a color image. After the second image is obtained, it may be resized, for example scaled to a resized second image of 8 × 9 resolution. At this time, the resized second image is still a color image. Then, in order to mask differences in image style, for example to avoid the influence of filters, cool/warm tones, and the like on the image, graying processing may be performed on the resized second image to obtain the image to be processed.
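The preprocessing of steps S201 to S204 could be sketched as follows, again assuming Pillow; the crop box coordinates are hypothetical, and the 8 × 9 target size follows the figure description.

    from PIL import Image

    def preprocess(screenshot_path, crop_box=(100, 100, 228, 228)):
        first_image = Image.open(screenshot_path)   # S201: screenshot of the display interface
        second_image = first_image.crop(crop_box)   # S202: cut the designated area (box is hypothetical)
        resized = second_image.resize((8, 9))       # S203: resize to the preset size, e.g. 8 x 9
        return resized.convert("L")                 # S204: graying to mask style differences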
In some embodiments, before comparing the first hash value with at least two pre-stored preset hash values respectively to obtain hash value similarities of the first hash value respectively for the preset hash values, the method further includes:
acquiring at least two third images, wherein one third image corresponds to a preset identifier, the preset identifiers corresponding to different third images are different, the size of each third image is a preset size, and the third images are grayscale images;
and calculating the hash value of each third image, and taking the hash value of the third image as a preset hash value.
In this embodiment of the application, the third image may be regarded as a designated image corresponding to each preset hash value referred to in the foregoing embodiments.
Because the image to be processed has the preset size and is a grayscale image, each third image, which also has the preset size and is a grayscale image, can be obtained, so that the number of characters of the preset hash value corresponding to each third image is the same as the number of characters of the first hash value, which facilitates the subsequent comparison operation.
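Precomputing the preset hash values from the third images could look roughly like the following, reusing the hypothetical dhash_bits helper from the earlier sketch; the identifiers and file paths are illustrative.

    from PIL import Image

    def build_preset_hashes(third_image_paths):
        # third_image_paths: preset identifier -> path of the corresponding third image
        return {ident: dhash_bits(Image.open(path))
                for ident, path in third_image_paths.items()}

    preset_hashes = build_preset_hashes({136: "avatar_136.png", 137: "avatar_137.png"})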
In some embodiments, after resizing the second image to obtain a resized second image, further comprising:
acquiring first color information of the second image after size adjustment;
determining the identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, including:
respectively comparing the first color information with each preset color information to obtain the color similarity of the first color information respectively aiming at each preset color information, wherein each preset hash value corresponds to one preset color information;
and determining the identifier corresponding to the image to be processed according to the color similarity and the hash value similarity.
In the embodiment of the application, the image to be processed is a grayscale image, so the first color information of the second image can additionally be obtained. The first color information is compared with each piece of preset color information to obtain the color similarity of the first color information for each piece of preset color information, and the third image with the highest similarity to the image to be processed is then identified more accurately by combining the color similarities with the hash value similarities, so that the identifier corresponding to the image to be processed can be determined more accurately. Each piece of preset color information may be obtained from a corresponding fourth image, and each third image may be an image obtained by performing graying processing on the corresponding fourth image; therefore each fourth image corresponds to one third image, and correspondingly each preset hash value corresponds to one piece of preset color information.
The specific comparison mode for comparing the first color information with each preset color information can be determined according to the content of the first color information.
In some embodiments, the first color information may include a color distribution histogram of the second image and/or pixel values corresponding to respective pixel points of the second image.
If the first color information includes the color distribution histogram of the second image, each piece of preset color information correspondingly includes a preset color distribution histogram. A color distribution histogram may record the number of pixel points falling into each pixel value interval of the corresponding image. Therefore, the difference in the number of pixel points in each pixel value interval between the color distribution histogram of the second image and a preset color distribution histogram can be compared to determine the color similarity of the first color information for each piece of preset color information.
If the first color information includes the pixel values of the respective pixel points of the second image, each piece of preset color information correspondingly includes the preset pixel values of the preset pixel points in the corresponding designated image. Therefore, the difference between the pixel value of each pixel point in the second image and the preset pixel value of the corresponding preset pixel point in the designated image can be compared to determine the color similarity of the first color information for each piece of preset color information.
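One possible way to compare the first color information with a piece of preset color information is sketched below using per-channel color histograms; histogram intersection as the measure, the bin count and the helper names are assumptions, since the embodiment only requires comparing pixel counts per pixel value interval or per-pixel values.

    from PIL import Image

    def color_histogram(img, bins_per_channel=16):
        # Pixel counts per pixel value interval, per RGB channel.
        hist = img.convert("RGB").histogram()   # 3 * 256 counts
        step = 256 // bins_per_channel
        return [sum(hist[c * 256 + i: c * 256 + i + step])
                for c in range(3)
                for i in range(0, 256, step)]

    def color_similarity(hist_a, hist_b):
        # Normalized histogram intersection; a greater value means more similar colors.
        overlap = sum(min(a, b) for a, b in zip(hist_a, hist_b))
        return overlap / (sum(hist_a) or 1)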
In the embodiment of the application, a first hash value of an image to be processed can be calculated, so that the image to be processed can be represented by the first hash value; then the first hash value is compared with at least two preset hash values stored in advance to obtain the hash value similarity of the first hash value for each preset hash value, wherein each preset hash value corresponds to a preset identifier and the preset identifiers corresponding to different preset hash values are different; and the identifier corresponding to the image to be processed is determined from the preset identifiers according to the hash value similarities. At this time, a preset hash value similar to the first hash value can be determined from the preset hash values according to the hash value similarities, so that the identifier corresponding to the image to be processed is determined from the preset identifiers according to the preset identifier corresponding to that similar preset hash value. Long feature extraction and data processing with a complex algorithm such as a convolutional neural network model are not needed, the calculation amount is small, and the performance loss of the terminal device during operation is also small.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 shows a block diagram of a structure of an image identification device provided in the embodiment of the present application, which corresponds to the above-mentioned image identification method in the above embodiment, and only shows the relevant parts in the embodiment of the present application for convenience of description.
Referring to fig. 4, the image identification means 4 comprises:
a calculating module 401, configured to calculate a first hash value of an image to be processed;
a comparing module 402, configured to compare the first hash value with at least two pre-stored preset hash values respectively, to obtain hash value similarities of the first hash value respectively for the preset hash values, where each preset hash value corresponds to a preset identifier, and the preset identifiers corresponding to different preset hash values are different;
a determining module 403, configured to determine, according to the hash value similarity, an identifier corresponding to the image to be processed from each preset identifier, where the identifier corresponding to the image to be processed is used to indicate category information of the image to be processed.
Optionally, the comparing module 402 specifically includes:
a first calculating unit, configured to calculate, for each preset hash value, a first hamming distance between the first hash value and the preset hash value;
the first determining unit is used for determining the hash value similarity between the first hash value and the preset hash value according to a first Hamming distance between the first hash value and the preset hash value.
Optionally, the first determining unit is specifically configured to:
taking a first Hamming distance between the first hash value and the preset hash value as a hash value similarity between the first hash value and the preset hash value;
the determining module 403 is specifically configured to:
and if the minimum value in the hash value similarities is smaller than a preset similarity threshold, taking a preset identifier of a preset hash value corresponding to the minimum value in the hash value similarities as the identifier corresponding to the image to be processed.
Optionally, the image identification device 4 further includes:
the second calculation module is used for calculating a second Hamming distance between every two preset Hash values;
and the second determining module is used for determining the similarity threshold according to the minimum value in the second Hamming distances.
Optionally, the image identification device 4 further includes:
the screenshot module is used for screenshot on a display interface of the terminal equipment after a preset instruction is detected to obtain a first image;
the cutting module is used for cutting the designated area in the first image and taking the cut image as a second image;
the size adjusting module is used for adjusting the size of the second image to obtain a second image after size adjustment, wherein the size of the second image after size adjustment is a preset size;
and the gray processing module is used for carrying out gray processing on the second image after the size adjustment to obtain the image to be processed.
Optionally, the image identification device 4 further includes:
the acquisition module is used for acquiring at least two third images, wherein one third image corresponds to a preset identifier, the preset identifiers corresponding to different third images are different, the size of each third image is a preset size, and the third images are grayscale images;
and the third calculating module is used for calculating the hash value of each third image and taking the hash value of each third image as a preset hash value.
Optionally, the image identification device 4 further includes:
the third acquisition module is used for acquiring the first color information of the second image after the size adjustment;
the determining module 403 specifically includes:
the comparison unit is used for comparing the first color information with each piece of preset color information respectively to obtain the color similarity of the first color information aiming at each piece of preset color information respectively, wherein each preset hash value corresponds to one piece of preset color information;
and the second determining unit is used for determining the identifier corresponding to the image to be processed according to the color similarity and the hash value similarity.
In the embodiment of the application, a first hash value of an image to be processed can be calculated, so that the image to be processed can be represented by the first hash value; then, comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value for each preset hash value respectively, wherein each preset hash value corresponds to a preset identifier, and the preset identifiers corresponding to different preset hash values are different; and determining an identifier corresponding to the image to be processed from each preset identifier according to the similarity of the hash values, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed. At this time, according to the similarity of the hash values, a preset hash value which is similar to the first hash value is determined from the preset hash values, so that according to a preset identifier corresponding to the preset hash value which is similar to the first hash value, an identifier corresponding to the image to be processed is determined from the preset identifier, the type of the image to be processed is determined, long-time feature extraction and data processing do not need to be performed by adopting a complex algorithm such as a convolutional neural network model, the calculated amount is small, and the loss of terminal equipment during operation is small.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: at least one processor 50 (only one is shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, wherein the processor 50 executes the computer program 52 to implement the steps in the embodiment of the method for identifying any of the images.
The terminal device 5 may be a server, a mobile phone, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a desktop computer, a notebook computer, a palmtop computer, or another computing device. The terminal device may include, but is not limited to, a processor 50 and a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of the terminal device 5 and does not constitute a limitation of the terminal device 5, which may include more or fewer components than those shown, a combination of some of the components, or different components; for example, it may also include input devices, output devices, network access devices, etc. The input devices may include a keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, a camera, and the like, and the output devices may include a display, a speaker, and the like.
The Processor 50 may be a Central Processing Unit (CPU), and the Processor 50 may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field-Programmable Gate arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. In other embodiments, the memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The above-mentioned memory 51 may also be used to temporarily store data that has been output or is to be output.
In addition, although not shown, the terminal device 5 may further include network connection modules, such as a Bluetooth module, a Wi-Fi module, a cellular network module, and the like, which are not described herein again.
In this embodiment, when the processor 50 executes the computer program 52 to implement the steps in any of the above embodiments of the image identification method, a first hash value of an image to be processed may be calculated, so that the image to be processed may be represented by the first hash value; then, comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value for each preset hash value respectively, wherein each preset hash value corresponds to a preset identifier, and the preset identifiers corresponding to different preset hash values are different; and determining an identifier corresponding to the image to be processed from each preset identifier according to the similarity of the hash values, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed. At this time, according to the similarity of the hash values, a preset hash value which is similar to the first hash value is determined from the preset hash values, so that according to a preset identifier corresponding to the preset hash value which is similar to the first hash value, an identifier corresponding to the image to be processed is determined from the preset identifier, the type of the image to be processed is determined, long-time feature extraction and data processing do not need to be performed by adopting a complex algorithm such as a convolutional neural network model, the calculated amount is small, and the loss of terminal equipment during operation is small.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as an independent product. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image identification method, comprising:
calculating a first hash value of an image to be processed;
comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value respectively aiming at each preset hash value, wherein each preset hash value corresponds to one preset identifier, and the preset identifiers corresponding to different preset hash values are different;
and determining an identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed.
2. The image identification method according to claim 1, wherein the comparing the first hash value with at least two pre-set hash values stored in advance respectively to obtain the hash value similarity of the first hash value respectively for each pre-set hash value comprises:
calculating a first Hamming distance between the first hash value and each preset hash value;
and determining the hash value similarity between the first hash value and the preset hash value according to a first Hamming distance between the first hash value and the preset hash value.
3. The method for identifying an image according to claim 2, wherein said determining the hash value similarity between the first hash value and the preset hash value according to the first hamming distance between the first hash value and the preset hash value comprises:
taking a first Hamming distance between the first hash value and the preset hash value as a hash value similarity between the first hash value and the preset hash value;
determining the identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, including:
and if the minimum value in the hash value similarities is smaller than a preset similarity threshold, taking a preset identifier of a preset hash value corresponding to the minimum value in the hash value similarities as the identifier corresponding to the image to be processed.
4. The image identification method according to claim 3, before determining the identifier corresponding to the image to be processed from each of the preset identifiers according to the hash value similarity, further comprising:
calculating a second Hamming distance between every two preset Hash values;
and determining the similarity threshold value according to the minimum value in the second Hamming distances.
5. The method for identifying an image according to any one of claims 1 to 4, wherein before calculating the first hash value of the image to be processed, it further comprises:
after a preset instruction is detected, screenshot is conducted on a display interface of the terminal device, and a first image is obtained;
cutting a designated area in the first image, and taking the cut image as a second image;
carrying out size adjustment on the second image to obtain a second image after size adjustment, wherein the size of the second image after size adjustment is a preset size;
and carrying out graying processing on the second image after the size adjustment to obtain the image to be processed.
6. The image identification method according to claim 5, wherein before comparing the first hash value with at least two pre-set hash values stored in advance respectively to obtain the hash value similarity of the first hash value respectively for each pre-set hash value, the method further comprises:
acquiring at least two third images, wherein one third image corresponds to a preset identifier, the preset identifiers corresponding to different third images are different, the size of each third image is a preset size, and the third images are grayscale images;
and calculating the hash value of each third image, and taking the hash value of the third image as a preset hash value.
7. The method for identifying an image according to claim 5, wherein after resizing the second image to obtain a resized second image, further comprising:
acquiring first color information of the second image after size adjustment;
determining the identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, including:
respectively comparing the first color information with each preset color information to obtain the color similarity of the first color information respectively aiming at each preset color information, wherein each preset hash value corresponds to one preset color information;
and determining the identifier corresponding to the image to be processed according to the color similarity and the hash value similarity.
8. An apparatus for identifying an image, comprising:
the calculation module is used for calculating a first hash value of the image to be processed;
the comparison module is used for comparing the first hash value with at least two preset hash values stored in advance respectively to obtain hash value similarity of the first hash value aiming at each preset hash value respectively, wherein each preset hash value corresponds to one preset identifier, and the preset identifiers corresponding to different preset hash values are different;
and the determining module is used for determining the identifier corresponding to the image to be processed from each preset identifier according to the hash value similarity, wherein the identifier corresponding to the image to be processed is used for representing the category information of the image to be processed.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of identification of an image according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a method for identifying an image according to any one of claims 1 to 7.
CN202011192599.5A 2020-10-30 2020-10-30 Image identification method and device and terminal equipment Pending CN112215302A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011192599.5A CN112215302A (en) 2020-10-30 2020-10-30 Image identification method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011192599.5A CN112215302A (en) 2020-10-30 2020-10-30 Image identification method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN112215302A true CN112215302A (en) 2021-01-12

Family

ID=74057777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011192599.5A Pending CN112215302A (en) 2020-10-30 2020-10-30 Image identification method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN112215302A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203461A (en) * 2015-05-07 2016-12-07 中国移动通信集团公司 A kind of image processing method and device
US20180276528A1 (en) * 2015-12-03 2018-09-27 Sun Yat-Sen University Image Retrieval Method Based on Variable-Length Deep Hash Learning
US20200301961A1 (en) * 2018-03-12 2020-09-24 Tencent Technology (Shenzhen) Company Limited Image retrieval method and apparatus, system, server, and storage medium
CN109918532A (en) * 2019-03-08 2019-06-21 苏州大学 Image search method, device, equipment and computer readable storage medium
CN110472650A (en) * 2019-06-25 2019-11-19 福建立亚新材有限公司 A kind of recognition methods and system of fiber appearance grade
CN110516100A (en) * 2019-08-29 2019-11-29 武汉纺织大学 A kind of calculation method of image similarity, system, storage medium and electronic equipment
CN110942021A (en) * 2019-11-25 2020-03-31 腾讯科技(深圳)有限公司 Environment monitoring method, device, equipment and storage medium
CN111340109A (en) * 2020-02-25 2020-06-26 深圳市景阳科技股份有限公司 Image matching method, device, equipment and storage medium
CN111522989A (en) * 2020-07-06 2020-08-11 南京梦饷网络科技有限公司 Method, computing device, and computer storage medium for image retrieval

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VIJETHA GATTUPALLI et al.: "Weakly Supervised Deep Image Hashing Through Tag Embeddings", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
林计文 et al.: "Deep Hamming Embedding Hashing for Image Retrieval" (面向图像检索的深度汉明嵌入哈希), Pattern Recognition and Artificial Intelligence (模式识别与人工智能), vol. 33, no. 6 *

Similar Documents

Publication Publication Date Title
CN110889379B (en) Expression package generation method and device and terminal equipment
CN112580643B (en) License plate recognition method and device based on deep learning and storage medium
CN110503682B (en) Rectangular control identification method and device, terminal and storage medium
CN108961183B (en) Image processing method, terminal device and computer-readable storage medium
CN111290684B (en) Image display method, image display device and terminal equipment
CN110008997B (en) Image texture similarity recognition method, device and computer readable storage medium
CN111400553A (en) Video searching method, video searching device and terminal equipment
CN109215037B (en) Target image segmentation method and device and terminal equipment
CN111696080B (en) Face fraud detection method, system and storage medium based on static texture
CN112380978B (en) Multi-face detection method, system and storage medium based on key point positioning
CN111833285A (en) Image processing method, image processing device and terminal equipment
CN110969046A (en) Face recognition method, face recognition device and computer-readable storage medium
CN115529837A (en) Face recognition method and device for mask wearing, and computer storage medium
CN107992872B (en) Method for carrying out text recognition on picture and mobile terminal
CN112966719B (en) Method and device for recognizing instrument panel reading and terminal equipment
CN110619597A (en) Semitransparent watermark removing method and device, electronic equipment and storage medium
CN110287943B (en) Image object recognition method and device, electronic equipment and storage medium
CN113129298A (en) Definition recognition method of text image
CN112200109A (en) Face attribute recognition method, electronic device, and computer-readable storage medium
CN110610178A (en) Image recognition method, device, terminal and computer readable storage medium
WO2020124442A1 (en) Pushing method and related product
CN112215302A (en) Image identification method and device and terminal equipment
CN112989924B (en) Target detection method, target detection device and terminal equipment
CN111931794B (en) Sketch-based image matching method
CN111325656B (en) Image processing method, image processing device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
Effective date of abandoning: 20240913