CN111860272A - Image processing method, chip and electronic device - Google Patents


Info

Publication number
CN111860272A
CN111860272A (application CN202010671321.XA)
Authority
CN
China
Prior art keywords
descriptor
target
image
template image
key point
Prior art date
Legal status
Granted
Application number
CN202010671321.XA
Other languages
Chinese (zh)
Other versions
CN111860272B (en)
Inventor
张靖恺
龙文勇
李准
Current Assignee
Inferpoint Systems Shenzhen Ltd
Original Assignee
Inferpoint Systems Shenzhen Ltd
Priority date
Filing date
Publication date
Application filed by Inferpoint Systems Shenzhen Ltd filed Critical Inferpoint Systems Shenzhen Ltd
Priority to CN202010671321.XA priority Critical patent/CN111860272B/en
Priority to TW109137332A priority patent/TWI796610B/en
Publication of CN111860272A publication Critical patent/CN111860272A/en
Application granted granted Critical
Publication of CN111860272B publication Critical patent/CN111860272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1365 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Photoreceptors In Electrophotography (AREA)

Abstract

The application provides an image processing method comprising the following steps: acquiring a template image captured by an image acquisition device; obtaining a set of key points and a corresponding set of descriptors from the template image; confirming whether the description region of any descriptor in the descriptor set exceeds the edge of the template image; when a descriptor's description region exceeds the edge of the template image, marking that descriptor as a target descriptor and marking the key point on which it is based as a target key point; regenerating the target descriptor according to the target key point and a sample image; and updating the template image based on the regenerated target descriptor. The application also provides an image processing chip and an electronic device. The method and the device can improve the accuracy of image matching.

Description

Image processing method, chip and electronic device
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image processing method, a chip, and an electronic device.
Background
In a conventional image matching pipeline, key points are typically extracted first, and a region is then delineated around each key point to extract a descriptor. However, some images (such as fingerprints) cover only a small area; when a key point lies close to the image edge, the region of the descriptor generated from it is incomplete, so the descriptor's information is incomplete and the matching effect is poor.
Disclosure of Invention
In view of the above problems, the present application provides an image processing method, a chip and an electronic device to improve accuracy of image matching.
A first aspect of the present application provides an image processing method, the method comprising:
acquiring a template image acquired by image acquisition equipment;
extracting key points in the template image to obtain a key point set;
generating a descriptor based on each key point in the key point set to obtain a descriptor set;
confirming whether the description region of any descriptor in the descriptor set exceeds the edge of the template image;
when a descriptor's description region exceeds the edge of the template image, marking that descriptor as a target descriptor, and marking the key point on which it is based as a target key point;
regenerating a target descriptor according to the target key point and the sample image; and
updating the template image based on the regenerated target descriptor.
According to some embodiments of the application, the method further comprises:
calculating the size of an area of the target descriptor beyond the edge of the template image;
comparing whether the calculated area is larger than or equal to a preset area;
and when the calculated area is larger than or equal to the preset area, regenerating a target descriptor according to the target key point and the sample image.
According to some embodiments of the application, regenerating the target descriptor from the target key point and the sample image comprises:
overlaying the target descriptor with the sample image;
obtaining the target key point according to the target descriptor;
determining a target position of the target keypoint in the sample image;
and taking the target position as a key point of the sample image, and regenerating a target descriptor according to the key point of the sample image.
According to some embodiments of the application, the sample image is an image that matches the template image.
According to some embodiments of the application, the updating the template image based on the regenerated target descriptor comprises:
confirming key points in the template image based on the regenerated target descriptors;
matching a target descriptor in the template image according to the confirmed key points;
replacing the matched target descriptor with the regenerated target descriptor to update the template image.
A second aspect of the present application provides an image processing chip, the chip comprising:
the acquisition module is used for acquiring a template image acquired by the image acquisition equipment;
the extraction module is used for extracting key points in the template image to obtain a key point set;
a generating module, configured to generate a descriptor based on each key point in the key point set, so as to obtain a descriptor set;
a confirming module, configured to confirm whether the description region of any descriptor in the descriptor set exceeds the edge of the template image;
the marking module is used for marking the descriptor of which the description area exceeds the edge of the template image as a target descriptor when the description area of the descriptor exceeds the edge of the template image;
the generating module is further used for regenerating a target descriptor according to the target key point and the sample image; and
an update module to update the template image based on the regenerated target descriptor.
According to some embodiments of the present application, the generating module is further configured to:
calculating the size of an area of the target descriptor beyond the edge of the template image;
comparing whether the calculated area is larger than or equal to a preset area;
and when the calculated area is larger than or equal to the preset area, regenerating a target descriptor according to the target key point and the sample image.
According to some embodiments of the present application, the generating module is further configured to:
overlaying the target descriptor with the sample image;
obtaining the target key point according to the target descriptor;
determining a target position of the target keypoint in the sample image;
and taking the target position as a key point of the sample image, and regenerating a target descriptor according to the key point of the sample image.
According to some embodiments of the present application, the update module is further configured to:
confirming key points in the template image based on the regenerated target descriptors;
matching a target descriptor in the template image according to the confirmed key points;
and replacing the matched target descriptor with the regenerated target descriptor to update the template image.
A third aspect of the present application provides an electronic device, comprising: a processor; and a memory storing a plurality of program modules, the program modules being loaded and executed by the processor to perform the image processing method described above.
According to the image processing method, the chip and the electronic device, the key points at the edge of the image in the template image are determined, and the descriptors corresponding to the key points are updated through the sample image, so that the possibility that the key points at the edge of the image are matched can be increased, and the accuracy of subsequent matching of the image is improved.
Drawings
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of key points in a template image a according to an embodiment of the present application.
Fig. 3 is a schematic diagram of descriptors generated according to keypoints in a template image a according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a sample image B overlaid on the template image a according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an image processing chip according to an embodiment of the present application.
Fig. 6 is a schematic view of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, a detailed description of the present application will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application; the described embodiments are merely a subset of the embodiments of the present application, not an exhaustive enumeration.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Referring to Fig. 1, which is a schematic flowchart of an image processing method according to an embodiment of the present application: the order of the steps in the flowchart may be changed, and some steps may be omitted, according to different needs. For convenience of explanation, only the portions related to the embodiments of the present application are shown. The image processing method is applied to an electronic device and, as shown in Fig. 1, includes the following steps.
Step S1, acquiring the template image captured by the image acquisition device.
In an embodiment, the image capturing device may be a fingerprint capturing device, and the fingerprint capturing device may be disposed in an intelligent terminal such as a mobile phone, a tablet computer, and an industrial device, and is configured to capture a fingerprint of a user for identity authentication. The fingerprint acquisition equipment can also be arranged in an attendance device and is used for acquiring fingerprints of users to check attendance and the like. The fingerprint acquisition device can be a device for acquiring fingerprint images by means of an optical fingerprint acquisition technology, a capacitive sensor fingerprint acquisition technology, an ultrasonic fingerprint acquisition technology, an electromagnetic fingerprint acquisition technology or the like.
In other embodiments, the image capture device may also be a camera. The template image can be a person image, an animal image, a scene image and the like shot by the camera.
In this embodiment, when images acquired by the image acquisition device during use need to be matched, the template image is used to match the subsequently acquired images. However, the template image may contain key points at its edge whose descriptor regions are incomplete, which degrades the matching effect. To improve matching accuracy, the template image can be processed by the image processing method provided by the application, completing the descriptor regions of the key points at the edge of the template image.
Step S2, extracting key points in the template image to obtain a key point set.
The keypoints are points associated with features or characteristics of the template image, and may also be called points of interest or feature points. For example, keypoints may lie on the contours of objects in the template image. In a fingerprint image, keypoints may be minutiae such as end points, bifurcation points, isolated points, loop points, center points, and triangle points of the fingerprint ridges. In a face image, keypoints may be a left-eye region, a right-eye region, a nose region, a left mouth-corner region, a right mouth-corner region, and the like.
In this embodiment, the algorithms for extracting the key points in the template image include the Harris corner detection algorithm, the Scale-Invariant Feature Transform (SIFT) feature detection algorithm, the Speeded-Up Robust Features (SURF) feature detection algorithm, the ORB (Oriented FAST and Rotated BRIEF) feature detection algorithm, and the like.
For example, as shown in the template image a shown in fig. 2, keypoints a1 to a keypoint a9 in the template image a are extracted, and a keypoint set { a1, a2, A3, a4, a5, a6, a7, A8, a9} is obtained.
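As an illustration of the first of these detectors, the sketch below computes a minimal Harris corner response in plain NumPy. The crude 3×3 box window (standing in for the usual Gaussian weighting) and the constant k = 0.04 are illustrative assumptions, not the implementation used by the application:

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Minimal Harris corner response map for a grayscale image."""
    gy, gx = np.gradient(gray.astype(float))   # gradients along rows, cols
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box3(a):
        # Crude 3x3 box filter standing in for the usual Gaussian window.
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy                # determinant of structure tensor
    trace = Sxx + Syy
    return det - k * trace * trace             # Harris corner measure

# A bright square on a dark background: its corners score highest.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
```

Corners (two strong gradient directions) yield a positive response; flat regions stay near zero and straight edges go negative, which is the property keypoint extraction relies on.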
Before step S2, the image processing method may further include preprocessing the template image. Specifically, preprocessing the template image includes graying the template image, binarizing the features to be recognized, and the like.
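The preprocessing step above can be sketched as follows. The BT.601 luminance weights and the fixed threshold of 128 are assumptions for illustration; the text does not specify the graying or binarization method:

```python
import numpy as np

def preprocess(image_rgb, threshold=128):
    """Gray an RGB image, then binarize it with a fixed threshold.

    The BT.601 weights and the threshold value are illustrative assumptions.
    """
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2])
    binary = (gray >= threshold).astype(np.uint8)   # 1 = foreground
    return gray, binary

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = 255                                       # top half white
gray, binary = preprocess(img)
```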
Step S3, generating descriptors based on each key point in the key point set to obtain a descriptor set.
In this embodiment, descriptors are created from suitable keypoints in the template image to characterize the image. In one embodiment, the descriptor D may be a SIFT-type descriptor. Specifically, for each keypoint, the local region around the keypoint is rotated to align with its dominant orientation to ensure rotational invariance. In the rotated region, a 16×16 rectangular window centered on the keypoint is uniformly divided into 16 sub-blocks of 4×4 pixels each. On each sub-block, gradient accumulation values are computed for the eight directions n×π/4 (n = 0, 1, …, 7); the 16 sub-blocks thus yield 16 × 8 = 128 values in total, and this 1×128 vector is defined as the descriptor D of the keypoint.
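The 16×16-window, 4×4-sub-block, eight-orientation scheme just described can be sketched as below. This is a simplified illustration (no rotation to the dominant orientation, no Gaussian weighting, no trilinear interpolation), not a full SIFT implementation:

```python
import numpy as np

def sift_like_descriptor(patch):
    """1x128 descriptor for a 16x16 patch: 4x4 sub-blocks x 8 orientation bins.

    Simplified sketch of the scheme in the text.
    """
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # orientation in [0, 2*pi)
    bins = (ang / (np.pi / 4)).astype(int) % 8      # eight pi/4-wide bins
    hist = np.zeros((4, 4, 8))
    for r in range(16):
        for c in range(16):
            hist[r // 4, c // 4, bins[r, c]] += mag[r, c]
    desc = hist.ravel()                             # 16 sub-blocks x 8 = 128
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

rng = np.random.default_rng(0)
d = sift_like_descriptor(rng.random((16, 16)))
```

Each sub-block contributes an 8-bin orientation histogram weighted by gradient magnitude; concatenating the 16 histograms gives the 128-dimensional vector described above.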
Although the examples discussed refer to SIFT-type descriptors, similar considerations apply when different descriptor types are used, e.g. Speeded-Up Robust Features (SURF), Histogram of Oriented Gradients (HOG), or possibly others. In addition, in other embodiments, descriptors need not be built only from luminance-gradient data; they may also incorporate, for example, chroma gradients, saturation gradients, or even full color (luminance, saturation, and chroma) gradients.
In this embodiment, the descriptor may be implemented as a multi-dimensional descriptor (e.g., 128-dimensional) computed over a support region of a specific radius around a given keypoint. For example, the radius may be set to 15 pixels.
As shown in the template image A of Fig. 3, a descriptor is generated from each keypoint in the set {A1, A2, A3, A4, A5, A6, A7, A8, A9}: descriptor D_A1 is generated from keypoint A1, descriptor D_A2 from keypoint A2, and so on up to descriptor D_A9 from keypoint A9. The resulting descriptor set is {D_A1, D_A2, D_A3, D_A4, D_A5, D_A6, D_A7, D_A8, D_A9}.
Step S4, confirming whether the description region of any descriptor in the descriptor set exceeds the edge of the template image. When some descriptor's description region exceeds the edge of the template image, the flow advances to step S5; when no description region exceeds the edge, the flow ends.
To ensure the separability of the keypoints' descriptors, the description region of a descriptor is chosen to be as large as practical. To ensure a sufficient number of keypoints in the template image, keypoints may also be distributed in the edge region of the image. Together, these can cause a descriptor's description region to extend beyond the image edge. For example, in the template image A shown in Fig. 3, the description regions of descriptors D_A1, D_A2, D_A3, D_A8, and D_A9 extend beyond the edges of the template image.
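The check in step S4 can be sketched as a bounds test, assuming a square support region of radius 15 pixels around each keypoint (the example radius given earlier) and an (x, y) keypoint convention with image shape (height, width) — both conventions are assumptions:

```python
def region_exceeds_edge(keypoint, image_shape, radius=15):
    """True if the square support region around keypoint (x, y) sticks out
    past any border of an image of shape (height, width)."""
    x, y = keypoint
    h, w = image_shape
    return (x - radius < 0 or y - radius < 0
            or x + radius > w - 1 or y + radius > h - 1)

# Keypoints whose region exceeds the edge become target keypoints (step S5).
targets = [kp for kp in [(10, 50), (50, 50), (90, 50)]
           if region_exceeds_edge(kp, (100, 100))]
```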
Step S5, marking a descriptor of the description region beyond the edge of the template image as a target descriptor, and marking a keypoint in the target descriptor as a target keypoint.
Since the information of a descriptor that extends beyond the edge of the template image is incomplete, the matching effect may be poor in subsequent image matching. Therefore, to address this, the descriptors extending beyond the edge of the template image need to be completed, improving the image matching precision.
In one embodiment, after the target descriptor is marked, it may be determined whether the target descriptor needs to be updated. When the area by which the description region of the target descriptor exceeds the edge of the template image is small, the description region can still describe the template image accurately, and the target descriptor need not be updated; when that area is large, the target descriptor cannot accurately describe the template image and needs to be updated.
Specifically, the size of an area beyond the edge of the template image in the description area of the target descriptor is calculated, and whether the calculated area is greater than or equal to a preset area is compared. When the calculated area is greater than or equal to the preset area, it is determined that the target descriptor needs to be updated, and the flow proceeds to step S6; and when the calculated area is smaller than the preset area, confirming that the target descriptor does not need to be updated, and ending the process.
For example, in the template image A shown in Fig. 3, the areas by which descriptors D_A1 and D_A9 extend beyond the edge of the template image are small, so descriptors D_A1 and D_A9 need not be updated; the areas by which descriptors D_A2, D_A3, and D_A8 extend beyond the edge are large, so descriptors D_A2, D_A3, and D_A8 need to be updated.
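The area comparison can be sketched as below, again assuming a square support region of radius 15 pixels; the preset area is left as a parameter because the text does not fix its value:

```python
def out_of_bounds_area(keypoint, image_shape, radius=15):
    """Pixels of the square support region that fall outside the image."""
    x, y = keypoint
    h, w = image_shape
    side = 2 * radius + 1
    # Extent of the clipped, in-image part of the region.
    in_w = max(0, min(x + radius, w - 1) - max(x - radius, 0) + 1)
    in_h = max(0, min(y + radius, h - 1) - max(y - radius, 0) + 1)
    return side * side - in_w * in_h

def needs_update(keypoint, image_shape, preset_area, radius=15):
    """Regenerate the descriptor only when the overhang is large enough."""
    return out_of_bounds_area(keypoint, image_shape, radius) >= preset_area
```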
Step S6, regenerating a target descriptor according to the target key point and the sample image.
In this embodiment, the sample image is an image matched with the template image; the sample image is complete and of good quality. For example, the similarity between the sample image and the template image is greater than a first preset value, or the proportion of key points in the sample image that match key points in the template image is greater than a second preset value. It should be noted that the sample image must contain complete descriptor information for the keypoints located at the edge of the template image; that is, the region described by a descriptor at the edge of the template image must be complete in the corresponding region of the sample image.
It is to be understood that, in the present embodiment, the sample image may be an image stored in the electronic device in advance, or may be an image captured by the image capturing apparatus.
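The two criteria for qualifying a sample image can be expressed as a simple predicate. The threshold values below are placeholders for the unspecified "first preset value" and "second preset value" in the text:

```python
def is_valid_sample(similarity, match_ratio,
                    first_preset=0.9, second_preset=0.8):
    """True if an image qualifies as a sample image for the template.

    `first_preset` and `second_preset` are placeholder values; the text
    does not specify the actual thresholds.
    """
    return similarity > first_preset or match_ratio > second_preset
```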
Specifically, the regenerating of the target descriptor from the target keypoints and the sample image comprises:
overlaying the target descriptor with the sample image;
obtaining the target key point according to the target descriptor;
determining a target position of the target keypoint in the sample image;
and taking the target position as a key point of the sample image, and regenerating a target descriptor according to the key point of the sample image.
It should be noted that when the sample image is overlaid on the target descriptor, the keypoints at the edge of template image A must fall within the overlap region with sample image B. As shown in Fig. 4, the target keypoints A2, A3, and A8 in the template image all fall within the overlap region of the template image and the sample image, i.e., they are covered by sample image B. The positions of the target keypoints in sample image B are determined and taken as keypoints of sample image B, namely keypoints B1, B2, and B3. Target descriptors D_B1, D_B2, and D_B3 are then generated in sample image B from keypoints B1, B2, and B3, respectively.
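Determining the target position of a target keypoint in the sample image amounts to a coordinate transform under the overlay alignment. The sketch below assumes a pure-translation alignment (dx, dy); the text does not fix the alignment model, so this is illustrative only:

```python
def map_keypoint(kp_template, offset):
    """Map a target keypoint from template coordinates to sample-image
    coordinates, assuming the overlay is a pure translation (dx, dy)."""
    x, y = kp_template
    dx, dy = offset
    return (x + dx, y + dy)
```

The mapped position is then used as a keypoint of the sample image, and the descriptor is regenerated around it with the full support region available.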
Step S7, updating the template image based on the regenerated target descriptors.
In this embodiment, the target descriptors in the template image are replaced with the regenerated target descriptors. Specifically: confirm the keypoints in the template image based on the regenerated target descriptors; match the target descriptors in the template image according to the confirmed keypoints; and replace each matched target descriptor with its regenerated counterpart to update the template image.
It should be noted that the keypoint of each regenerated target descriptor corresponds to a keypoint of a target descriptor in the template image. For example, as shown in Fig. 4, keypoint A2 in the template image is identified from the regenerated target descriptor D_B1, the target descriptor D_A2 in the template image is matched according to the identified keypoint A2, and finally D_A2 is replaced with the regenerated descriptor D_B1. Similarly, target descriptor D_A3 is replaced with the regenerated descriptor D_B2, and target descriptor D_A8 with the regenerated descriptor D_B3.
It should be noted that, when the template image is updated based on the regenerated target descriptors, checks on image-region integrity and image quality can be added to ensure the accuracy of the update.
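Step S7 can be sketched as a keypoint-indexed replacement. Representing the template's descriptors as a mapping from keypoint to descriptor is an assumption made for illustration:

```python
def update_template(descriptors, regenerated):
    """Return a copy of the template's descriptor table with each matched
    target descriptor replaced by its regenerated counterpart."""
    updated = dict(descriptors)
    for kp, desc in regenerated.items():
        if kp in updated:          # replace only descriptors matched by keypoint
            updated[kp] = desc
    return updated

# E.g. a regenerated D_B1 replaces the template's D_A2 (keyed by keypoint A2).
template = {"A2": [1, 0], "A4": [0, 1]}
regen = {"A2": [9, 9]}
updated = update_template(template, regen)
```

Returning a new mapping rather than mutating in place keeps the original template available for the integrity and quality checks mentioned above.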
Figs. 1-4 describe the image processing of the present application in detail; with this method, the efficiency and accuracy of image processing can be improved. The functional modules of the chip and the hardware device architecture that implement the image processing are described below with reference to Figs. 5 and 6. It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Fig. 5 is a structural diagram of a chip for image processing according to an embodiment of the present application.
In some embodiments, the image processing chip 200 may include a plurality of functional modules composed of program code segments to implement the image processing function.
Referring to fig. 5, in this embodiment, the chip 200 for image processing may be divided into a plurality of functional modules according to the functions performed by the chip, and the functional modules are used for executing the steps in the corresponding embodiment of fig. 1 to realize the functions of image processing. In this embodiment, the functional blocks of the image processing chip 200 include: an acquisition module 201, an extraction module 202, a generation module 203, a confirmation module 204, a marking module 205, and an update module 206. The functions of the respective functional blocks will be described in detail in the following embodiments.
The acquiring module 201 is configured to acquire a template image acquired by an image acquiring device.
The extracting module 202 is configured to extract the key points in the template image to obtain a key point set.
In one embodiment, the keypoints may be minutiae in the fingerprint image, such as end points, bifurcation points, isolated points, loop points, center points, and triangle points of the fingerprint ridges. The keypoints may also be a left-eye region, a right-eye region, a nose region, a left mouth-corner region, a right mouth-corner region, and the like in a face image.
In this embodiment, the algorithms for extracting the key points in the template image include the Harris corner detection algorithm, the Scale-Invariant Feature Transform (SIFT) feature detection algorithm, the Speeded-Up Robust Features (SURF) feature detection algorithm, the ORB (Oriented FAST and Rotated BRIEF) feature detection algorithm, and the like.
The generating module 203 is configured to generate a descriptor based on each keypoint in the set of keypoints, resulting in a descriptor set.
The confirming module 204 is configured to confirm whether the description region of any descriptor in the descriptor set exceeds the edge of the template image.
The marking module 205 is configured to mark a descriptor whose description region exceeds the edge of the template image as a target descriptor, and to mark the key point on which the target descriptor is based as a target key point.
In one embodiment, after marking the target descriptor, the marking module 205 is further configured to determine whether the target descriptor needs to be updated. When the area of the description region of the target descriptor beyond the edge of the template image is small, the description region of the target descriptor can accurately describe the template image without updating the target descriptor; when the area of the description of the object descriptor beyond the edge of the template image is large, the object descriptor cannot accurately describe the template image, and the object descriptor needs to be updated.
The generating module 203 is further configured to regenerate a target descriptor from the target keypoints and the sample image.
In particular, the generating module 203 is configured to overlay the target descriptor with the sample image;
obtaining the target key point according to the target descriptor;
determining a target position of the target keypoint in the sample image;
and taking the target position as a key point of the sample image, and regenerating a target descriptor according to the key point of the sample image.
The update module 206 is configured to update the template image based on the regenerated target descriptor.
In this embodiment, the target descriptor in the template image may be replaced with the regenerated target descriptor. Specifically: confirm the keypoints in the template image based on the regenerated target descriptors; match the target descriptors in the template image according to the confirmed keypoints; and replace each matched target descriptor with its regenerated counterpart to update the template image.
Fig. 6 is a schematic diagram of functional modules of an electronic device according to an embodiment of the present disclosure. The electronic device 10 comprises a memory 11, a processor 12 and a computer program 13, such as a program for image processing, stored in the memory 11 and executable on the processor 12.
In this embodiment, the electronic device 10 may be, but is not limited to, a smart phone, a tablet computer, a smart industrial device, a fingerprint attendance machine, and the like.
The processor 12, when executing the computer program 13, implements the steps of the image processing method in the above method embodiments, for example to identify a captured fingerprint image. Alternatively, the processor 12, when executing the computer program 13, realizes the functions of the modules/units in the chip embodiments.
Illustratively, the computer program 13 may be partitioned into one or more modules/units, which are stored in the memory 11 and executed by the processor 12 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 13 in the electronic device 10. For example, the computer program 13 may be partitioned into the modules 201-206 in Fig. 5.
It will be understood by those skilled in the art that Fig. 6 is merely an example of the electronic device 10 and does not constitute a limitation on it; the electronic device 10 may include more or fewer components than shown, some components may be combined, or different components may be used. For example, the electronic device 10 may further include input and output devices.
The processor 12 may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor. The processor 12 is the control center of the electronic device 10 and connects the various parts of the entire electronic device 10 using various interfaces and lines.
The memory 11 may be used for storing the computer program 13 and/or the modules/units, and the processor 12 implements the various functions of the electronic device 10 by running or executing the computer program and/or the modules/units stored in the memory 11 and calling data stored in the memory 11. The memory 11 may include an external storage medium and may also include internal memory. In addition, the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the integrated modules/units of the electronic device 10 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a template image acquired by image acquisition equipment;
extracting key points in the template image to obtain a key point set;
generating a descriptor based on each key point in the key point set to obtain a descriptor set;
confirming whether a description region of a descriptor in the descriptor set exceeds an edge of the template image;
when the description region of a descriptor exceeds the edge of the template image, marking the descriptor whose description region exceeds the edge of the template image as a target descriptor, and marking the key point of the target descriptor as a target key point;
regenerating a target descriptor according to the target key point and a sample image; and
updating the template image based on the regenerated object descriptor.
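For illustration only (and not as part of the claims), the edge check of claim 1 can be sketched in Python. The square description region of fixed half-width around each key point, and the function names, are illustrative assumptions not stated in the claims:

```python
# Sketch of the claim-1 flow: flag descriptors whose description
# region extends past the template image edge. A square region of
# half-width `half` around each (row, col) key point is assumed.

def region_exceeds_edge(keypoint, image_shape, half=4):
    """True if the description region around `keypoint` leaves the image."""
    r, c = keypoint
    h, w = image_shape
    return r - half < 0 or c - half < 0 or r + half >= h or c + half >= w

def mark_targets(keypoint_set, image_shape, half=4):
    """Split the key point set into interior key points and 'target'
    key points whose description regions exceed the template edge."""
    targets = [kp for kp in keypoint_set
               if region_exceeds_edge(kp, image_shape, half)]
    interior = [kp for kp in keypoint_set
                if not region_exceeds_edge(kp, image_shape, half)]
    return interior, targets
```

In this sketch, the target key points returned by `mark_targets` are the ones whose descriptors would later be regenerated from the sample image.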
2. The image processing method of claim 1, wherein the method further comprises:
calculating the size of an area of the target descriptor beyond the edge of the template image;
comparing whether the calculated area is larger than or equal to a preset area;
and when the calculated area is larger than or equal to the preset area, regenerating a target descriptor according to the target key point and the sample image.
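A minimal sketch of the claim-2 threshold, under the same assumed square description region; the preset area is an input parameter here, as the claim leaves its value open:

```python
# Sketch of claim 2: regenerate a descriptor only when the part of its
# description region lying outside the template image is at least a
# preset area. The square region of half-width `half` is an assumption.

def out_of_bounds_area(keypoint, image_shape, half=4):
    """Area (in pixels) of the description region outside the image."""
    r, c = keypoint
    h, w = image_shape
    side = 2 * half + 1                      # full region is side x side
    rows_in = max(0, min(r + half, h - 1) - max(r - half, 0) + 1)
    cols_in = max(0, min(c + half, w - 1) - max(c - half, 0) + 1)
    return side * side - rows_in * cols_in   # full area minus clipped area

def needs_regeneration(keypoint, image_shape, preset_area, half=4):
    """Compare the calculated out-of-bounds area against the preset area."""
    return out_of_bounds_area(keypoint, image_shape, half) >= preset_area
```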
3. The image processing method of claim 1, wherein the regenerating a target descriptor according to the target key point and the sample image comprises:
overlaying the target descriptor with the sample image;
obtaining the target key point according to the target descriptor;
determining a target position of the target keypoint in the sample image;
and taking the target position as a key point of the sample image, and regenerating a target descriptor according to the key point of the sample image.
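The regeneration steps of claim 3 can be sketched as follows. Modeling the overlay of template and sample image as a pure translation `offset`, and the descriptor as the raw pixel patch at the target position, are both simplifying assumptions made only for this illustration:

```python
import numpy as np

# Sketch of claim 3: overlay the template onto a matching sample image,
# map the target key point to its target position in the sample image,
# and regenerate the descriptor there.

def regenerate_descriptor(target_keypoint, sample_image, offset=(0, 0), half=4):
    """Extract a patch descriptor at the mapped target position."""
    r = target_keypoint[0] + offset[0]       # target position in the sample image
    c = target_keypoint[1] + offset[1]
    patch = sample_image[r - half:r + half + 1, c - half:c + half + 1]
    return patch.astype(np.float32).ravel()  # descriptor from the sample patch
```

A key point near the template edge can map to an interior position of the sample image, so the regenerated description region no longer spills over an edge.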
4. The image processing method according to claim 1, wherein the sample image is an image matching the template image.
5. The image processing method of claim 1, wherein the updating the template image based on the regenerated target descriptor comprises:
confirming key points in the template image based on the regenerated target descriptors;
matching a target descriptor in the template image according to the confirmed key points;
and replacing the matched target descriptor by the regenerated target descriptor, and updating the template image.
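A sketch of the claim-5 update step; representing the template as a mapping from key point to descriptor is an assumption made for this illustration:

```python
# Sketch of claim 5: the template stores one descriptor per key point;
# each regenerated descriptor is matched by its confirmed key point and
# replaces the old target descriptor, updating the template.

def update_template(template, regenerated):
    """template, regenerated: dicts mapping key point -> descriptor."""
    for keypoint, descriptor in regenerated.items():
        if keypoint in template:             # match by the confirmed key point
            template[keypoint] = descriptor  # replace the matched descriptor
    return template
```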
6. An image processing chip, characterized in that the chip comprises:
an acquisition module, configured to acquire a template image captured by an image capture device;
an extraction module, configured to extract key points in the template image to obtain a key point set;
a generation module, configured to generate a descriptor based on each key point in the key point set to obtain a descriptor set;
a confirmation module, configured to confirm whether a description region of a descriptor in the descriptor set exceeds an edge of the template image;
a marking module, configured to, when the description region of a descriptor exceeds the edge of the template image, mark the descriptor whose description region exceeds the edge of the template image as a target descriptor, and mark the key point of the target descriptor as a target key point;
wherein the generation module is further configured to regenerate a target descriptor according to the target key point and a sample image; and
an update module, configured to update the template image based on the regenerated target descriptor.
7. The image processing chip of claim 6, wherein the generation module is further configured to:
calculating the size of an area of the target descriptor beyond the edge of the template image;
comparing whether the calculated area is larger than or equal to a preset area;
and when the calculated area is larger than or equal to the preset area, regenerating a target descriptor according to the target key point and the sample image.
8. The image processing chip of claim 6, wherein the generation module is further configured to:
overlaying the target descriptor with the sample image;
obtaining the target key point according to the target descriptor;
determining a target position of the target keypoint in the sample image;
and taking the target position as a key point of the sample image, and regenerating a target descriptor according to the key point of the sample image.
9. The image processing chip of claim 6, wherein the update module is further configured to:
confirming key points in the template image based on the regenerated target descriptors;
matching a target descriptor in the template image according to the confirmed key points;
and replacing the matched target descriptor by the regenerated target descriptor, and updating the template image.
10. An electronic device, comprising:
a processor; and
a memory in which a plurality of program modules are stored, the program modules being loaded by the processor to execute the image processing method according to any one of claims 1 to 5.
CN202010671321.XA 2020-07-13 2020-07-13 Image processing method, chip and electronic device Active CN111860272B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010671321.XA CN111860272B (en) 2020-07-13 2020-07-13 Image processing method, chip and electronic device
TW109137332A TWI796610B (en) 2020-07-13 2020-10-27 Image processing method, chip, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010671321.XA CN111860272B (en) 2020-07-13 2020-07-13 Image processing method, chip and electronic device

Publications (2)

Publication Number Publication Date
CN111860272A true CN111860272A (en) 2020-10-30
CN111860272B CN111860272B (en) 2023-10-20

Family

ID=72984323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010671321.XA Active CN111860272B (en) 2020-07-13 2020-07-13 Image processing method, chip and electronic device

Country Status (2)

Country Link
CN (1) CN111860272B (en)
TW (1) TWI796610B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160155011A1 (en) * 2014-12-02 2016-06-02 Xerox Corporation System and method for product identification
CN106415606A (en) * 2014-02-14 2017-02-15 河谷控股Ip有限责任公司 Edge-based recognition, systems and methods
CN106485264A (en) * 2016-09-20 2017-03-08 河南理工大学 Divided based on gradient sequence and the curve of mapping policy is described and matching process
US20170140206A1 (en) * 2015-11-16 2017-05-18 MorphoTrak, LLC Symbol Detection for Desired Image Reconstruction
CN107797733A (en) * 2016-09-01 2018-03-13 奥多比公司 For selecting the technology of the object in image
CN110197184A (en) * 2019-04-19 2019-09-03 哈尔滨工业大学 A kind of rapid image SIFT extracting method based on Fourier transformation
CN110717927A (en) * 2019-10-10 2020-01-21 桂林电子科技大学 Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110765857A (en) * 2019-09-12 2020-02-07 敦泰电子(深圳)有限公司 Fingerprint identification method, chip and electronic device
CN110781911A (en) * 2019-08-15 2020-02-11 腾讯科技(深圳)有限公司 Image matching method, device, equipment and storage medium
CN110929741A (en) * 2019-11-22 2020-03-27 腾讯科技(深圳)有限公司 Image feature descriptor extraction method, device, equipment and storage medium
CN111369605A (en) * 2020-02-27 2020-07-03 河海大学 Infrared and visible light image registration method and system based on edge features


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹健 (Cao Jian): "Research on Image Target Recognition Technology Based on Local Features", 中国博士学位论文电子期刊网 (China Doctoral Dissertations Electronic Journals) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308027A (en) * 2020-11-23 2021-02-02 敦泰电子(深圳)有限公司 Image matching method, biological recognition chip and electronic device
CN112308027B (en) * 2020-11-23 2022-04-01 敦泰电子(深圳)有限公司 Image matching method, biological recognition chip and electronic device

Also Published As

Publication number Publication date
TWI796610B (en) 2023-03-21
CN111860272B (en) 2023-10-20
TW202117591A (en) 2021-05-01

Similar Documents

Publication Publication Date Title
Rutishauser et al. Is bottom-up attention useful for object recognition?
CN111814194B (en) Image processing method and device based on privacy protection and electronic equipment
Ortega-Delcampo et al. Border control morphing attack detection with a convolutional neural network de-morphing approach
CN103150561A (en) Face recognition method and equipment
Hammudoglu et al. Portable trust: biometric-based authentication and blockchain storage for self-sovereign identity systems
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN106650568B (en) Face recognition method and device
CN111191568A (en) Method, device, equipment and medium for identifying copied image
CN110866466A (en) Face recognition method, face recognition device, storage medium and server
Saad et al. Defocus blur-invariant scale-space feature extractions
JP7121132B2 (en) Image processing method, apparatus and electronic equipment
CN112101200A (en) Human face anti-recognition method, system, computer equipment and readable storage medium
KR102558736B1 (en) Method and apparatus for recognizing finger print
CN110751071A (en) Face recognition method and device, storage medium and computing equipment
CN111222452A (en) Face matching method and device, electronic equipment and readable storage medium
CN109117693B (en) Scanning identification method based on wide-angle view finding and terminal
Kumar et al. Non-overlapped blockwise interpolated local binary pattern as periocular feature
CN111860272B (en) Image processing method, chip and electronic device
WO2022199395A1 (en) Facial liveness detection method, terminal device and computer-readable storage medium
Brogan et al. Spotting the difference: Context retrieval and analysis for improved forgery detection and localization
Fawwad Hussain et al. Gray level face recognition using spatial features
Verma et al. Secure rotation invariant face detection system for authentication
CN112633281A (en) Vehicle identity authentication method and system based on Hash algorithm
CN110210425B (en) Face recognition method and device, electronic equipment and storage medium
CN112766162A (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant