CN114779975A - Processing method and device for finger and palm print image viewing interface and electronic system - Google Patents

Info

Publication number
CN114779975A
Authority
CN
China
Prior art keywords
image
area
tool
comparison
ratio
Prior art date
Legal status
Pending
Application number
CN202210335801.8A
Other languages
Chinese (zh)
Inventor
张之蔚
张晓华
王刚
汤林鹏
邰骋
Current Assignee
Beijing Jianmozi Technology Co ltd
Original Assignee
Moqi Technology Beijing Co ltd
Beijing Jianmozi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Moqi Technology Beijing Co ltd, Beijing Jianmozi Technology Co ltd filed Critical Moqi Technology Beijing Co ltd
Priority to CN202210335801.8A
Publication of CN114779975A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53 - Querying
    • G06F 16/532 - Query formulation, e.g. graphical querying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 - Selection of displayed objects or displayed text elements

Abstract

The invention provides a processing method and device for a finger and palm print image viewing interface, and an electronic system, wherein the method comprises the following steps: displaying a first image in the first image area and displaying a second image in the second image area; in response to the in-comparison region tool in the viewing tool area being in the selected state, displaying a first mark on a first in-comparison region of the first image and a second mark on a second in-comparison region of the second image; wherein the first and second in-comparison regions are at least one pair of mutually corresponding regions of the first and second images; the first mark and the second mark correspond to each other. The invention can improve the viewing efficiency and the accuracy of the viewing result.

Description

Processing method and device for finger and palm print image viewing interface and electronic system
Technical Field
The invention relates to the technical field of image processing, in particular to a processing method and device for a finger and palm print image inspection interface and an electronic system.
Background
For the current fingerprint and palm print comparison system, if a query image is input into the fingerprint and palm print comparison system, the system will output a plurality of (e.g. 100) candidate images corresponding to the query image. Since the fingerprints collected in the field are often incomplete, the comparison result may not be accurate enough, and a viewing worker needs to perform further inspection on the candidate images to determine which of the candidate images really match the query image.
The existing viewing system usually performs feature visualization through conventional components and interaction modes of GUI design languages, and provides conventional tools such as a zooming tool and a marking tool. These tools are usually functionally independent, and the viewing worker can usually only operate on the canvas where an image is located as a whole. This mode makes it difficult to meet the viewing worker's need to compare minutiae between the query image and the candidate image; readability is poor, which affects viewing efficiency and the accuracy of the viewing result.
Disclosure of Invention
In view of the above, the present invention provides a processing method, a processing device and an electronic system for a finger-palm print image inspection interface, so as to improve the efficiency of finger-palm print inspection and the accuracy of finger-palm print inspection result.
In a first aspect, an embodiment of the present invention provides a processing method for a finger and palm print image viewing interface, where the finger and palm print image includes a fingerprint image and/or a palm print image, a graphical user interface of a viewing system is provided by an electronic device, and the content displayed on the graphical user interface includes a first image area, a second image area and a viewing tool area, and the method includes: displaying a first image in the first image area and displaying a second image in the second image area; in response to the in-comparison region tool in the viewing tool area being in a selected state, displaying a first mark on a first in-comparison region of the first image and a second mark on a second in-comparison region of the second image; wherein the first and second in-comparison regions are at least one pair of mutually corresponding regions of the first and second images; the first mark and the second mark correspond to each other.
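The region-marking step of the first aspect can be sketched as follows; this is a minimal illustration, assuming a rectangle representation of regions and dictionary-based marks (`Region`, `make_markers` and all field names are hypothetical, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int  # top-left corner in the image's own coordinate system
    y: int
    w: int
    h: int

def make_markers(region_pairs):
    """For each (first_region, second_region) pair, return a pair of mark
    dicts sharing a label, so the UI can render them as corresponding
    (e.g. with the same color or number)."""
    markers = []
    for idx, (r1, r2) in enumerate(region_pairs):
        markers.append((
            {"canvas": "first", "region": r1, "label": str(idx + 1)},
            {"canvas": "second", "region": r2, "label": str(idx + 1)},
        ))
    return markers

pairs = [(Region(10, 10, 40, 40), Region(12, 8, 40, 40)),
         (Region(80, 60, 30, 30), Region(78, 63, 30, 30))]
m = make_markers(pairs)  # two pairs of corresponding marks
```

Each pair of marks carries the same label, which realizes the "first mark and second mark correspond to each other" requirement in a simple way.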
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the method further includes: in response to the feature point tool in the viewing tool area being in the selected state, displaying a first in-comparison feature point on the first image as a first feature point mark and a second in-comparison feature point on the second image as a second feature point mark; wherein the first and second in-comparison feature points are feature points matched with each other in the first and second in-comparison regions.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the method further includes: in response to the feature point tool in the viewing tool area being in the selected state, displaying a first unmatched feature point on the first image as a third feature point mark and a second unmatched feature point on the second image as a fourth feature point mark.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where there is at least one first in-comparison region and at least one second in-comparison region; the step of displaying a first mark on a first in-comparison region of the first image and a second mark on a second in-comparison region of the second image comprises: for any first in-comparison region, determining, according to the confidence corresponding to that first in-comparison region, a current first mark for it and a current second mark for the second in-comparison region corresponding to it; displaying the current first mark on the first in-comparison region, and displaying the current second mark on the corresponding second in-comparison region; wherein the current first mark and the current second mark are the same; the confidence corresponding to a first in-comparison region is the confidence of the match between that first in-comparison region and its corresponding second in-comparison region; and first in-comparison regions with the same confidence have the same current first mark.
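The confidence-to-mark rule of the third implementation manner can be sketched as a simple lookup; the thresholds and styles below are assumptions for illustration, not values from the patent:

```python
def marker_for_confidence(confidence):
    """Map a region pair's match confidence to a mark style. Pairs with
    equal confidence necessarily get identical marks, and the same style
    is used for both the first and the second in-comparison region."""
    if confidence >= 0.9:
        return {"color": "green", "style": "solid"}
    if confidence >= 0.6:
        return {"color": "orange", "style": "dashed"}
    return {"color": "red", "style": "dotted"}

# The current first mark and current second mark are the same.
first_mark = marker_for_confidence(0.95)
second_mark = marker_for_confidence(0.95)
```

Because the style is a pure function of the confidence, two first in-comparison regions with the same confidence automatically receive the same current first mark, as the claim requires.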
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the method further includes: in response to detecting that a cursor of the electronic device selects a current first in-comparison region among the first in-comparison regions, displaying a first selected mark on the current first in-comparison region; determining, among the second in-comparison regions, the current second in-comparison region corresponding to the current first in-comparison region; and displaying a second selected mark on the current second in-comparison region; and/or, in response to detecting that the cursor selects a current first in-comparison feature point among the first in-comparison feature points, displaying the current first in-comparison feature point as a third selected mark; determining, among the second in-comparison feature points, the current second in-comparison feature point corresponding to the current first in-comparison feature point; and displaying the current second in-comparison feature point as a fourth selected mark; the cursor comprises a mouse cursor or a touch point cursor.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the method further includes: acquiring a linear transformation matrix corresponding to the first image and the second image; the linear transformation matrix is determined according to position information of a first target object, wherein the first target object comprises at least one of: all mutually corresponding first and second in-comparison regions, part of the mutually corresponding first and second in-comparison regions, all mutually corresponding first and second in-comparison feature points, and part of the mutually corresponding first and second in-comparison feature points; the position information comprises the coordinates of the first target object in the corresponding image coordinate system; in response to an alignment tool or a synchronization tool in the viewing tool area being in a selected state, aligning the first image and the second image according to the linear transformation matrix.
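One concrete way such a linear transformation matrix could be obtained from corresponding feature-point coordinates is an exact affine fit to three point pairs; this is a sketch under that assumption (the patent does not specify the estimation method, and `affine_from_points` is an illustrative name):

```python
def affine_from_points(src, dst):
    """Given three (x, y) points in the first image (src) and their
    corresponding points in the second image (dst), return the 2x3
    affine matrix [[a, b, tx], [c, d, ty]] that maps src onto dst."""
    (x0, y0), (x1, y1), (x2, y2) = src

    def solve_row(v0, v1, v2):
        # Solve the 3x3 system [xi, yi, 1] @ [p, q, r] = vi by Cramer's rule.
        det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)
        dp = v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)
        dq = x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)
        dr = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
              + v0 * (x1 * y2 - x2 * y1))
        return dp / det, dq / det, dr / det

    row_x = solve_row(dst[0][0], dst[1][0], dst[2][0])
    row_y = solve_row(dst[0][1], dst[1][1], dst[2][1])
    return [list(row_x), list(row_y)]

# A pure translation by (2, 3) is recovered as [[1, 0, 2], [0, 1, 3]].
A = affine_from_points([(0, 0), (1, 0), (0, 1)], [(2, 3), (3, 3), (2, 4)])
```

With more than three correspondences (e.g. all in-comparison feature points), a least-squares fit would be used instead; the exact three-point solve keeps the sketch dependency-free.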
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the step of aligning the first image and the second image according to the linear transformation matrix includes: according to the linear transformation matrix, performing coordinate transformation on at least one of the second image, the second mark and the selected mark in the second image area to obtain the aligned first image and second image; the selected mark is a mark corresponding to an in-comparison region and/or an in-comparison feature point selected by a cursor of the electronic device, and the cursor comprises a mouse cursor or a touch point cursor.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the first image area corresponds to a first canvas and the second image area corresponds to a second canvas, and the method further includes: detecting the position of a cursor of the electronic device on the first canvas when the synchronization tool in the viewing tool area is in the selected state, the cursor comprising a mouse cursor or a touch point cursor; and determining the corresponding position of the cursor on the second canvas according to the position of the cursor on the first canvas and the linear transformation matrix, and displaying a marker symbol corresponding to the cursor at that position on the second canvas.
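The synchronized-cursor step reduces to applying the matrix to the cursor point; a minimal sketch, assuming a 2x3 affine matrix layout (`map_cursor` is an illustrative name, not from the patent):

```python
def map_cursor(matrix, x, y):
    """Apply a 2x3 affine matrix [[a, b, tx], [c, d, ty]] to the cursor
    position (x, y) on the first canvas, yielding the position at which
    the cursor's marker symbol is drawn on the second canvas."""
    (a, b, tx), (c, d, ty) = matrix
    return a * x + b * y + tx, c * x + d * y + ty

# With a pure translation of (5, -2) between the two canvases, a cursor
# at (10, 10) on the first canvas maps to (15, 8) on the second.
pos = map_cursor([[1, 0, 5], [0, 1, -2]], 10, 10)
```

In practice this mapping would run on every cursor-move event while the synchronization tool stays selected.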
With reference to the first aspect, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where the method further includes: in response to a first operation event for the first image while the synchronization tool in the viewing tool area is in the selected state, synchronously executing a response action associated with the first operation event on the first image and the second image according to the linear transformation matrix; the first operation event includes at least one of: a reset event, a translation event, a rotation event, a brightness/contrast adjustment event, an enhancement event, and a zoom event.
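The synchronous execution described above can be sketched as a dispatcher that replays each operation event on both image states; the event names follow the list above, while the state fields and handler table are assumptions for illustration:

```python
def apply_event(state, event, value):
    """Apply one operation event to an image's view state."""
    state = dict(state)
    if event == "zoom":
        state["scale"] *= value
    elif event == "rotate":
        state["angle"] += value
    elif event == "translate":
        dx, dy = value
        state["offset"] = (state["offset"][0] + dx, state["offset"][1] + dy)
    elif event == "reset":
        state = {"scale": 1.0, "angle": 0.0, "offset": (0, 0)}
    return state

def dispatch_synced(first, second, event, value=None):
    # While the synchronization tool is selected, the response action
    # runs on the first and the second image together.
    return apply_event(first, event, value), apply_event(second, event, value)

init = {"scale": 1.0, "angle": 0.0, "offset": (0, 0)}
f, s = dispatch_synced(init, init, "zoom", 2.0)
```

A fuller implementation would additionally route the event through the linear transformation matrix so that, e.g., a translation on the first canvas becomes the correctly rotated translation on the second.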
With reference to the first aspect, an embodiment of the present invention provides a ninth possible implementation manner of the first aspect, where the method further includes: in response to a change in status for a first tool in the viewing tool zone, setting a status of a second tool according to a preconfigured association of the first tool with the second tool; the states of the first tool and the second tool include: a selected state and a locked state.
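The preconfigured association of the ninth implementation manner can be modeled as a lookup table keyed by (tool, new state); the specific tool names and pairings below are hypothetical examples, not taken from the patent:

```python
# Hypothetical association table: when the first tool enters a state,
# the second tool is set to the linked state.
ASSOCIATIONS = {
    ("sync_tool", "selected"): ("align_tool", "locked"),
    ("sync_tool", "deselected"): ("align_tool", "unlocked"),
}

def on_state_change(tool_states, tool, new_state):
    """Record a tool's state change and propagate it to any tool linked
    through the preconfigured association."""
    tool_states = dict(tool_states)
    tool_states[tool] = new_state
    linked = ASSOCIATIONS.get((tool, new_state))
    if linked:
        other_tool, other_state = linked
        tool_states[other_tool] = other_state
    return tool_states

states = on_state_change(
    {"sync_tool": "deselected", "align_tool": "unlocked"},
    "sync_tool", "selected")
```

Keeping the associations in data rather than code makes the tool linkage configurable, which matches the "preconfigured association" wording.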
With reference to the first aspect, an embodiment of the present invention provides a tenth possible implementation manner of the first aspect, where a web browser of the electronic device provides a graphical user interface of a viewing system, and the graphical user interface is rendered in a virtual DOM tree structure.
In a second aspect, an embodiment of the present invention further provides a processing apparatus for a finger and palm print image viewing interface, where the finger and palm print image includes a fingerprint image and/or a palm print image, a graphical user interface of a viewing system is provided by an electronic device, and the content displayed on the graphical user interface includes a first image area, a second image area and a viewing tool area. The apparatus includes: an image display module, configured to display a first image in the first image area and a second image in the second image area; and an in-comparison region marking module, configured to display, in response to the in-comparison region tool in the viewing tool area being in a selected state, a first mark on a first in-comparison region of the first image and a second mark on a second in-comparison region of the second image; wherein the first and second in-comparison regions are at least one pair of mutually corresponding regions of the first and second images; the first mark and the second mark correspond to each other.
In a third aspect, an embodiment of the present invention further provides an electronic system, including an image acquisition device, a processing device and a storage device; the image acquisition device is used for acquiring an image to be detected; the storage device stores a computer program which, when run by the processing device, performs the above processing method for the finger and palm print image viewing interface.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processing device, it performs the steps of the above processing method for the finger and palm print image viewing interface.
The embodiment of the invention brings the following beneficial effects:
when the in-comparison region tool in the viewing tool area is in the selected state, a first mark is displayed on the first in-comparison region of the first image and a second mark is displayed on the second in-comparison region of the second image. Displaying the images with the in-comparison regions marked when the tool is selected lets the user intuitively determine the corresponding regions in the two images, so that the user can conveniently and carefully check the information in the in-comparison regions, and further determine the matching degree of the first image (such as a finger and palm print candidate image) and the second image (such as a finger and palm print query image).
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic system according to an embodiment of the invention;
FIG. 2 is a flowchart illustrating a processing method for a finger-palm print image viewing interface according to a second embodiment of the present invention;
FIG. 3 is a diagram of a graphical user interface according to a second embodiment of the present invention;
FIG. 4 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 5 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 6 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 7 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 8 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 9 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 10 is a diagram of another graphical user interface provided by a second embodiment of the present invention;
FIG. 11 is a diagram illustrating another example of a graphical user interface provided by the second embodiment of the present invention;
FIG. 12 is a diagram illustrating another graphical user interface provided by a second embodiment of the present invention;
FIG. 13 is a schematic view of a tool region for inspection according to a second embodiment of the present invention;
FIG. 14 is a schematic structural diagram illustrating a processing apparatus for a finger-palm print image inspection interface according to a third embodiment of the present disclosure;
FIG. 15 is a schematic structural diagram of another processing apparatus for a finger-palm print image viewing interface according to a fourth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The viewing system is a system for manually comparing the similarity between a query image and candidate images. The candidate images are usually a plurality of images (for example, 50 or 100) with higher similarity that the comparison system determines after one query image is input into it. The viewing worker inspects the minutiae in the query image and in the candidate images displayed by the viewing system, and then selects the candidate image most similar to the query image.
In order to provide an effective interaction mode that conforms to viewing habits and to improve the viewing efficiency of a user, the processing method, device and system for the finger and palm print image viewing interface provided by the embodiments of the invention can display information such as the in-comparison regions where the minutiae in the images are located and the correspondence between in-comparison regions, improving the viewing efficiency and the accuracy of the viewing result for images, particularly finger and palm print images.
For ease of understanding, the following detailed description of embodiments of the invention is provided.
The first embodiment is as follows:
first, an example electronic system 100 for implementing the method and apparatus for processing a finger-palm print image viewing interface according to an embodiment of the present invention is described with reference to fig. 1.
As shown in FIG. 1, an electronic system 100 includes one or more processing devices 102, one or more memory devices 104, an input device 106, an output device 108, and one or more image capture devices 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic system 100 shown in fig. 1 are exemplary only, and not limiting, and that the electronic system may have other components and structures as desired.
The processing device 102 may be a server, an intelligent terminal, or a device including a Central Processing Unit (CPU) or other form of processing unit having data processing capability and/or instruction execution capability, may process data of other components in the electronic system 100, and may control other components in the electronic system 100 to perform the functions of processing of the finger-palm print image viewing interface.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM), cache memory, or the like. Non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on a computer-readable storage medium and executed by the processing device 102 to implement client functionality (implemented by the processing device) and/or other desired functionality in the embodiments of the present invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image acquisition device 110 may acquire an image to be detected, for example, a finger and palm print query image, and store the acquired image in the storage device 104 for use by other components. The image acquisition device 110 may be a camera device with a network communication function: it may acquire an on-site image, transmit it to the processing device 102 over the network in time, and store it in the storage device 104, so that the acquired image can serve as a query image in subsequent comparison and viewing processing. Alternatively, for application scenarios with low timeliness requirements, the image acquisition device 110 may be an image acquisition interface that connects to an external device such as a camera or a camera memory card and reads a query image from the external device for subsequent comparison and viewing processing.
It can be understood that, when a query image is input to the comparison system, the comparison system determines a plurality of candidate images corresponding to the query image and provides a comparison result between the query image and each candidate image. The comparison result includes the mutually corresponding in-comparison regions and the mutually corresponding in-comparison feature points of the query image and the candidate image, and may also include information such as the confidence of the mutually corresponding in-comparison regions, the similarity of the in-comparison feature points, and whether an in-comparison feature point is located in an in-comparison region. The viewing system displays the information required for user viewing according to the comparison result.
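A hypothetical shape for the comparison result described above is sketched below; all field names are assumptions chosen for illustration, as the patent does not specify a data format:

```python
# One comparison result between the query image and one candidate image:
# region pairs with confidences, and feature-point pairs with a
# similarity and an in-region flag.
comparison_result = {
    "region_pairs": [
        {"query_region": (10, 10, 40, 40),       # (x, y, w, h)
         "candidate_region": (12, 8, 40, 40),
         "confidence": 0.92},
        {"query_region": (80, 60, 30, 30),
         "candidate_region": (78, 63, 30, 30),
         "confidence": 0.55},
    ],
    "feature_point_pairs": [
        {"query_point": (25, 30), "candidate_point": (27, 28),
         "similarity": 0.88, "inside_matched_region": True},
    ],
}

# The viewing system reads this structure to decide which marks to draw,
# e.g. emphasizing high-confidence region pairs.
high_confidence = [p for p in comparison_result["region_pairs"]
                   if p["confidence"] >= 0.8]
```

Keeping confidences and similarities in the result lets the viewing interface derive mark styles (as in the third implementation manner) without re-running the comparison.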
For example, the processing method, apparatus and electronic system for implementing the finger-palm print image viewing interface according to the embodiment of the present invention may be integrally configured, or may be separately configured, such as integrally configuring the processing device 102, the storage device 104, the input device 106 and the output device 108, and configuring the image capturing device 110 separately and connected to the processing device 102 via a wireless network. When the devices in the above electronic system are integrally provided, the electronic system may be implemented as an intelligent terminal such as a smart phone, a tablet computer, a computer, and the like.
The second embodiment:
The embodiment of the invention provides a processing method for a finger and palm print image viewing interface. The method is executed by an electronic device, which may be a terminal device (such as a mobile phone or a computer) that has a human-computer interaction function and is capable of communication. A graphical user interface of the viewing system is provided by the electronic device, and the content displayed on the graphical user interface includes a first image area, a second image area and a viewing tool area. The display positions of the first image area, the second image area and the viewing tool area can be set according to actual needs; for example, the first image area and the second image area may be located below the viewing tool area, and the display positions are not limited. The viewing tool area can display a plurality of viewing tools, which are interactive operation tools; the viewing system is provided with a processing program corresponding to each viewing tool, so that each tool's response action is realized through its processing program. Referring to fig. 2, the processing method of the finger and palm print image viewing interface includes the following steps:
In step S202, a first image is displayed in the first image area, and a second image is displayed in the second image area.
Referring to the schematic diagram of a graphical user interface shown in fig. 3, the first image area is located on the left, the second image area on the right, and the viewing tool area above both image areas; the viewing tool area is provided with n tools (M1 to Mn in fig. 3); an image A (i.e. the first image) is displayed in the first image area, and an image B (i.e. the second image) in the second image area.
Step S204, in response to the in-comparison region tool in the viewing tool area being in the selected state, displaying a first mark on a first in-comparison region of the first image and a second mark on a second in-comparison region of the second image; wherein the first and second in-comparison regions are at least one pair of mutually corresponding regions of the first and second images; the first mark and the second mark correspond to each other.
The above in-comparison region tool may be triggered into the selected state by a certain operation (referred to as a "selection operation" for short). The selection operation may be a non-touch operation, such as clicking or double-clicking the in-comparison region tool with a mouse, or pressing a designated key of a keyboard (such as the shortcut key corresponding to the tool); or it may be a touch operation performed by pressing the in-comparison region tool with a touch medium (such as a finger or a stylus). The specific form can be determined according to actual needs and is not limited.
In addition, the first mark and the second mark may be the same or different, and either may include a color, an icon, a character, a frame, and the like, which is not limited. Continuing the previous example, in fig. 3, image A (i.e. the first image) includes two in-comparison regions A1 and A2 (i.e. first in-comparison regions) and a non-in-comparison region A3, and image B (i.e. the second image) includes two in-comparison regions B1 and B2 (i.e. second in-comparison regions) and a non-in-comparison region B3, where A1 corresponds to B1 and A2 corresponds to B2; A3 and B3 are called non-in-comparison regions because they have no matching region. Taking the in-comparison region tool M1 as an example, when the user performs the above selection operation on M1, the electronic device sets M1 to the selected state (indicated by hatching in fig. 3) in response to the selection operation, displays a first mark on the in-comparison regions A1 and A2 of image A, and displays a second mark corresponding to the first mark on the in-comparison regions B1 and B2 of image B.
In the processing method for the finger and palm print image viewing interface provided by this embodiment, when the region tool in the viewing tool area is in the selected state, the first mark is displayed on the first matched region of the first image and the second mark is displayed on the second matched region of the second image. Displaying the images with their matched regions marked once the region tool is selected lets the user intuitively see what proportion of each image is matched and how the matched regions of the two images correspond, so the user can judge at a glance how well the first image (e.g., a finger and palm print candidate image) matches the second image (e.g., a finger and palm print query image) without consulting feature point information; this improves both the efficiency of the examination and the accuracy of its result.
On the basis of the above processing method, to further improve the flexibility of image examination, the method may further include the following operation: in response to the feature point tool in the viewing tool area being in the selected state, displaying the first matched feature points on the first image with a first feature point mark and the second matched feature points on the second image with a second feature point mark.
It can be understood that when the comparison system compares the query image against the images in the base library, it computes a similarity score between the query image and each library image, and then takes as candidate images the top N library images whose similarity scores exceed a certain value (e.g., 80) and/or are the highest. After determining the candidate images for the query image, the comparison system provides a comparison result for the query image and each candidate image. The comparison result includes the mutually corresponding matched regions and matched feature points of the query image and the candidate image, as well as information such as the confidence of each matched region, the similarity of each matched feature point pair, and whether a matched feature point lies inside a matched region.
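As a hedged illustration of this candidate-selection step (the threshold of 80 is the value from the example above; the function name, the value of N, and the image identifiers are all hypothetical), the filtering and ranking can be sketched as:

```python
def select_candidates(scores, threshold=80, top_n=5):
    """Keep base-library images whose similarity score with the query
    exceeds the threshold, then return the top-N by descending score."""
    hits = [(image_id, s) for image_id, s in scores.items() if s > threshold]
    hits.sort(key=lambda pair: pair[1], reverse=True)
    return [image_id for image_id, _ in hits[:top_n]]

# illustrative scores between one query image and four library images
scores = {"B": 91.2, "C": 85.7, "D": 78.9, "E": 88.1}
candidates = select_candidates(scores, top_n=3)  # highest-scoring hits first
```

The comparison result for each surviving candidate would then carry the matched regions, matched feature points, and confidences described above.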
When the examination system compares the query image with a candidate image, the query image is the first image and the candidate image is the second image. A pair of feature points with high similarity, one in each image, is called a matched feature point pair; a matched feature point usually lies inside a matched region, but may also lie in an unmatched region. The first feature point mark and the second feature point mark may be the same or different, and each may include a color, an icon, text, a frame, and the like, without limitation. Continuing from fig. 3, taking fig. 4 as an example, the region tool is M1, the feature point tool is M4, and A1 is the first matched region. When the user performs the selection operation on M4, the electronic device sets M4 to the selected state in response (the selected state is indicated by hatching in fig. 4) and displays feature point P1 in A1 (i.e., the first matched feature point) and feature point P1' in B1 (i.e., the second matched feature point) as triangle marks on image A (i.e., the first image) and image B (i.e., the second image), respectively; the triangle marks here are the first and second feature point marks. Alternatively, P1 and P1' may both be displayed in a first color, such as purple, in which case the first color serves as the first and second feature point marks.
By displaying feature point marks on the images in this way, the user can directly see which feature points in the two compared images match each other, and can therefore readily check whether each match is correct; this makes the examination more intuitive, reduces its operational complexity, and further improves its efficiency.
As a possible implementation, to visually distinguish feature points from non-feature points, and matched feature points from unmatched ones, the processing method for the finger and palm print image viewing interface may further include the following operation: in response to the feature point tool in the viewing tool area being in the selected state, displaying the first unmatched feature points on the first image with a third feature point mark and the second unmatched feature points on the second image with a fourth feature point mark. Non-feature points are the pixels of an image other than its feature points, and unmatched feature points are the feature points other than the matched ones. The third and fourth feature point marks may both use a second color different from the first color, for example purple for the first color and yellow for the second.
Specifically, after the feature points in the first matched region are matched against those in the second matched region, the result contains not only a number of matched feature point pairs but also some feature points for which matching failed (i.e., unmatched feature points); the unmatched feature points in the first image are the first unmatched feature points, and those in the second image are the second unmatched feature points. The third and fourth feature point marks may be the same or different, and each may include, without limitation, a color, an icon, text, a frame, and the like.
Continuing from fig. 4, taking fig. 5 as an example, feature point Q1 in A1 is a first unmatched feature point and feature point Q2 in B1 is a second unmatched feature point. In response to M4 being in the selected state (indicated by hatching in fig. 5), the electronic device displays Q1 and Q2 as circular marks (i.e., the third and fourth feature point marks in this case) on image A (i.e., the first image) and image B (i.e., the second image), respectively. This further improves the recognizability of the feature points in the two compared images, allowing the user to distinguish matched from unmatched feature points.
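A minimal sketch of this two-style marking rule (the triangle/purple and circle/yellow styles follow the examples above; the function and identifier names are illustrative, not part of the claimed method):

```python
MATCHED_STYLE = {"shape": "triangle", "color": "purple"}   # first/second marks
UNMATCHED_STYLE = {"shape": "circle", "color": "yellow"}   # third/fourth marks

def style_feature_points(point_ids, matched_ids):
    """Assign the matched style to matched feature points and the
    unmatched style to every remaining feature point."""
    return {
        pid: (MATCHED_STYLE if pid in matched_ids else UNMATCHED_STYLE)
        for pid in point_ids
    }

# P1 is a matched feature point, Q1 an unmatched one (as in figs. 4 and 5)
styles = style_feature_points(["P1", "Q1"], matched_ids={"P1"})
```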
As a possible embodiment, there is at least one first matched region and at least one second matched region. In step S204, the first mark may be displayed on the first matched region and the second mark on the second matched region as follows: for each first matched region, determine the current first mark for that region, and the current second mark for its corresponding second matched region, according to the confidence associated with the first matched region; then display the current first mark on the first matched region and the current second mark on the corresponding second matched region. The current first mark and the current second mark are the same; the confidence associated with a first matched region is the confidence with which it matches its corresponding second matched region; and first matched regions with equal confidences receive the same current first mark.
It will be appreciated that the confidence of a matched region indicates how reliably a region in one image matches its corresponding region in the other image; mutually corresponding matched regions share the same confidence. For example, in fig. 3, A1 of image A corresponds to B1 of image B, and A1 and B1 have the same confidence. The electronic device obtains the confidence of each matched region of the first image displayed in the first image area and of the second image displayed in the second image area; for each matched region, it determines a color mark according to the region's confidence and marks the region on the first image and the second image with that color.
Specifically, where the current first and second marks are color marks, the confidence (i.e., the similarity probability) may be divided into several confidence levels (e.g., below 60%, 60%-80%, above 80%), each level corresponding to one color mark; for each matched region, the color mark is determined from the region's confidence and displayed on the region. For example, as shown in fig. 3, suppose the three confidence levels above correspond to red, yellow, and blue marks. After the electronic device determines that A1 and B1 have a confidence of 70% and A2 and B2 a confidence of 62%, it assigns yellow marks to A1 and A2 (the current first marks) and to B1 and B2 (the current second marks), and displays yellow marks on A1, B1, A2, and B2.
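The banding described above can be sketched as a small lookup function (the three bands and colors are the ones from the example; treating 60% and 80% as band boundaries is an assumption, since the source does not say which band the endpoints belong to):

```python
def confidence_color(confidence):
    """Map a matched-region confidence (in percent) to a marker color
    using the three example bands: <60 red, 60-80 yellow, >80 blue."""
    if confidence < 60:
        return "red"
    if confidence <= 80:
        return "yellow"
    return "blue"

# A1/B1 at 70% and A2/B2 at 62% both fall in the yellow band, as in fig. 3
marks = {region: confidence_color(c)
         for region, c in [("A1", 70), ("B1", 70), ("A2", 62), ("B2", 62)]}
```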
Displaying marks on the matched regions in this way lets the user directly identify the corresponding regions of the two compared images, making the examination more intuitive, reducing its operational complexity, and further improving its efficiency. Moreover, displaying different marks on matched regions with different confidences makes it easy to distinguish the boundaries of the matched regions within one image, while also visually conveying how reliably each pair of regions matches across the two images.
It is understood that the examination system can display the matched regions, their confidences, and the matched feature points, and can determine the matched regions in image B from those in image A and the corresponding feature points in image B from the matched feature points in image A, because the comparison result provided by the comparison system contains all of this information. During matching, the comparison system matches not only at the feature point level but also at the region level, obtaining the degree and relationships of matching at both levels (for example, which region of image A matches which region of image B, and which feature point of image A matches which feature point of image B); a linear transformation matrix can then be determined from these matched feature point and matched region relationships.
On the basis of the above processing method for the finger and palm print image viewing interface, to further improve the accuracy of image examination, the method may further include at least one of the following two operations:
Operation 1: in response to detecting that a cursor of the electronic device selects a current first matched region among the first matched regions, displaying a first selected mark on the current first matched region; determining, among the second matched regions, the current second matched region corresponding to the current first matched region; and displaying a second selected mark on the current second matched region.
Operation 2: in response to detecting that the cursor selects a current first matched feature point among the first matched feature points, displaying the current first matched feature point with a third selected mark; determining, among the second matched feature points, the current second matched feature point corresponding to the current first matched feature point; and displaying the current second matched feature point with a fourth selected mark.
the cursor comprises a mouse cursor or a touch point cursor.
For Operation 1 above, continuing from fig. 3 and taking fig. 6 as an example, M1 is the region tool, A1 is the first matched region, and B1 is the second matched region. The user moves the mouse cursor to select A1 (the selection may be made, for example, by holding the cursor within A1 for longer than a preset time). In response, the electronic device displays a first selected mark on A1 (the shaded layer covering A1 in fig. 6), determines from the preset region correspondence that the second matched region corresponding to A1 is B1, and then displays a second selected mark on B1 (the shaded layer covering B1 in fig. 6). The display style of the first and second selected marks (shape, color, frame, etc.) may be set according to actual needs and is not limited here. With this operation, the user can readily identify, from the current matched region selected by the cursor, the region that corresponds to it, which improves both the accuracy and the efficiency of image examination.
For Operation 2 above, continuing from fig. 4 and taking fig. 7 as an example, M1 is the region tool and M4 is the feature point tool; A1 contains the first matched feature point P1 and B1 contains the second matched feature point P1', both displayed as triangle marks. The user moves the mouse cursor to select P1 (the current first matched feature point). In response, the electronic device displays P1 with a third selected mark (the triangular shaded layer covering P1 in fig. 7), determines from the preset feature point correspondence that the second matched feature point corresponding to P1 is P1', and then displays P1' with a fourth selected mark (the triangular shaded layer covering P1' in fig. 7). The display style of the third and fourth selected marks (shape, color, frame, etc.) may be set according to actual needs and is not limited here. With this operation, the user can readily identify, from the current matched feature point selected by the cursor, the feature point that corresponds to it, which improves both the accuracy and the efficiency of image examination.
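Both operations reduce to looking up the selected element in the correspondence tables delivered with the comparison result; a minimal sketch (all identifiers illustrative):

```python
# correspondences reported by the comparison system (identifiers illustrative)
REGION_PAIRS = {"A1": "B1", "A2": "B2"}
POINT_PAIRS = {"P1": "P1'"}

def counterpart(selected_id, pairs):
    """On cursor selection in the first image, return the element to
    highlight in the second image, or None for an unmatched element."""
    return pairs.get(selected_id)
```

Selecting A1 would highlight B1, selecting P1 would highlight P1', and selecting the unmatched region A3 would highlight nothing.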
As a possible implementation, the method may further include the following operation modes:
(11) Obtaining a linear transformation matrix corresponding to the first image and the second image. The linear transformation matrix is determined from position information of a first target object, where the first target object includes at least one of: all mutually corresponding first and second matched regions; some of the mutually corresponding first and second matched regions; all mutually corresponding first and second matched feature points; and some of the mutually corresponding first and second matched feature points. The position information includes the coordinates of the first target object in the image coordinate system.
The linear transformation matrix maps mutually corresponding matched regions and/or matched feature points into the same coordinate system. Taking fig. 8 as an example, image A (i.e., the first image) is a fingerprint image captured at a scene; because such an image is usually a trace left inadvertently by the target person, it is typically incomplete or blurred. Image B (i.e., the second image) is a fingerprint image pre-stored in the fingerprint database, collected through offline fingerprint enrollment, and is therefore clear and complete. From the feature point information of image A, the comparison system retrieves a candidate image set for image A that includes image B, and through the graphical user interface of the examination system a worker can quickly verify whether image A truly matches image B. In the comparison system, the pixel coordinates of image A and image B are both expressed in their respective image coordinate systems, and the comparison system can usually determine the mutually matching feature points (i.e., the matched feature points) of images A and B, the matched regions, and the confidence of each matched region.

In fig. 8, the first image area lies to the left of the second image area; the top-left vertex of image A displayed in the first image area is point a1, and the top-left vertex of image B displayed in the second image area is point b1; each image coordinate system has its origin at that vertex, with the abscissa axis pointing horizontally to the right and the ordinate axis pointing vertically downward. All matched feature points can be selected from the mutually corresponding matched regions of images A and B to form a number of matched feature point pairs. Because the two feature points of each pair have coordinates in their respective image coordinate systems, a linear transformation matrix X_A-B for images A and B can be fitted by linear fitting from those coordinates. With this matrix, the coordinates of pixel a11 (x11, y11) in image A's coordinate system can be transformed to obtain the corresponding pixel b11 (x12, y12) in image B's coordinate system.
The linear transformation matrix minimizes the error between the coordinates, in the two image coordinate systems, of the mutually corresponding matched regions, or minimizes that error for the mutually corresponding matched feature points.
It is understood that the linear transformation matrix may be included in the comparison result, in which case it is determined from all matched regions and/or all matched feature points. The matrix may also be computed after the user selects certain matched regions and/or matched feature points (for example, the regions or points that the user judges most reliable); in that case the matrix minimizes the coordinate error over the selected matched regions, or over the selected matched feature points. Alternatively, as a possible implementation, the matrix may be determined from specific matched regions, namely those whose confidence exceeds a first set threshold (e.g., 75%), or from specific matched feature points, namely those whose confidence exceeds a second set threshold (e.g., 80%).
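One common way to realize this fitting step is an ordinary least-squares fit of an affine transform over the matched feature point pairs; the source does not specify the fitting algorithm, so the NumPy sketch below (with illustrative coordinates) is one possible implementation. It returns the matrix in 3x3 homogeneous form so the reverse mapping is simply its inverse:

```python
import numpy as np

def fit_linear_transform(points_a, points_b):
    """Least-squares affine fit mapping image-A coordinates to image-B
    coordinates, minimizing the coordinate error over the given matched
    feature point pairs; returns a 3x3 homogeneous matrix."""
    src = np.asarray(points_a, dtype=float)
    dst = np.asarray(points_b, dtype=float)
    design = np.hstack([src, np.ones((len(src), 1))])      # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)  # shape (3, 2)
    X = np.eye(3)
    X[:2, :] = params.T   # top rows hold the linear part and the translation
    return X

# four matched pairs related by a pure shift of (+5, -3) (illustrative)
pairs_a = [(10, 20), (30, 40), (50, 25), (12, 33)]
pairs_b = [(15, 17), (35, 37), (55, 22), (17, 30)]
X_AB = fit_linear_transform(pairs_a, pairs_b)
b11 = X_AB @ np.array([10, 20, 1])   # pixel a11 mapped into image B
```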
(12) In response to the alignment tool or the synchronization tool in the viewing tool area being in the selected state, the first image and the second image are aligned according to the linear transformation matrix.
Specifically, according to the linear transformation matrix, coordinate transformation may be applied to at least one of the second image, the second mark, and a selected mark located in the second image area, to obtain the aligned first and second images. The selected mark is the mark corresponding to a matched region and/or matched feature point selected by the cursor of the electronic device; the cursor includes a mouse cursor or a touch point cursor.
Continuing from fig. 8 and taking fig. 9 as an example, M2 is the synchronization tool and M3 is the alignment tool. M3 is set to the selected state (indicated by hatching in fig. 9) by the user's selection operation on M3 or by some other trigger. In response, the electronic device obtains the linear transformation matrix corresponding to images A and B and applies the coordinate transformation to image A, obtaining the aligned images A and B (see fig. 9). It will be appreciated that when the alignment tool is selected, image B may instead be transformed to obtain the aligned images, in which case the matrix used is the inverse of the matrix that transforms image A to image B.
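In homogeneous form, transforming image A's coordinates forward and transforming image B's coordinates with the inverse matrix are the same operation; a sketch under the assumption that pixel coordinates are transformed point-by-point (the matrix values are illustrative):

```python
import numpy as np

def transform_points(points, X):
    """Apply a 3x3 homogeneous matrix to a list of 2-D pixel coordinates;
    passing np.linalg.inv(X) performs the reverse mapping, as when the
    alignment transforms image B instead of image A."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return (pts @ X.T)[:, :2]

X = np.array([[1.0, 0.0, 5.0],    # illustrative alignment: a translation
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
forward = transform_points([(10, 20)], X)                # A -> B
recovered = transform_points(forward, np.linalg.inv(X))  # B -> A round trip
```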
In addition, when operating on the first and second images it is sometimes necessary to operate on both synchronously, for example enlarging the second image whenever the first is enlarged; enlarging each image separately would be cumbersome. The viewing tool area may therefore include a synchronization tool: when the synchronization tool is in the selected state, subsequent operations are applied synchronously to the first and second images.
To make the effect of the synchronous operation clearer, step (12) may specifically proceed as follows: in response to the synchronization tool in the viewing tool area being in the selected state, setting the alignment tool in the viewing tool area to the selected state, and aligning the first and second images according to the linear transformation matrix. Continuing from fig. 3 and taking fig. 10 as an example, M2 is the synchronization tool and M3 is the alignment tool. The user's selection operation on M2 (or another trigger) brings M2 into the selected state (indicated by hatching in fig. 10); in response, the electronic device sets the alignment tool M3 to the selected state (also indicated by hatching in fig. 10) and aligns the first and second images according to the linear transformation matrix. In this way, selecting the synchronization tool also triggers the alignment tool, so the two images are aligned before any synchronous operation takes place, and the effect of the synchronous operation is displayed on already-aligned images, which is easier to inspect visually.
With steps (11) and (12) above, the user can view the two images in alignment, making it easier to judge whether the matched regions and/or matched feature points truly match. Moreover, because alignment is triggered automatically when the alignment tool or the synchronization tool enters the selected state, user operations are simplified and the interactivity of the system is improved.
As a possible implementation, the first image area corresponds to a first canvas and the second image area to a second canvas, and the method may further include: while the synchronization tool in the viewing tool area is in the selected state, detecting the position of a cursor of the electronic device on the first canvas (the cursor including a mouse cursor or a touch point cursor); determining the corresponding position of the cursor on the second canvas from its position on the first canvas and the linear transformation matrix; and displaying a marker symbol corresponding to the cursor at that position on the second canvas.
Continuing from fig. 8 and taking fig. 11 as an example, in fig. 11 the top-left vertex of the first canvas of the first image area is point a and that of the second canvas of the second image area is point b; each canvas coordinate system has its origin at that vertex, with the horizontal direction as the abscissa axis and the vertical direction as the ordinate axis. The first image area displays image A and the second image area displays image B, the origins of whose image coordinate systems are points a1 and b1, respectively. From the coordinates of the matched regions and/or matched feature points, the linear transformation matrix X_A-B for images A and B can be determined. Since the first and second image areas each correspond to a canvas, a linear transformation matrix X_1 between the first canvas and image A can be determined from the vertex coordinates of image A and of the first canvas, and similarly a linear transformation matrix X_2 between image B and the second canvas. In this way, the marker symbol displayed at the corresponding position on the second canvas moves smoothly, rather than merely jumping between feature points.
With the synchronization tool M2 in the selected state (indicated by hatching in fig. 11), when the user moves the mouse cursor onto image A to operate on it, the electronic device detects the cursor's coordinates (x21, y21) in the first canvas coordinate system; via the linear transformation matrix X_1, the cursor's coordinates (x31, y31) in image A's coordinate system are determined; via the matrix X_A-B between images A and B, the corresponding coordinates (x32, y32) in image B's coordinate system are determined; and via the matrix X_2 between image B and the second canvas, the coordinates (x22, y22) in the second canvas coordinate system are determined. The position in the second image area corresponding to the cursor on image A is therefore (x22, y22), and the marker symbol corresponding to the cursor (the cross icon in fig. 11) is displayed at that position on the second canvas. Furthermore, images A and B may be aligned manually by selecting the alignment tool as in step (12), or automatically when the synchronization tool is selected; this alignment makes the first-canvas coordinates of the matched regions and/or matched feature points of image A coincide as closely as possible with the second-canvas coordinates of the corresponding matched regions and/or matched feature points of image B.

After the alignment operation, if alignment was achieved by linearly transforming image A, the linear transformation matrix X_1 between image A and the first canvas changes to X'_1 (if alignment was achieved by transforming image B, the matrix between image B and the second canvas changes instead); then, similarly to the process above, the position of the mouse cursor on the second canvas can be calculated from its position on the aligned first canvas and the matrix X_A-B.
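The chain X_1, then X_A-B, then X_2 can be sketched as a composition of homogeneous matrices; for simplicity this sketch assumes the canvas-image offsets are pure translations (the offsets and the translation helper are illustrative, not from the source):

```python
import numpy as np

def translation(tx, ty):
    """3x3 homogeneous translation matrix; stands in for X_1, X_A-B, X_2
    in this sketch, though in general each could be any linear transform."""
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def cursor_on_second_canvas(cursor_xy, X1, X_AB, X2):
    """Chain the three transforms described above: first canvas -> image A
    (X1), image A -> image B (X_AB), image B -> second canvas (X2)."""
    p = np.array([cursor_xy[0], cursor_xy[1], 1.0])
    return (X2 @ X_AB @ X1 @ p)[:2]

# illustrative offsets: each image sits 10 px inside its canvas
pos = cursor_on_second_canvas((21, 21),
                              X1=translation(-10, -10),  # canvas -> image A
                              X_AB=translation(5, -3),   # image A -> image B
                              X2=translation(10, 10))    # image B -> canvas
```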
On the basis of the above processing method, to further improve the accuracy of image examination, the method may further include the following operation: while the synchronization tool in the viewing tool area is in the selected state, in response to a cursor movement event on the first image, synchronously displaying on the second image a movement event of the marker symbol corresponding to the cursor, the movement track of the marker symbol matching that of the cursor. The cursor movement event includes a movement event of a mouse cursor or of a touch point cursor.
Specifically, taking fig. 11 as an example, with the synchronization tool M2 in the selected state, when the user hovers the mouse cursor over a position of image A (i.e., the first image), the electronic device responds by generating a cross-shaped icon (i.e., the marker symbol corresponding to the cursor) at the position of image B (i.e., the second image) corresponding to the hover position. When the cursor then moves over image A (i.e., a cursor movement event occurs on image A), the electronic device moves the cross-shaped icon synchronously over image B (i.e., synchronously displays the movement event of the marker symbol on image B), the icon's track matching that of the cursor. In this way, cursor movement on one image is mirrored on the other, so the user can quickly and accurately judge how similar the two images are near the corresponding positions, which reduces the operational complexity of the examination and further improves its efficiency and accuracy.
On the basis of the processing method for the finger and palm print image viewing interface, in order to further improve viewing accuracy, the method may further include the following operation mode: with the synchronization tool in the viewing tool area in the selected state, in response to a first operation event for the first image, a response action associated with the first operation event is executed synchronously on the first image and the second image according to the linear transformation matrix.
The first operation event may be a reset event, a translation event, a rotation event, a zoom event, an enhancement event, a brightness-contrast event, an add-graphics event (e.g., adding a feature point by drawing), a delete-graphics event, and the like; correspondingly, the response action associated with the first operation event may be reset, translation, rotation, zoom, enhancement, changing brightness and contrast, adding graphics, deleting graphics, and the like. The first operation event and its associated response action may be configured as needed according to the actual situation, which is not limited herein. For example, on the premise that the rotation angle and displacement required to align image A to image B are known, i.e., the linear transformation matrix M is known, a feature point drawing operation is performed on image A with the feature point drawing tool (drawing a feature point includes drawing its start position and selecting its direction; the direction is selected by clicking to draw the feature point and then dragging it while the click is held). When the drawn feature point is synchronized to image B, its start position and direction in image B are obtained according to the linear transformation matrix M. Conversely, the inverse matrix M' of the linear transformation matrix M may be computed in advance; on the premise that M' is known, the feature point drawing operation is performed on image B with the feature point drawing tool, and when the drawn feature point is synchronized to image A, its start position and direction in image A are obtained according to M'.
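The round trip through M and its inverse M' can be sketched as below. This is an illustrative assumption, not the patent's code: it models M as a rigid transform (rotation angle plus displacement, matching the "rotation angle and displacement" named above), and the names (`Feature`, `RigidTransform`, `applyToFeature`, `invert`) are invented for the sketch.

```typescript
// A drawn feature point: start position plus direction angle (radians).
type Feature = { x: number; y: number; angle: number };

// Rigid transform: rotate by theta, then translate by (tx, ty).
interface RigidTransform { cos: number; sin: number; tx: number; ty: number; theta: number }

function makeRigid(theta: number, tx: number, ty: number): RigidTransform {
  return { cos: Math.cos(theta), sin: Math.sin(theta), tx, ty, theta };
}

// Inverse M' of a rigid transform M: p = R^T (p' - t).
function invert(m: RigidTransform): RigidTransform {
  const c = m.cos, s = m.sin;
  return {
    cos: c,
    sin: -s,
    tx: -(c * m.tx + s * m.ty),
    ty: -(-s * m.tx + c * m.ty),
    theta: -m.theta,
  };
}

// Synchronize a feature point across images: the start position is mapped
// through the matrix, and the direction rotates with the image.
function applyToFeature(f: Feature, m: RigidTransform): Feature {
  return {
    x: m.cos * f.x - m.sin * f.y + m.tx,
    y: m.sin * f.x + m.cos * f.y + m.ty,
    angle: f.angle + m.theta,
  };
}
```

Drawing on image A would call `applyToFeature(f, M)` to place the point on image B; drawing on image B would call `applyToFeature(f, invert(M))`, matching the precomputed-M' case described above.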
The method is particularly suitable for the application scenario of finger and palm print viewing, in which a query finger and palm print image (Target) and a candidate finger and palm print image (Probe) are compared: one image can be rotated, zoomed and translated by operating the viewing tools so that the size and orientation of its pixel-point group match those of the other image, turning the possibly inconsistent positions, sizes and angles of the original finger and palm prints in the images into consistent ones. When the synchronization tool is selected, the viewing worker can apply the marking, brush, zoom, displacement and other tools to one image while the corresponding operations are executed synchronously on the other image, achieving a more intuitive display effect.
On the basis of the processing method for the finger and palm print image viewing interface, in order to further realize linkage between tools and thereby improve the efficiency of finger and palm print viewing, the method may further include the following operation mode: in response to a change in the state of a first tool in the viewing tool area, setting the state of a second tool according to a preconfigured association between the first tool and the second tool; the states of the first tool and the second tool include a selected state and a locked state.
Taking fig. 12 as an example, M2 is the synchronization tool (the first tool here), M3 is the alignment tool (the second tool here), and M2 and M3 are associated in advance. The associated event corresponding to the selected state of M2 is an event that sets M3 to the selected state, and the response action corresponding to that event is to set M3 to the selected state; when the user selects M2, the electronic device responds to the selection operation by setting both M2 and M3 to the selected state. Taking fig. 8 as an example, M5 is the add-graphics tool (the first tool here), M6 is the delete tool (the second tool here), and M5 and M6 are associated in advance. The associated event corresponding to the selected state of M5 is an event that sets M6 to the locked state, and the corresponding response action is to set M6 to the locked state; symmetrically, the associated event corresponding to the selected state of M6 sets M5 to the locked state. When the user selects M5, the electronic device sets M5 to the selected state and M6 to the locked state (that is, M6 can no longer be selected); when the user selects M6, the electronic device sets M6 to the selected state and M5 to the locked state (that is, M5 can no longer be selected).
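The preconfigured association table described above can be sketched as a small lookup. This is a hypothetical illustration: the map `associations`, the `ToolState` union and `selectTool` are invented names, and the M2/M3/M5/M6 entries mirror the examples in the text.

```typescript
type ToolState = "selected" | "unselected" | "locked";

// When the key tool becomes selected, set each associated tool to the
// given state (mirrors the fig. 12 and fig. 8 examples above).
const associations: Record<string, Array<[string, ToolState]>> = {
  M2: [["M3", "selected"]], // selecting the sync tool also selects the alignment tool
  M5: [["M6", "locked"]],   // add-graphics locks the delete tool
  M6: [["M5", "locked"]],   // and vice versa
};

function selectTool(states: Map<string, ToolState>, tool: string): void {
  if (states.get(tool) === "locked") return; // a locked tool cannot be selected
  states.set(tool, "selected");
  for (const [other, state] of associations[tool] ?? []) {
    states.set(other, state);
  }
}
```

Keeping the linkage in a data table rather than hard-coding it in each tool is what lets new tool pairs be associated without touching existing tool logic.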
The above interactive operation process can be realized by subscribing to the correspondence among interaction object, event, and response action. For example, an event-sending function is implemented in each subclass of the viewing tool: a sender publishes a specific event together with its associated information through the subscriber, and a receiver executes the corresponding response action according to the received event and information, so that the graphical user interface presents the action effect corresponding to that subclass. For a zoom tool, for instance, the sent event contains the event name "zoom" and the corresponding zoom factor, so that the corresponding image (or region of the image) is zoomed by that factor in the graphical user interface.
The subscription approach makes the interaction objects (e.g., image A, image B, viewing tool 1, viewing tool 2, …), the events, and the response actions independent of one another, the interaction being implemented through the mapping relationships among the three. Each interaction object may subscribe to an event with the subscriber (e.g., subscribe to the event corresponding to a change in image zoom size; when the event occurs, the interaction object receives a message) and perform a specified response action (e.g., program logic processing) based on the message that the event occurred. The subscriber only maintains the events themselves, such as zoom, displacement, synchronization function enabled, thumbnail function enabled, and so on; when an event is triggered, and how it is received and responded to, are controlled by the interaction objects themselves. In other words, the subscriber plays a forwarding role.
That is, multiple interaction objects may subscribe to the same event (which may be sent by other interaction objects), and the same event may correspond to one or more response actions. There is no need to directly maintain a mapping from interaction objects to events and response actions; it suffices to maintain which events each interaction object subscribes to and which response actions each event corresponds to, which is beneficial for maintenance.
The viewing tools in embodiments of the present invention may also subscribe, through the subscriber, to events of other tools, or of tools of the same type in other components. For example, in the synchronization operation, for two finger and palm print images A and B that need to be synchronized, the zoom tool corresponding to image B subscribes to the events of the zoom tool corresponding to image A, ensuring that its zoom factor stays consistent with that of the tool of image A. If the user clicks the synchronization tool and then clicks the zoom tool, images A and B are zoomed synchronously. For instance, the zoom tool of the interaction object image A detects that the mouse has moved onto the zoom tool and performs the response action: the zoom tool is selected, and monitoring of the mouse wheel is added to the listening list. When an event that the mouse wheel scrolls within the area of image A is detected, the response action is executed: "zoom by N times" is broadcast, where N is associated with the direction and number of turns of the mouse wheel. In this scenario, if the zoom tool of image A detects that the synchronization tool is selected, it executes the response action: the zoom tool of image B is selected, and "zoom by N times" is broadcast. Accordingly, image A listens for the broadcast information of the zoom tool of image A ("zoom by N times") and then performs the response action of zooming and displaying in its image area; image B listens for the broadcast information of the zoom tool of image B ("zoom by N times") and likewise zooms and displays in its image area.
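The forwarding-only subscriber described above is essentially a small event bus; a minimal sketch follows. The class and method names (`Subscriber`, `subscribe`, `broadcast`) are illustrative assumptions, not the patent's API.

```typescript
type Handler = (payload: unknown) => void;

// The subscriber maintains only events and forwards them; when an event
// fires and how it is handled is decided by each interaction object.
class Subscriber {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  // Forward the event to every interaction object subscribed to it;
  // one event may trigger several response actions.
  broadcast(event: string, payload: unknown): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}
```

With such a bus, the zoom tool of image B would subscribe to the zoom event of image A's tool and apply the same factor N, giving the synchronized zoom behaviour described in the text.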
Referring to FIG. 13, the inspection tool area may include the following tools:
(1) A system feature tool for displaying matched-feature related information and unmatched-feature related information in the query image and the candidate image.
(2) A digital identification tool for displaying, at the positions of the corresponding in-ratio feature points in the query image and the candidate image, the serial numbers of the system-specified in-ratio feature points; mutually matched feature points have the same serial number.
(3) A feature pattern tool for setting the style of the feature points in the query image and/or the candidate image.
(4) A feature drawing tool for adding feature points to the query image and/or the candidate image; the start position and direction of a drawn feature point are determined by clicking to draw the feature point and then dragging it while the click is held.
(5) A brush tool for drawing lines on the query image and/or the candidate image.
(6) A translation-and-zoom tool (also referred to as the displacement zoom tool) for translating and zooming the query image and/or the candidate image.
(7) A rotation tool for rotating the query image and/or the candidate image.
(8) A brightness-contrast tool for changing the brightness and the contrast of the query image and/or the candidate image.
(9) An enhancement tool for performing preliminary image enhancement on the query image and/or the candidate image.
(10) An advanced enhancement tool for performing deep image enhancement on the query image and/or the candidate image.
(11) An eraser tool for deleting content in the query image and/or the candidate image. The eraser tool and the brush tool are mutually exclusive, i.e., the eraser function and the brush function cannot be used at the same time; if the eraser tool is selected while the brush tool is in the selected state, the brush tool automatically becomes unselected.
(12) An alignment tool for aligning the query image with the candidate image.
(13) A synchronization tool; when it is selected, operations performed on the query image or the candidate image are executed synchronously on both images. For example, if the user clicks the synchronization tool and then the displacement zoom tool, an image zoom operation performed on the query image through the displacement zoom tool triggers the electronic device to zoom the query image and the candidate image synchronously.
(14) An in-ratio area tool; when it is selected, the in-ratio areas in the query image and the candidate image are displayed.
(15) A thumbnail tool; in the open state, it displays a thumbnail of the main picture (i.e., the query image or the candidate image).
In addition to the above tools, the viewing tool area may further include a no-obvious-match prompt bar for indicating the comparison result of the query image and the candidate image; if the query image and the candidate image have no obvious in-ratio area, this prompt bar in the viewing tool area is highlighted.
On the basis of the processing method for the finger and palm print image viewing interface, in order to further improve the universality of the method across different application software, the graphical user interface of the viewing system may be provided through a web browser of the electronic device, with the graphical user interface rendered via a virtual DOM tree structure.
Specifically, the first image area, the second image area, the first image, the second image, the feature points, and all the tools in the viewing tool area may be abstracted into a function set, and a virtual DOM tree structure corresponding to this function set may be configured: attributes shared by all tools (such as the selected state and whether synchronization is active) serve as a base class in the virtual DOM tree, while attributes specific to each tool (such as zoom ratio and rotation angle) serve as subclasses. Corresponding to the function set, a DOM tree structure for displaying the graphical user interface must also be configured, combining the virtual DOM tree of the logic layer with the DOM tree of the view layer. With this design, when a new tool is added, a subclass corresponding to the tool is simply added to the tree structure according to the tool's function, improving the maintainability and extensibility of the viewing tools. In addition, by rendering the graphical user interface from the virtual DOM tree, when the style of a tool changes, the subclass causing the change can be located in the virtual DOM tree and only the changed style needs to be re-rendered, rather than the entire graphical user interface, avoiding large amounts of frequent, repeated rendering. In practice, to further improve rendering efficiency, style changes occurring within a short window (e.g., 1 ms or 3 ms) can be treated as one rendering batch and rendered together.
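The batched-rendering idea at the end of the paragraph can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the class name `RenderBatcher`, the callback signature and the default window length are invented for the sketch.

```typescript
// Collects style changes arriving within a short window and flushes them
// as a single render pass over only the changed virtual-DOM subtrees.
class RenderBatcher {
  private pending = new Set<string>(); // ids of changed subtrees (duplicates collapse)
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private render: (changed: string[]) => void,
    private windowMs = 3, // e.g. a 1-3 ms batching window, as suggested above
  ) {}

  markChanged(nodeId: string): void {
    this.pending.add(nodeId);
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.windowMs);
    }
  }

  flush(): void {
    if (this.timer !== null) { clearTimeout(this.timer); this.timer = null; }
    const changed = [...this.pending];
    this.pending.clear();
    if (changed.length > 0) this.render(changed); // re-render only what changed
  }
}
```

Because `pending` is a set, many rapid style changes to the same tool collapse into one entry, so a burst of wheel-zoom events costs one render pass instead of one per event.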
Example three:
the embodiment provides a processing device for a finger-palm print image viewing interface. Referring to fig. 14, the processing apparatus specifically includes the following modules:
an image display module 1402, configured to display a first image in the first image area and a second image in the second image area.
A marker display module 1404 for displaying a first marker on a first in-ratio region of the first image and a second marker on a second in-ratio region of the second image in response to the in-ratio region tool in the viewing tool region being in the selected state; wherein the first and second in-ratio regions are at least one in-ratio region of the first and second images that correspond to each other; the first mark and the second mark correspond.
With the processing apparatus for the finger and palm print image viewing interface provided in this embodiment, when the in-ratio area tool of the viewing tool area is in the selected state, a first mark is displayed on the first in-ratio area of the first image and a second mark is displayed on the second in-ratio area of the second image. By displaying the images with their in-ratio areas marked when the in-ratio area tool is selected, the user can intuitively determine the corresponding areas in the two images, making it convenient to carefully check the feature point information at the in-ratio areas and thereby determine the degree of matching between the first image (e.g., a finger and palm print candidate image) and the second image (e.g., a finger and palm print query image). This operation mode improves viewing efficiency and the accuracy of viewing results.
The mark display module 1404 is further configured to: in response to the feature point tool in the viewing tool zone being in the selected state, displaying a first on-scale feature point on the first image as a first feature point marker and a second on-scale feature point on the second image as a second feature point marker; wherein the first ratiometric feature points and the second ratiometric feature points are feature points matched with each other in the first ratiometric region and the second ratiometric region.
The mark display module 1404 is further configured to: and in response to the feature point tool in the inspection tool area being in the selected state, displaying a first unmatched feature point on the first image as a third feature point mark and a second unmatched feature point on the second image as a fourth feature point mark.
There is at least one first in-ratio region and at least one second in-ratio region; based on this, the mark display module 1404 is further configured to: for any first in-ratio region, determine the current first mark corresponding to that region and the current second mark corresponding to its matching second in-ratio region according to the confidence corresponding to the first in-ratio region; display the current first mark on the first in-ratio region, and display the current second mark on the corresponding second in-ratio region. The current first mark and the current second mark are the same; the confidence corresponding to a first in-ratio region is the confidence of the mutual match between that region and its corresponding second in-ratio region; and first in-ratio regions with the same confidence have the same current first mark.
The mark display module 1404 is further configured to: in response to detecting that a cursor of the electronic equipment selects a current first comparison area in first comparison areas, displaying a first selected mark in the current first comparison area; determining a current second ratio middle area corresponding to the current first ratio middle area in a second ratio middle area according to the current first ratio middle area; displaying a second selected marker in the current second ratio; and/or, in response to detecting that the cursor selects a current first ratio middle feature point in the first ratio middle feature points, displaying the current first ratio middle feature point as a third selection mark, and determining a current second ratio middle feature point corresponding to the current first ratio middle feature point in the second ratio middle feature points according to the current first ratio middle feature point; displaying the feature points in the current second ratio corresponding to the feature points in the current first ratio as a fourth selected mark; the cursor comprises a mouse cursor or a touch point cursor.
The processing device of the finger-palm print image viewing interface may further include:
a linear transformation matrix obtaining module 1406, configured to obtain linear transformation matrices corresponding to the first image and the second image; the linear transformation matrix is determined according to the position information of the first target object; wherein the first target object comprises at least one of: all the mutually corresponding first comparison middle areas and second comparison middle areas, part of the mutually corresponding first comparison middle areas and second comparison middle areas, all the mutually corresponding first comparison middle characteristic points and second comparison middle characteristic points, and part of the mutually corresponding first comparison middle characteristic points and second comparison middle characteristic points; the position information comprises coordinates of the first target object corresponding to an image coordinate system;
an alignment module 1408 for aligning the first image and the second image according to the linear transformation matrix in response to an alignment tool or a synchronization tool in the viewing tool region being in a selected state.
The alignment module 1408 is further configured to: according to the linear transformation matrix, performing coordinate transformation on at least one of the second image, the second mark and the selected mark in the second image area to obtain the first image and the second image which are aligned; the selected mark is a mark corresponding to a region in the ratio selected by the cursor of the electronic equipment and/or a characteristic point in the ratio, and the cursor comprises a mouse cursor or a touch point cursor.
The first image area corresponds to a first canvas, the second image area corresponds to a second canvas, and based on this, the processing device of the finger print image viewing interface may further include:
a mark symbol display module 1410, configured to detect a position of a cursor of the electronic device on a first canvas when a synchronization tool in the viewing tool region is in a selected state; the cursor comprises a mouse cursor or a touch point cursor; and determining the corresponding position of the cursor on a second canvas according to the position of the cursor on the first canvas and the linear transformation matrix, and displaying a mark symbol corresponding to the cursor at the corresponding position of the second canvas.
A synchronization module 1412, configured to, in a selected state of a synchronization tool in the viewing tool region, in response to a first operation event for the first image, perform a response action associated with the first operation event on the first image and the second image synchronously according to the linear transformation matrix; the first operational event comprises at least one of: a reset event, a translation event, a rotation event, a brightness contrast event, an enhancement event, and a zoom event.
A linkage module 1414 for setting a state of a second tool in the viewing tool zone according to a preconfigured association of the first tool with the second tool in response to a change in state for the first tool; the states of the first tool and the second tool include: a selected state and a locked state.
The implementation principle and the technical effects of the processing apparatus for a finger-palm print image viewing interface provided in this embodiment are the same as those of the foregoing method embodiments, and for brevity, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the apparatus embodiments that are not mentioned.
Example four:
the embodiment provides another processing device for a finger-palm print image viewing interface. Referring to fig. 15, the processing apparatus 200 includes: the processor 150, the memory 151, the bus 152 and the communication interface 153, wherein the processor 150, the communication interface 153 and the memory 151 are connected through the bus 152; the processor 150 is used to execute executable modules, such as computer programs, stored in the memory 151.
The Memory 151 may include a Random Access Memory (RAM) and a non-volatile Memory, such as at least one disk Memory. The communication connection between this system's network element and at least one other network element is implemented through at least one communication interface 153 (which may be wired or wireless); the internet, a wide area network, a local area network, a metropolitan area network, and the like may be used.
Bus 152 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 15, but that does not indicate only one bus or one type of bus.
The memory 151 is used for storing a program, the processor 150 executes the program after receiving an execution instruction, and the method executed by the apparatus defined by the flow process disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 150, or implemented by the processor 150.
The processor 150 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by instructions in the form of hardware integrated logic circuits or software in the processor 150. The Processor 150 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 151, and the processor 150 reads the information in the memory 151 and performs the steps of the above method in combination with the hardware thereof.
Example five:
the embodiment provides a computer-readable storage medium storing computer-executable instructions which, when called and executed by a processor, cause the processor to implement the processing method for the finger and palm print image viewing interface described above. For specific implementation, refer to the foregoing method embodiments, which are not repeated here.
The computer program product of the processing method and apparatus for the finger and palm print image viewing interface and the electronic system provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementation, refer to the method embodiments, which are not repeated here.
Unless specifically stated otherwise, the relative steps, numerical expressions, and values of the components and steps set forth in these embodiments do not limit the scope of the present invention.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art can, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A processing method for a finger-palm print image viewing interface is characterized in that the finger-palm print image comprises a fingerprint image and/or a palm print image, a graphical user interface of a viewing system is provided through electronic equipment, the content displayed by the graphical user interface comprises a first image area, a second image area and a viewing tool area, and the method comprises the following steps:
displaying a first image in the first image area and a second image in the second image area;
in response to a contrast-in-area tool in the viewing tool region being in a selected state, displaying a first marker on a first contrast area of the first image and a second marker on a second contrast area of the second image; wherein the first and second in-ratio regions are at least one in-ratio region of the first and second images that correspond to each other; the first mark and the second mark correspond.
2. The method of claim 1, further comprising:
in response to the feature point tool in the viewing tool zone being in the selected state, displaying a first on-scale feature point on the first image as a first feature point marker and a second on-scale feature point on the second image as a second feature point marker; wherein the first ratiometric feature points and the second ratiometric feature points are feature points matched with each other in the first ratiometric region and the second ratiometric region.
3. The method of claim 2, further comprising:
in response to the feature point tool in the viewing tool field being in the selected state, displaying a first unmatched feature point on the first image as a third feature point marker and a second unmatched feature point on the second image as a fourth feature point marker.
4. The method of any one of claims 1-3, wherein there is at least one first matched region and at least one second matched region;
the step of displaying a first marker on the first matched region of the first image and a second marker on the second matched region of the second image comprises:
for any first matched region, determining, according to the confidence corresponding to the first matched region, a current first marker corresponding to the first matched region and a current second marker corresponding to the second matched region that corresponds to the first matched region; and displaying the current first marker on the first matched region and the current second marker on the corresponding second matched region;
wherein the current first marker and the current second marker are the same; the confidence corresponding to the first matched region is the confidence with which the first matched region and the corresponding second matched region match each other; and first matched regions with the same corresponding confidence have the same current first marker.
5. The method of claim 2, further comprising:
in response to detecting that a cursor of the electronic device selects a current first matched region among the first matched regions, displaying a first selected marker in the current first matched region; determining, according to the current first matched region, a current second matched region corresponding to the current first matched region among the second matched regions; and displaying a second selected marker in the current second matched region;
and/or,
in response to detecting that the cursor selects a current first matched feature point among the first matched feature points, displaying the current first matched feature point with a third selected marker; determining, according to the current first matched feature point, a current second matched feature point corresponding to the current first matched feature point among the second matched feature points; and displaying the current second matched feature point with a fourth selected marker;
wherein the cursor comprises a mouse cursor or a touch point cursor.
6. The method according to any one of claims 1-5, further comprising:
acquiring a linear transformation matrix corresponding to the first image and the second image, the linear transformation matrix being determined according to position information of a first target object; wherein the first target object comprises at least one of: all of the mutually corresponding first matched regions and second matched regions, part of the mutually corresponding first matched regions and second matched regions, all of the mutually corresponding first matched feature points and second matched feature points, and part of the mutually corresponding first matched feature points and second matched feature points; and the position information comprises coordinates of the first target object in the corresponding image coordinate system;
in response to an alignment tool or a synchronization tool in the viewing tool area being in a selected state, aligning the first image and the second image according to the linear transformation matrix.
7. The method of claim 6, wherein the step of aligning the first image and the second image according to the linear transformation matrix comprises:
performing, according to the linear transformation matrix, a coordinate transformation on at least one of the second image, the second marker and a selected marker in the second image area, to obtain the aligned first image and second image; wherein the selected marker is a marker corresponding to a matched region and/or a matched feature point selected by a cursor of the electronic device, and the cursor comprises a mouse cursor or a touch point cursor.
8. The method of claim 6 or 7, wherein the first image area corresponds to a first canvas and the second image area corresponds to a second canvas, the method further comprising:
detecting a position of a cursor of the electronic device on the first canvas when the synchronization tool in the viewing tool area is in the selected state, the cursor comprising a mouse cursor or a touch point cursor;
and determining, according to the position of the cursor on the first canvas and the linear transformation matrix, a corresponding position of the cursor on the second canvas, and displaying a marker symbol corresponding to the cursor at the corresponding position on the second canvas.
9. The method according to any one of claims 6-8, further comprising:
in response to a first operation event for the first image while the synchronization tool in the viewing tool area is in the selected state, synchronously executing, on the first image and the second image according to the linear transformation matrix, a response action associated with the first operation event; wherein the first operation event comprises at least one of: a reset event, a translation event, a rotation event, a brightness/contrast event, an enhancement event and a zoom event.
10. The method according to any one of claims 1-9, further comprising:
in response to a state change of a first tool in the viewing tool area, setting a state of a second tool according to a preconfigured association between the first tool and the second tool; wherein the states of the first tool and the second tool comprise a selected state and a locked state.
11. The method of any one of claims 1-10, wherein the graphical user interface of the viewing system is provided via a web browser of the electronic device, and the graphical user interface is rendered based on a virtual DOM tree structure.
12. A processing apparatus for a finger and palm print image viewing interface, characterized in that the finger and palm print image comprises a fingerprint image and/or a palm print image, a graphical user interface of a viewing system is provided through an electronic device, content displayed by the graphical user interface comprises a first image area, a second image area and a viewing tool area, and the apparatus comprises:
an image display module, configured to display a first image in the first image area and a second image in the second image area;
a matched-region marking module, configured to display, in response to a matched-region tool in the viewing tool area being in a selected state, a first marker on a first matched region of the first image and a second marker on a second matched region of the second image; wherein the first matched region and the second matched region are at least one pair of mutually corresponding matched regions of the first image and the second image; and the first marker corresponds to the second marker.
13. An electronic system, comprising: an image acquisition device, a processing device and a storage device;
wherein the image acquisition device is configured to acquire an image to be examined;
and the storage device stores a computer program which, when executed by the processing device, performs the processing method for a finger and palm print image viewing interface according to any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processing device, performs the steps of the processing method for a finger and palm print image viewing interface according to any one of claims 1 to 11.
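Claims 6-9 turn on a linear transformation matrix that is estimated from the coordinates of mutually corresponding matched regions or matched feature points and is then reused both to align the second image with the first and to map cursor positions from the first canvas to the second. The patent does not disclose a concrete estimation algorithm, so the sketch below is an illustrative assumption only: it fits a 2x3 affine matrix to point correspondences by least squares (solving the normal equations with plain Gaussian elimination) and applies the matrix to a point. The function names `fit_affine` and `apply_affine` are hypothetical.

```python
def _solve3(M, v):
    # Solve a 3x3 linear system M x = v by Gaussian elimination
    # with partial pivoting (no external libraries needed).
    n = 3
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x


def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points onto dst points.

    src/dst are lists of (x, y) pairs, e.g. coordinates of matched
    feature points in the first and second image coordinate systems.
    Needs at least 3 non-collinear correspondences.
    """
    # Normal equations share one 3x3 Gram matrix for both rows of the
    # affine matrix: x' = a*x + b*y + c and y' = d*x + e*y + f.
    sxx = sum(x * x for x, _ in src); sxy = sum(x * y for x, y in src)
    syy = sum(y * y for _, y in src); sx = sum(x for x, _ in src)
    sy = sum(y for _, y in src); n = float(len(src))
    G = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    hx = [sum(x * u for (x, y), (u, v) in zip(src, dst)),
          sum(y * u for (x, y), (u, v) in zip(src, dst)),
          sum(u for _, (u, v) in zip(src, dst))]
    hy = [sum(x * v for (x, y), (u, v) in zip(src, dst)),
          sum(y * v for (x, y), (u, v) in zip(src, dst)),
          sum(v for _, (u, v) in zip(src, dst))]
    return [_solve3(G, hx), _solve3(G, hy)]  # [[a, b, c], [d, e, f]]


def apply_affine(T, pt):
    """Map one point (e.g. a cursor position on the first canvas) through T."""
    x, y = pt
    (a, b, c), (d, e, f) = T
    return (a * x + b * y + c, d * x + e * y + f)
```

The same `apply_affine` call would serve claim 7 (transforming the second image's markers), claim 8 (placing the synchronized cursor symbol on the second canvas) and claim 9 (replaying pan/zoom actions). A production viewer would typically add outlier rejection such as RANSAC before the fit, since minutiae correspondences can contain mismatches.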
CN202210335801.8A 2022-03-31 2022-03-31 Processing method and device for finger and palm print image viewing interface and electronic system Pending CN114779975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210335801.8A CN114779975A (en) 2022-03-31 2022-03-31 Processing method and device for finger and palm print image viewing interface and electronic system

Publications (1)

Publication Number Publication Date
CN114779975A true CN114779975A (en) 2022-07-22

Family

ID=82428096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210335801.8A Pending CN114779975A (en) 2022-03-31 2022-03-31 Processing method and device for finger and palm print image viewing interface and electronic system

Country Status (1)

Country Link
CN (1) CN114779975A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010053245A1 (en) * 2000-06-15 2001-12-20 Kaoru Sakai Image alignment method, comparative inspection method, and comparative inspection device for comparative inspections
US20050097475A1 (en) * 2003-09-12 2005-05-05 Fuji Photo Film Co., Ltd. Image comparative display method, image comparative display apparatus, and computer-readable medium
US20150199113A1 (en) * 2014-01-16 2015-07-16 Xerox Corporation Electronic content visual comparison apparatus and method
CN107168614A (en) * 2012-03-06 2017-09-15 苹果公司 Application for checking image
CN112150464A (en) * 2020-10-23 2020-12-29 腾讯科技(深圳)有限公司 Image detection method and device, electronic equipment and storage medium
CN112287957A (en) * 2020-01-22 2021-01-29 京东安联财产保险有限公司 Target matching method and device
CN112446936A (en) * 2019-08-29 2021-03-05 北京京东尚科信息技术有限公司 Image processing method and device
CN113627413A (en) * 2021-08-12 2021-11-09 杭州海康威视数字技术股份有限公司 Data labeling method, image comparison method and device
US20210373752A1 (en) * 2019-11-28 2021-12-02 Boe Technology Group Co., Ltd. User interface system, electronic equipment and interaction method for picture recognition
CN113918509A (en) * 2020-07-10 2022-01-11 珠海格力电器股份有限公司 Document comparison display method and document comparison display equipment
CN114168052A (en) * 2021-12-09 2022-03-11 深圳市慧鲤科技有限公司 Multi-graph display method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
崔宝江; 马丁; 郝永乐; 王建新: "Binary file comparison technique based on basic block signatures and jump relations" (in Chinese), Journal of Tsinghua University (Science and Technology), no. 10, 15 October 2011 (2011-10-15), pages 1351 - 1356 *
肖人岳; 秦慕婷: "A fast text line detection algorithm for complex text images" (in Chinese), Science Technology and Engineering, no. 23, 1 December 2008 (2008-12-01), pages 6253 - 6257 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231214

Address after: Room 1508, 15th Floor, Quantum Core Building, No. 27 Zhichun Road, Haidian District, Beijing, 100083

Applicant after: Beijing jianmozi Technology Co.,Ltd.

Address before: Room 803, floor 8, No. 67, North Fourth Ring West Road, Haidian District, Beijing 100089

Applicant before: Beijing jianmozi Technology Co.,Ltd.

Applicant before: Moqi Technology (Beijing) Co.,Ltd.