CN117173741A - Two-image alignment method based on non-homologous binocular palm-scanning recognition - Google Patents

Two-image alignment method based on non-homologous binocular palm-scanning recognition

Info

Publication number
CN117173741A
CN117173741A (application CN202210593831.9A)
Authority
CN
China
Prior art keywords
image
palm
rgb
region roi
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210593831.9A
Other languages
Chinese (zh)
Inventor
黄龙祥
周响南
侯俊峰
杨煦
汪博
朱力
吕方璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd filed Critical Shenzhen Guangjian Technology Co Ltd
Priority to CN202210593831.9A priority Critical patent/CN117173741A/en
Publication of CN117173741A publication Critical patent/CN117173741A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

A two-image alignment method for non-homologous binocular palm-scanning recognition comprises the following steps. Step S1: perform palm detection on the first image I_ir and the second image I_rgb to obtain the palm region ROI_ir of the first image I_ir and the palm region ROI_rgb of the second image I_rgb. Step S2: extract the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, generating new images E_ir and E_rgb respectively. Step S3: register the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, and calculate the parallax b of the palm center. Step S4: align the first image I_ir and the second image I_rgb. Using the non-homologous binocular system formed by the first camera and the second camera, the application calculates the palm-center parallax from the edge parallax and aligns the first image and the second image, so that the computation required for palm-scanning recognition is greatly reduced, fewer supporting devices are needed, device space is saved more easily, compatibility is better, cost is reduced, and the popularization of palm-scanning applications is facilitated.

Description

Two-image alignment method based on non-homologous binocular palm-scanning recognition
Technical Field
The application relates to the field of palm-recognition cameras, and in particular to a two-image alignment method for non-homologous binocular palm-scanning recognition.
Background
Because the lines of a person's palm are relatively stable, palm-scanning recognition is a biometric technology that is more stable and safer than face recognition; a person's identity can be verified by scanning the palm, so palm-scanning recognition is used in fields such as security inspection, payment, and identity verification. It is a technology with broad application prospects.
In the prior art, some palm-scanning systems use a palm rest to fix the palm and thereby determine its position, but this is inconvenient in practical use. Other palm-scanning systems estimate the palm depth with a p-sensor (proximity sensor) and then align the first image and the second image at pixel level according to the p-sensor reading. In practice, the system usually needs to capture the complete palm at close range and still image the palm clearly at long range, to cover a useful working distance. This requires a sufficiently large camera FOV, and at long range the palm occupies only a small fraction of the image; if p-sensors are used to align the two images, a relatively large number of p-sensors is needed, which causes mutual interference, complicates the design, and greatly increases the cost.
In the prior art, binocular systems use homologous cameras, which can produce clear depth data of the target object, but the computation is heavy and a separate system-on-chip is needed to process the data, leading to a series of problems such as transmission delay and recognition lag.
Disclosure of Invention
Therefore, the application uses a non-homologous binocular system formed by a first camera and a second camera to process the first image and the second image directly, eliminating the p-sensor; the palm-center parallax is calculated from the edge parallax, and the first image and the second image are aligned. The computation required for palm-scanning recognition is thus greatly reduced, fewer supporting devices are needed, device space is saved more easily, compatibility is better, cost is reduced, and the popularization of palm-scanning applications is facilitated.
In a first aspect, the application provides a two-image alignment method for non-homologous binocular palm-scanning recognition, comprising the following steps:
Step S1: perform palm detection on the first image I_ir and the second image I_rgb respectively, obtaining the palm region ROI_ir of the first image I_ir and the palm region ROI_rgb of the second image I_rgb; the first image and the second image are non-homologous images;
Step S2: extract the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, generating new images E_ir and E_rgb respectively;
Step S3: register the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, and calculate the parallax b of the palm center;
Step S4: align the first image I_ir and the second image I_rgb.
Optionally, the method further comprises, before step S1:
Step S0: perform distortion correction and epipolar rectification on the original first image and the original second image respectively, obtaining the corrected first image I_ir and the corrected second image I_rgb; the original first image and the original second image are non-homologous images.
Optionally, step S2 comprises:
Step S21: segment the first image I_ir and the second image I_rgb according to the palm region ROI_ir and the palm region ROI_rgb respectively, and set non-palm areas to zero;
Step S22: on the images obtained in step S21, extract the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, generating new images E_ir and E_rgb respectively.
Optionally, step S3 comprises:
Step S31: register the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image;
Step S32: check the registration and eliminate mismatches;
Step S33: calculate the parallax d between the first image I_ir and the second image I_rgb;
Step S34: in image E_ir, calculate the position of the palm center from the palm edge;
Step S35: calculate the parallax b of the palm center.
Optionally, in step S35, the parallax b of the palm center is calculated from weighted parallaxes of the left and right palm edges and the finger roots.
Optionally, before any step is executed, the first image I_ir, the second image I_rgb, the new image E_ir or the new image E_rgb is compressed.
Optionally, the compression ratios applied to the first image I_ir and the second image I_rgb are different.
Optionally, the compression ratios applied to the new image E_ir and the new image E_rgb are different.
In a second aspect, the application provides a two-image alignment device for non-homologous binocular palm-scanning recognition, comprising:
a processor;
a memory module having stored therein executable instructions of the processor;
wherein the processor is configured to perform, via execution of the executable instructions, the steps of any of the two-image alignment methods for non-homologous binocular palm-scanning recognition described above.
In a third aspect, the application provides a computer-readable storage medium storing a program which, when executed, implements the steps of any of the two-image alignment methods for non-homologous binocular palm-scanning recognition described above.
Compared with the prior art, the application has the following beneficial effects:
the application adopts the first image and the second image as the original data, does not need devices such as a p-sensor and the like, reduces the input conditions for palm recognition, simplifies corresponding hardware equipment, has smaller volume, is easy to integrate and is beneficial to the miniaturization of the equipment.
The image adopted by the application can be shared with other palm recognition functions, so that one image can be used for a plurality of functions, thereby maximizing the function of one image and saving steps and equipment space. For example, when the first image is an infrared image, the method can be used for alignment and reconstruction and also can be used for living body detection; when the second image is a color image, the method can be used for alignment and reconstruction and can also be used for palm print recognition, and the method can be used for aligning two different types of images, so that the steps of palm detection, palm segmentation, edge extraction, edge matching, depth reconstruction and the like are more convenient, the processing steps of subsequent images are saved, and the processing efficiency is improved.
The application uses the first camera and the second camera to image the palm, thereby realizing the pixel level alignment of the two images when no depth image exists. It is proposed herein to use a first camera and a second camera to create binocular, to reconstruct three-dimensionally the edges of the palm according to the binocular imaging theory, and then to achieve pixel level alignment of the first image and the second image according to the depth information of the palm edges.
The application constructs the first camera and the second camera which are not homologous into a binocular system, but the problems of matching and the like cannot be effectively solved in the prior art due to the non-homology. The application realizes the matching and three-dimensional reconstruction of the non-homologous binocular system. For example, when the first camera is a near infrared camera and the second camera is a color camera, when the palm image is shot, the vein is darker because the vein has a certain absorption effect on infrared light; and when the second image simultaneously shoots the palm image, the palm surface texture is mainly imaged, so that the difference between the first image and the second image is larger, and the effective processing cannot be performed by utilizing the prior art.
The application does not need to process the whole palm, only needs to process the palm edge in the image, so that the processed data volume is greatly reduced, thereby reducing the requirements on chips and the like, and no separate system-on-chip processing is needed, and the application can be integrated in the chip of the camera, thereby ensuring that the structure of the camera is simpler, the cost is reduced, realizing the low-cost palm brushing application and being beneficial to commercial popularization.
According to the application, the parallax b of the palm center is directly calculated by utilizing the parallax of the palm edge, the operation steps are simple and efficient, and the calculation result can be quickly obtained, so that the calculation result can be quickly aligned, and the processing speed of the image is improved. Compared with other processing methods, the scheme of the application can improve the response speed of the brush palm, thereby being more suitable for places needing quick response.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort. Other features, objects and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments, given with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of the steps of a two-image alignment method for non-homologous binocular palm-scanning recognition according to an embodiment of the present application;
FIG. 2 is a palm detection diagram in an embodiment of the present application;
FIG. 3 is a diagram of a palm edge in an embodiment of the application;
FIG. 4 is a palm edge registration view in accordance with an embodiment of the present application;
FIG. 5 is a set of palm aligned images in an embodiment of the application;
FIG. 6 is a flowchart illustrating steps for acquiring a new image according to an embodiment of the present application;
fig. 7 is a flowchart showing a step of calculating a parallax b at the center of a palm according to an embodiment of the present application;
FIG. 8 is a palm edge disparity map according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a two-image alignment device for non-homologous binocular palm-scanning recognition according to an embodiment of the present application;
fig. 10 is a schematic diagram of a computer-readable storage medium according to an embodiment of the application.
Detailed Description
The present application will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit the application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the application is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
The embodiment of the application provides a two-image alignment method for non-homologous binocular palm-scanning recognition, which aims to solve the above problems in the prior art.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a two-image alignment method for non-homologous binocular palm-scanning recognition in an embodiment of the present application. As shown in fig. 1, in an embodiment of the present application, the method comprises the following steps:
Step S1: perform palm detection on the first image I_ir and the second image I_rgb respectively, obtaining the palm region ROI_ir of the first image I_ir and the palm region ROI_rgb of the second image I_rgb.
In this step, the first image and the second image are non-homologous images, i.e. images obtained with different technologies. A palm detection model is applied to the first image I_ir and the second image I_rgb respectively, which determines whether a palm is present. If a palm is present, the palm region ROI_ir of the first image I_ir and the palm region ROI_rgb of the second image I_rgb are obtained. As shown in fig. 2, the palm region ROI_ir and the palm region ROI_rgb are each the smallest rectangle containing the palm, i.e. the four sides of the rectangle are tangent to the palm edge. The palm detection model used in this step may be any model that implements the palm detection function, which is not limited in this embodiment.
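For illustration only, the following Python/OpenCV sketch shows how the minimum bounding rectangle ROI of step S1 could be obtained once a palm mask is available; the detect_palm_mask function is a hypothetical stand-in for whatever palm detection model is used, which the embodiment deliberately leaves open.

```python
import cv2
import numpy as np

def detect_palm_mask(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: return a binary palm mask (uint8, 0/255).
    Any palm detection / segmentation model may be substituted here."""
    raise NotImplementedError

def palm_roi(image: np.ndarray):
    """Return the smallest axis-aligned rectangle (x, y, w, h) containing the palm,
    or None if no palm is present."""
    mask = detect_palm_mask(image)
    if cv2.countNonZero(mask) == 0:
        return None
    return cv2.boundingRect(mask)      # four sides tangent to the palm edge

# roi_ir  = palm_roi(I_ir)    # ROI_ir of the first (e.g. near-infrared) image
# roi_rgb = palm_roi(I_rgb)   # ROI_rgb of the second (e.g. color) image
```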
Step S2: extract the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, generating new images E_ir and E_rgb respectively.
In this step, the palm edges are extracted with an edge extraction algorithm, generating new images E_ir and E_rgb respectively. Various algorithms may be used, such as algorithms based on an edge extraction operator (convolution template), including but not limited to Sobel, Prewitt, Roberts, LoG, etc. The edge extraction algorithm may also be an adaptive algorithm or a trained machine-learning model, which is not limited in this embodiment. As shown in fig. 3, the identified palm edge has a certain width; because both images capture the same palm, the edges are far more consistent between the two images than the palm interior, so better processing results can be obtained.
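For concreteness, here is a minimal Python/OpenCV sketch of one of the operator-based options named above (Sobel gradient magnitude, thresholded inside the ROI); the threshold value and the choice of Sobel rather than, say, LoG are illustrative assumptions, not requirements of the method.

```python
import cv2
import numpy as np

def extract_palm_edges(image: np.ndarray, roi, grad_thresh: float = 40.0) -> np.ndarray:
    """Return an edge image E (same height/width as `image`) that is non-zero
    only on palm edges inside the ROI. `roi` is (x, y, w, h) from palm detection."""
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]
    if patch.ndim == 3:                                # work on a single channel
        patch = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    mag = cv2.magnitude(gx, gy)
    E = np.zeros(image.shape[:2], dtype=np.uint8)
    E[y:y + h, x:x + w] = (mag > grad_thresh).astype(np.uint8) * 255
    return E

# E_ir  = extract_palm_edges(I_ir,  roi_ir)
# E_rgb = extract_palm_edges(I_rgb, roi_rgb)
```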
Step S3: register the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, and calculate the parallax b of the palm center.
In this step, the new images E_ir and E_rgb are registered using the palm edges, and the parallax d between the first image I_ir and the second image I_rgb is calculated. The palm center is the point at which the differences among the distances to the left edge, the right edge and the palm root are smallest. As shown in fig. 4, when registering the palm edges, all edge points may be matched, or only part of them. Registration with the 68 labeled points in fig. 4 strikes a balance between an efficient representation of the palm and the computation required. Preferably, the number of labeled points used in palm-edge registration is 39 to 136. With fewer than 39 points, the palm shape cannot be represented adequately; with more than 136 points, the computation becomes too large and consumes excessive resources, while the improvement in recognition is very limited.
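As a non-authoritative illustration of how a fixed number of labeled edge points (e.g. 68, within the preferred 39-136 range) might be sampled from an edge image E, the following sketch takes the longest contour of E and resamples it uniformly; the specific sampling strategy is an assumption, since the embodiment only specifies the number of points.

```python
import cv2
import numpy as np

def sample_edge_points(E: np.ndarray, n_points: int = 68) -> np.ndarray:
    """Sample `n_points` (x, y) labeled points uniformly along the longest palm contour in E."""
    contours, _ = cv2.findContours(E, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.float32)
    contour = max(contours, key=len).reshape(-1, 2)           # longest edge contour
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    return contour[idx].astype(np.float32)

# pts_ir  = sample_edge_points(E_ir)    # labeled points on the first edge image
# pts_rgb = sample_edge_points(E_rgb)   # labeled points on the second edge image
```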
Step S4: align the first image I_ir and the second image I_rgb.
In this step, the first image I_ir and the second image I_rgb are aligned using the data obtained in the previous steps. The alignment may use data from any one or more of those steps, e.g. alignment by the palm edge, by the palm center, or by both. When translating, one may translate only the first image I_ir, only the second image I_rgb, or both, as long as the palm centers coincide afterwards.
Fig. 5 shows a set of palm-aligned images. The image is translated within the epipolar plane of the binocular system, specifically the epipolar plane passing through the palm center, so that the palm centers of the first image I_ir and the second image I_rgb coincide, completing the alignment. As shown in fig. 5, the aligned images are highly consistent, so the palm can be recognized and processed more effectively.
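A minimal sketch of the alignment in step S4, assuming rectified images so that the correction reduces to a horizontal shift by the palm-center parallax b; the sign convention and the choice to shift only the second image are assumptions made for illustration.

```python
import cv2
import numpy as np

def align_by_palm_center(I_rgb: np.ndarray, b: float) -> np.ndarray:
    """Translate the second image horizontally by the palm-center parallax b
    so that its palm center coincides with that of the first image."""
    h, w = I_rgb.shape[:2]
    M = np.float32([[1, 0, -b],   # shift along the epipolar (horizontal) direction
                    [0, 1,  0]])
    return cv2.warpAffine(I_rgb, M, (w, h))

# I_rgb_aligned = align_by_palm_center(I_rgb, b)
```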
In some embodiments, before step S1, the method further includes:
Step S0: perform distortion correction and epipolar rectification on the original first image and the original second image respectively, obtaining the corrected first image I_ir and the corrected second image I_rgb.
In this step, the original first image and the original second image are non-homologous images, i.e. images obtained with different technologies. Distortion correction followed by epipolar rectification is applied to the original first image to obtain the corrected first image I_ir, and likewise to the original second image to obtain the corrected second image I_rgb. Because distortion arises from the lens imaging principle, the distortion correction of each original image must use the parameters of its own acquisition device. Epipolar rectification is a correction for binocular systems: by virtually rotating the two cameras and defining new image planes, corresponding epipolar lines become collinear and parallel to one coordinate axis of the image plane (usually the horizontal axis), creating a new stereo pair. After rectification, a pair of matching points lies in the same row of the two views, i.e. they differ only in horizontal (column) coordinate; this difference is called parallax. However, because the two images here are a first image and a second image whose captured content differs, the parallax cannot be solved directly with the prior art. When the first camera captures the palm image, the veins appear darker because they absorb some of the infrared light, while the second image mainly captures the surface texture of the palm, so the two images are hard to match directly. Moreover, palms differ considerably between people of different builds, which makes the difference between the infrared image and the second image even larger and effective matching difficult. This step corrects both images so that the data are more accurate and the subsequent matching is more precise. It should be noted that in this embodiment the original first image and the original second image are generally acquired by a calibrated binocular system in which one camera is the first camera and the other is the second camera. The first camera acquires the first image, the second camera acquires the second image, and the two cameras capture the target simultaneously. For example, the first camera is a near-infrared camera, the first image a near-infrared image, the second camera a color camera, and the second image a color image.
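As an illustration only, the following sketch uses the standard OpenCV calibration pipeline to perform the distortion correction and epipolar rectification of step S0; the calibration parameters (K_ir, D_ir, K_rgb, D_rgb, R, T) are assumed to come from a prior calibration of the non-homologous binocular rig, which the embodiment presupposes but does not detail.

```python
import cv2
import numpy as np

def rectify_pair(raw_ir, raw_rgb, K_ir, D_ir, K_rgb, D_rgb, R, T):
    """Undistort and epipolar-rectify a non-homologous image pair.
    K_*: 3x3 intrinsics, D_*: distortion coefficients, R/T: rotation and translation
    from the first camera to the second (from a prior stereo calibration)."""
    # Assumes both images have the same size after any pre-scaling (an assumption).
    size = (raw_ir.shape[1], raw_ir.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_ir, D_ir, K_rgb, D_rgb, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K_ir, D_ir, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K_rgb, D_rgb, R2, P2, size, cv2.CV_32FC1)
    I_ir = cv2.remap(raw_ir, map1x, map1y, cv2.INTER_LINEAR)
    I_rgb = cv2.remap(raw_rgb, map2x, map2y, cv2.INTER_LINEAR)
    # After this, matching points share the same row; the column offset is the parallax.
    return I_ir, I_rgb
```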
With this method, images with larger distortion can also be handled well, so an accurate result can be obtained even when the palm is closer to the capture device, i.e. when the FOV is larger, which extends the effective recognition distance range of the palm.
In some embodiments, before any step that processes an image is executed, the first image I_ir, the second image I_rgb, the new image E_ir or the new image E_rgb is compressed. The compression ratios used for the first image I_ir and the second image I_rgb are different, and the compression ratios used for the new image E_ir and the new image E_rgb are different. For example, before step S1 the first image I_ir and the second image I_rgb are compressed with ratios of 4 and 2 respectively, so that the image sizes become closer and processing accuracy is preserved. Preferably, the closer the palm is to the camera, the larger the compression ratio; the farther the palm is from the camera, the smaller the compression ratio. When the palm occupies 70% of the image area, the compression ratio of the first image is not less than 5; when it occupies 50%, the compression ratio of the first image is not less than 2. The compression ratio of the first image is at least 3 times that of the second image.
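A small illustrative sketch of such per-image compression, interpreting the compression ratio as a spatial down-scaling factor; that interpretation, consistent with the example ratios above, is an assumption rather than something the embodiment states.

```python
import cv2

def compress(image, ratio: float):
    """Downscale an image by the given compression ratio (ratio >= 1)."""
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w / ratio), int(h / ratio)), interpolation=cv2.INTER_AREA)

# Example from the embodiment: ratio 4 for the first image, ratio 2 for the second image,
# chosen so that the two images end up at comparable sizes.
# I_ir_small  = compress(I_ir, 4)
# I_rgb_small = compress(I_rgb, 2)
```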
Fig. 6 is a flowchart illustrating a process for acquiring a new image according to an embodiment of the present application. As shown in fig. 6, unlike the foregoing embodiment, a method for acquiring a new image according to an embodiment of the present application includes the steps of:
Step S21: segment the first image I_ir and the second image I_rgb according to the palm region ROI_ir and the palm region ROI_rgb respectively, and set non-palm areas to zero.
In this step, in addition to segmenting the image, the non-palm region is set to zero, which improves the result of the subsequent edge extraction algorithm and copes with more complex scenes. When decorations such as rings or hand ornaments are worn on the palm, the scene is more complex, so zeroing the non-palm area makes subsequent processing easier.
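For illustration, the zeroing described here might be realized as follows; the use of a binary palm mask (rather than just the rectangular ROI) is an assumption, since the text only requires that non-palm areas be set to zero.

```python
import numpy as np

def zero_non_palm(image: np.ndarray, palm_mask: np.ndarray) -> np.ndarray:
    """Keep pixel values only where `palm_mask` is non-zero; everything else becomes 0."""
    out = image.copy()
    out[palm_mask == 0] = 0
    return out
```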
Step S22: on the images obtained in step S21, extract the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, generating new images E_ir and E_rgb respectively.
In this step only the palm area needs to be processed and its edge extracted, so the processed region is smaller and processing is faster. The new images E_ir and E_rgb contain only the palm edge and can be reused by several subsequent steps, which greatly reduces the amount of processing and improves efficiency.
Fig. 7 is a flowchart illustrating a step of calculating the parallax b at the palm center according to an embodiment of the present application. As shown in fig. 7, unlike the foregoing embodiment, a method for calculating a parallax b of a palm center in the embodiment of the present application includes the steps of:
Step S31: register the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image.
In this step, the palm edges in the two images are first registered. Based on prior knowledge of ROI_ir and ROI_rgb, the edges are registered with conventional methods or with deep learning. Preferably, a feature-based method is used, with the labeled points of the foregoing embodiment serving as feature points. Registration establishes the correspondence between the two images.
Step S32: check the registration and eliminate mismatches.
In this step, mismatches are eliminated according to prior knowledge; for example, the parallax must be consistent and must respect topological relationships. Unlike matching schemes in the prior art, the two images matched in this embodiment are acquired by different technologies, so mismatches caused by their different characteristics occur, but these usually show a certain regularity. When eliminating mismatches, a filtering model adapted to the respective technical characteristics can be used.
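One simple instance of such a consistency filter is sketched below (median/MAD outlier rejection on the per-point disparities); this particular filter is an assumption, as the embodiment does not prescribe a specific filtering model.

```python
import numpy as np

def reject_mismatches(pts_ir: np.ndarray, pts_rgb: np.ndarray, k: float = 3.0):
    """Keep matched edge points whose disparity is consistent with the others.
    pts_ir, pts_rgb: (N, 2) arrays of matched (x, y) points on rectified images."""
    disp = pts_ir[:, 0] - pts_rgb[:, 0]        # rows are aligned, so disparity is the x offset
    med = np.median(disp)
    mad = np.median(np.abs(disp - med)) + 1e-6
    keep = np.abs(disp - med) < k * mad        # simple consistency check on the disparities
    return pts_ir[keep], pts_rgb[keep], disp[keep]
```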
Step S33: calculate the parallax d between the first image I_ir and the second image I_rgb.
In this step, the parallax is computed at the palm edges of the palm region ROI_ir and the palm region ROI_rgb, giving the parallax d between the first image I_ir and the second image I_rgb. As shown in fig. 8, the parallax d is computed from x1 and x2.
Step S34: in image E_ir, calculate the position of the palm center from the palm edge.
In this step, the position of the palm center is calculated using the edge in image E_ir together with the palm detection result. The palm center is defined here as the point at which the differences among the distances to the left edge, the right edge and the palm root are smallest. In this embodiment the palm center is not computed from all edges of the palm but only from part of them, which reduces the computation and allows a quick response. The palm center obtained this way deviates somewhat from the true palm center, but this does not affect the alignment of the two images.
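A literal, brute-force sketch of the palm-center criterion stated above (smallest difference among the distances to the left edge, the right edge and the palm root); the partition of the edge into left/right/root subsets and the grid of candidate interior points are assumed inputs, as the embodiment does not specify how they are produced.

```python
import numpy as np

def palm_center(candidates, left_edge, right_edge, root_edge):
    """Pick, among candidate interior points, the one whose distances to the left edge,
    the right edge and the palm root differ the least. All inputs are (N, 2) arrays."""
    def min_dist(p, pts):
        return np.min(np.linalg.norm(pts - p, axis=1))
    best, best_spread = None, np.inf
    for p in candidates:
        d = np.array([min_dist(p, left_edge), min_dist(p, right_edge), min_dist(p, root_edge)])
        spread = d.max() - d.min()          # how unequal the three distances are
        if spread < best_spread:
            best, best_spread = p, spread
    return best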
Step S35: calculate the parallax b of the palm center.
In this step, the parallax b of the palm center is calculated from the parallax of the palm edge. For example, it may be computed as a weighted combination of the parallaxes at the left and right palm edges and at the finger roots. Note that, unlike in the prior art, computing the parallax b of the palm center requires only the parallax at the palm edge, not the parallax inside the palm. Because the palm shape is irregular, the edge points lie at different distances from the center point, so different weights are used according to the distance between each point and the palm center. The weights are not fixed; they can be adjusted according to the position of the palm center on the palm and the palm posture. The parallax b obtained this way differs somewhat from the true parallax of the palm center, but this does not affect the palm alignment; on the contrary, it makes the subsequent alignment simpler and more accurate.
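A minimal sketch of one way such a weighted combination could look, assuming inverse-distance weights; the exact weighting scheme is left open by the embodiment, so this choice is an assumption made purely for illustration.

```python
import numpy as np

def palm_center_parallax(center, edge_pts, edge_disp):
    """Weighted average of edge-point parallaxes, weighting points that lie
    closer to the palm center more heavily (one possible weighting choice)."""
    dist = np.linalg.norm(edge_pts - center, axis=1)
    w = 1.0 / (dist + 1e-6)
    return float(np.sum(w * edge_disp) / np.sum(w))

# b = palm_center_parallax(center, np.vstack([left_edge, right_edge, root_edge]), disparities)
```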
By calculating the parallax b of the palm center directly from the parallax of the palm edge, the method obtains b without any computation on the palm interior. This avoids the difficulty of matching and computing inside the palm, guarantees data accuracy, reduces the amount of computation, and improves efficiency.
The embodiment of the application also provides a two-image alignment device for non-homologous binocular palm-scanning recognition, comprising a processor and a memory storing executable instructions of the processor, wherein the processor is configured to perform, via execution of the executable instructions, the steps of the two-image alignment method for non-homologous binocular palm-scanning recognition.
As described above, in this embodiment, the first image and the second image are acquired by using the depth camera of the binocular system formed by the first camera and the second camera, and the two different types of images are aligned by the method in the foregoing embodiment, so as to overcome the difference between the different types of images, and achieve the purpose of stable and rapid alignment.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects may be referred to herein as a "circuit," module "or" platform.
Fig. 9 is a schematic structural diagram of a two-image alignment device for non-homologous binocular palm-scanning recognition in an embodiment of the present application. An electronic device 600 according to this embodiment of the application is described below with reference to fig. 9. The electronic device 600 shown in fig. 9 is merely an example and should not impose any limitation on the functionality and scope of use of embodiments of the present application.
As shown in fig. 9, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including memory unit 620 and processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610, so that the processing unit 610 performs the steps according to the various exemplary embodiments of the present application described in the section of this specification on the two-image alignment method for non-homologous binocular palm-scanning recognition. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may be a local bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, electronic device 600 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in fig. 9, other hardware and/or software modules may be used in connection with electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
The embodiment of the application also provides a computer-readable storage medium storing a program which, when executed, implements the steps of the two-image alignment method for non-homologous binocular palm-scanning recognition. In some possible embodiments, aspects of the application may also be implemented as a program product comprising program code which, when run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the application described in the section of this specification on the two-image alignment method for non-homologous binocular palm-scanning recognition.
As described above, the program of the computer-readable storage medium of this embodiment, when executed, acquires the first image and the second image by using the depth camera of the binocular system constituted by the first camera and the second camera, aligns the two different types of images by the method of the foregoing embodiment, overcomes the difference of the different types of images, and achieves the purpose of stable and rapid alignment.
Fig. 10 is a schematic structural view of a computer-readable storage medium in an embodiment of the present application. Referring to fig. 10, a program product 800 for implementing the above-described method according to an embodiment of the present application is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In the embodiment of the application, the first image and the second image are acquired by using the depth camera of the binocular system formed by the first camera and the second camera, and the two different types of images are aligned by the method in the embodiment, so that the difference of the different types of images is overcome, and the aim of stably and rapidly aligning is fulfilled.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the application.

Claims (10)

1. A two-image alignment method for non-homologous binocular palm-scanning recognition, characterized by comprising the following steps:
step S1: performing palm detection on the first image I_ir and the second image I_rgb respectively to obtain the palm region ROI_ir of the first image I_ir and the palm region ROI_rgb of the second image I_rgb, wherein the first image and the second image are non-homologous images;
step S2: extracting the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, and generating new images E_ir and E_rgb respectively;
step S3: registering the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, and calculating the parallax b of the palm center;
step S4: aligning the first image I_ir and the second image I_rgb.
2. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 1, further comprising, before step S1:
step S0: performing distortion correction and epipolar rectification on the original first image and the original second image respectively to obtain the corrected first image I_ir and the corrected second image I_rgb, wherein the original first image and the original second image are non-homologous images.
3. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 1, wherein step S2 comprises:
step S21: segmenting the first image I_ir and the second image I_rgb according to the palm region ROI_ir and the palm region ROI_rgb respectively, and setting non-palm areas to zero;
step S22: on the images obtained in step S21, extracting the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image, and generating new images E_ir and E_rgb respectively.
4. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 1, wherein step S3 comprises:
step S31: registering the palm edges of the palm region ROI_ir of the first image and the palm region ROI_rgb of the second image;
step S32: checking the registration and eliminating mismatches;
step S33: calculating the parallax d between the first image I_ir and the second image I_rgb;
step S34: calculating the position of the palm center from the palm edge in image E_ir;
step S35: calculating the parallax b of the palm center.
5. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 1, wherein in step S35 the parallax b of the palm center is calculated from weighted parallaxes of the left and right palm edges and the finger roots.
6. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 1, wherein, before any of the steps is performed, the first image I_ir, the second image I_rgb, the new image E_ir or the new image E_rgb is compressed.
7. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 6, wherein the compression ratios used for the first image I_ir and the second image I_rgb are different.
8. The two-image alignment method for non-homologous binocular palm-scanning recognition according to claim 6, wherein the compression ratios used for the new image E_ir and the new image E_rgb are different.
9. A two-image alignment device for non-homologous binocular palm-scanning recognition, comprising:
a processor;
a memory module having stored therein executable instructions of the processor;
wherein the processor is configured to perform, via execution of the executable instructions, the steps of the two-image alignment method for non-homologous binocular palm-scanning recognition according to any one of claims 1 to 8.
10. A computer-readable storage medium storing a program, wherein the program, when executed, implements the steps of the two-image alignment method for non-homologous binocular palm-scanning recognition according to any one of claims 1 to 8.
CN202210593831.9A 2022-05-28 2022-05-28 Two-image alignment method based on non-homologous dual-purpose palm brushing recognition Pending CN117173741A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210593831.9A CN117173741A (en) 2022-05-28 2022-05-28 Two-image alignment method based on non-homologous dual-purpose palm brushing recognition

Publications (1)

Publication Number Publication Date
CN117173741A true CN117173741A (en) 2023-12-05

Family

ID=88928622

Country Status (1)

Country Link
CN (1) CN117173741A (en)

Similar Documents

Publication Publication Date Title
US11762475B2 (en) AR scenario-based gesture interaction method, storage medium, and communication terminal
CN108764048B (en) Face key point detection method and device
TWI569229B (en) Method for registering data
WO2018176938A1 (en) Method and device for extracting center of infrared light spot, and electronic device
CN103839277B (en) A kind of mobile augmented reality register method of outdoor largescale natural scene
Li et al. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle
CN108230384B (en) Image depth calculation method and device, storage medium and electronic equipment
CN108369741A (en) Method and system for registration data
CN110866977B (en) Augmented reality processing method, device, system, storage medium and electronic equipment
CN108010123B (en) Three-dimensional point cloud obtaining method capable of retaining topology information
US9460513B1 (en) Method for reconstructing a 3D scene as a 3D model using images acquired by 3D sensors and omnidirectional cameras
WO2021136386A1 (en) Data processing method, terminal, and server
CN104156998A (en) Implementation method and system based on fusion of virtual image contents and real scene
CN113850865A (en) Human body posture positioning method and system based on binocular vision and storage medium
Anbarjafari et al. 3D face reconstruction with region based best fit blending using mobile phone for virtual reality based social media
CN107330930B (en) Three-dimensional image depth information extraction method
CN113592015B (en) Method and device for positioning and training feature matching network
CN111079470A (en) Method and device for detecting living human face
CN111709269A (en) Human hand segmentation method and device based on two-dimensional joint information in depth image
CN114005169B (en) Face key point detection method and device, electronic equipment and storage medium
CN117173741A (en) Two-image alignment method based on non-homologous dual-purpose palm brushing recognition
CN117173740A (en) Two-image alignment method based on non-homologous dual-purpose palm brushing recognition
Wang et al. Im2fit: Fast 3d model fitting and anthropometrics using single consumer depth camera and synthetic data
CN117173736A (en) Two-image alignment method based on non-homologous dual-purpose palm brushing recognition
CN109472741A (en) Three-dimensional splicing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination