WO2021057582A1 - Image matching, 3d imaging and pose recognition method, device, and system - Google Patents


Publication number
WO2021057582A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
current
deformed
matching
image block
Application number
PCT/CN2020/115736
Other languages
French (fr)
Chinese (zh)
Inventor
陈森淼
Original Assignee
鲁班嫡系机器人(深圳)有限公司
Application filed by 鲁班嫡系机器人(深圳)有限公司
Publication of WO2021057582A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Definitions

  • The invention relates to the technical field of image matching, and in particular to methods, devices and systems for image matching, 3D imaging and gesture recognition.
  • The control unit matches a single image collected by a single image sensor against a reference image, or matches the two images collected by the left and right image sensors, and then performs 3D imaging or recognizes the posture of the target object according to the matching result.
  • the present invention provides a method, device and system for image matching, 3D imaging and gesture recognition.
  • the first aspect of the present invention provides an image matching method.
  • the image matching method includes:
  • Detect the deformed area and/or non-deformed area in the initial image; generate a matching result for the deformed area and/or a matching result for the non-deformed area.
  • the detecting the deformed area and/or the non-deformed area in the initial image includes:
  • the current image block is a current deformed image block or a current non-deformed image block; wherein the non-deformed area is the set of the current non-deformed image blocks, and the deformed area is the set of the current deformed image blocks.
  • the detecting the deformed area and/or the non-deformed area in the initial image includes:
  • if not, the current image block is the current deformed image block; if yes, the current image block is the current non-deformed image block; wherein the non-deformed area is the set of the current non-deformed image blocks, and the deformed area is the set of the current deformed image blocks.
  • the generating a matching result for the deformed region includes:
  • Matching is performed on the reference image to obtain a matching result for the deformed area.
  • the converting the deformed area into a reference image includes:
  • the current deformed image block is converted into the current reference image block.
  • the converting the deformed area into a reference image includes:
  • the deformed area is converted into a reference image based on Fourier transform.
  • the converting the deformed area into a reference image includes:
  • the current deformed image block is converted into a current reference image block.
  • the converting the deformed area into a reference image includes:
  • the generating a matching result for the deformed region includes:
  • Matching is performed on the image group to obtain a matching result.
  • the generating a matching result for the deformed region includes:
  • Matching is performed on the template image group to obtain a matching result.
  • the initial image is an image collected by an image sensor after an image is projected onto the target object; wherein the projected image has a periodic gradual-change pattern and is unique within a certain spatial range, or is simply unique within a certain spatial range.
  • the second aspect of the present invention provides a gesture recognition method, the gesture recognition method includes:
  • the posture information of the target object is generated according to the matching result.
  • a third aspect of the present invention provides a 3D imaging method, the 3D imaging method includes:
  • a fourth aspect of the present invention provides an image matching device, the image matching device includes:
  • Image acquisition module to acquire the initial image
  • the image matching module is used to detect the deformed area and/or non-deformed area in the initial image, generate a matching result for the deformed area and/or generate a matching result for the non-deformed area.
  • a fifth aspect of the present invention provides a gesture recognition device, and the gesture recognition device includes:
  • the posture generation module is used to generate posture information of the target object according to the matching result.
  • a sixth aspect of the present invention provides a 3D imaging device, the 3D imaging device includes:
  • the image generation module is used to generate a 3D image of the target object according to the matching result.
  • a seventh aspect of the present invention provides a system, which includes: an image projector, an image sensor, and a control unit;
  • the image projector is used to project an image to a target object
  • the image sensor is used to collect the initial image of the target object after the image is projected
  • the control unit is configured to implement the image matching method described in the first aspect; the gesture recognition method described in the second aspect; and/or the steps of the 3D imaging method described in the third aspect.
  • An eighth aspect of the present invention provides a computer device that includes a memory, a processor, and a computer program that is stored in the memory and can run on the processor.
  • When the processor executes the computer program, the steps of the image matching method described in the first aspect, the gesture recognition method described in the second aspect, and/or the 3D imaging method described in the third aspect are implemented.
  • a ninth aspect of the present invention provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps of the image matching method described in the first aspect, the gesture recognition method described in the second aspect, and/or the 3D imaging method described in the third aspect are implemented.
  • Fig. 1A is a first schematic structural diagram of an embodiment of a system provided by the present invention
  • Fig. 1B is a second schematic structural diagram of an embodiment of a system provided by the present invention
  • Fig. 1C is a third schematic structural diagram of an embodiment of the system provided by the present invention;
  • FIG. 2 is a schematic flowchart of an embodiment of an image matching method provided by the present invention.
  • FIG. 3 is a schematic diagram of the first process of an embodiment of detecting a deformed area and/or a non-deformed area in an initial image provided by the present invention
  • FIG. 4 is a schematic diagram of a second process of an embodiment of detecting a deformed area and/or a non-deformed area in an initial image provided by the present invention
  • FIG. 5 is a schematic diagram of the first process of an embodiment of generating a matching result for a deformed area provided by the present invention
  • FIG. 6 is a schematic diagram of a second process of an embodiment of generating a matching result for a deformed area provided by the present invention.
  • FIG. 7 is a schematic diagram of a third process of an embodiment of generating a matching result for a deformed area provided by the present invention.
  • FIG. 8 is a schematic diagram of the first process of an embodiment of detecting whether a current image block is a current deformed image block or a current non-deformed image block based on image features according to the present invention
  • FIG. 9 is a schematic diagram of a second process of an embodiment of detecting whether a current image block is a current deformed image block or a current non-deformed image block based on image features according to the present invention.
  • FIG. 10 is a schematic diagram of the first process of an embodiment of converting a deformed area into a reference image provided by the present invention
  • FIG. 11 is a schematic diagram of the second process of an embodiment of converting a deformed area into a reference image provided by the present invention.
  • FIG. 12 is a schematic diagram of the third process of an embodiment of converting a deformed area into a reference image provided by the present invention.
  • FIG. 13 is a schematic flowchart of an embodiment of 3D imaging provided by the present invention.
  • FIG. 14 is a schematic flowchart of an embodiment of a gesture recognition method provided by the present invention.
  • FIG. 15 is a first structural block diagram of an embodiment of a matching device provided by the present invention.
  • FIG. 16 is a second structural block diagram of an embodiment of a matching device provided by the present invention.
  • FIG. 17 is a third structural block diagram of an embodiment of a matching device provided by the present invention.
  • FIG. 18 is a fourth structural block diagram of an embodiment of a matching device provided by the present invention.
  • FIG. 19 is a fifth structural block diagram of an embodiment of a matching device provided by the present invention.
  • FIG. 20 is a sixth structural block diagram of an embodiment of a matching device provided by the present invention.
  • FIG. 21 is a structural block diagram of an embodiment of a gesture recognition device provided by the present invention.
  • FIG. 22 is a structural block diagram of an embodiment of a 3D imaging device provided by the present invention.
  • Figure 23 is a structural block diagram of an embodiment of a computer device provided by the present invention.
  • FIG. 24 is a schematic diagram of an embodiment of image conversion provided by the present invention.
  • FIG. 25A is a first schematic diagram of an embodiment of an initial image provided by the present invention
  • FIG. 25B is a second schematic diagram of an embodiment of an initial image provided by the present invention
  • FIG. 25C is a third schematic diagram of an embodiment of an initial image provided by the present invention
  • Figure 25D is a fourth schematic diagram of an embodiment of an initial image provided by the present invention
  • Fig. 26A is a schematic diagram of an embodiment of a current preprocessed image block provided by the present invention
  • Fig. 26B is a schematic diagram of an embodiment of a current preprocessed image block after conversion provided by the present invention
  • FIG. 27 is a schematic diagram of intermediate results produced by the method for detecting deformed regions provided by the present invention.
  • an embodiment of the present invention provides a system, which includes an image sensor 11, an image projector 12, and a control unit 13;
  • the image projector 12 is used to project an image to a target object.
  • the image projector 12 can be used to project, onto the target object, an image that has a periodic gradual-change pattern and is unique within a certain spatial range.
  • the periodic gradual change rule may include, but is not limited to, periodically changing sine wave (or cosine wave, etc.) fringes.
  • the sine wave does not need to strictly satisfy the mathematical definition of a sine; fringes that are merely close to sinusoidal also qualify,
  • for example imperfectly regular sine fringes, or linearly changing fringes (also called triangular waves).
  • being unique within a certain spatial range means that wherever an image frame of a certain size is moved within the image, the content inside the frame is always unique; for example, sine-wave stripes overlaid with a random pattern.
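As an informal sketch (not part of the disclosure), this uniqueness property can be illustrated in Python: periodic sine fringes are overlaid with a random component so that every position of a sliding frame sees distinct content. The pattern size, fringe period and 8x8 frame below are arbitrary assumptions.

```python
import math
import random

def make_pattern(width=64, height=16, period=8, seed=0):
    """Sine-wave fringes along X (the periodic part) modulated by a
    random component (the uniqueness part)."""
    rng = random.Random(seed)
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            fringe = 0.5 + 0.5 * math.sin(2 * math.pi * x / period)
            row.append(fringe + 0.2 * rng.random())
        img.append(row)
    return img

def window(img, x0, w=8, h=8):
    """Content of a w*h image frame placed at column x0."""
    return tuple(tuple(img[y][x0 + dx] for dx in range(w)) for y in range(h))

pattern = make_pattern()
views = [window(pattern, x0) for x0 in range(64 - 8)]
all_unique = len(set(views)) == len(views)  # every frame position sees distinct content
```

Without the random component the pure fringes repeat every `period` columns, so the frame content would not be unique; the overlay is what breaks the ambiguity.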
  • the image projector 12 can also be used to project a unique image within a certain spatial range, such as a scattered pattern, to the target object.
  • the image projector 12 can also project any other images that can meet the requirements of subsequent matching.
  • the image projector 12 may be a projector or a laser projector that can project the above-mentioned image that is currently or will be developed in the future.
  • the image sensor 11 is used to collect the initial image of the target object after the image has been projected onto it.
  • the image sensor 11 sends the collected initial image to the control unit 13, the memory or the server, and so on.
  • the image sensor 11 may be, but is not limited to, a camera, a video camera, a scanner, or other devices with related functions (mobile phones, computers, etc.), and so on.
  • the image sensor 11 may be one group (as shown in FIGS. 1A and 1B) or multiple groups (as shown in FIG. 1C) arranged around the target object; each group of image sensors may include one image sensor (as shown in FIG. 1A), two image sensors (as shown in FIGS. 1B and 1C), or more than two image sensors (illustration omitted).
  • the image sensor 11 can be fixedly arranged relative to the target object, or can be movably arranged relative to the target object, which is not limited in this specific embodiment.
  • the control unit 13 communicates with the image projector 12 and the image sensor 11 in a wired or wireless manner.
  • Wireless methods may include, but are not limited to: 3G/4G, WIFI, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection methods that are currently or will be developed in the future.
  • the control unit 13 may be part of an independent computing device, part of the image projector 12, and/or part of the image sensor 11. In this specific embodiment, for convenience of description, the control unit 13 is shown as a separate component.
  • the control unit 13 can be a Programmable Logic Controller (PLC), a Field-Programmable Gate Array (FPGA), a Personal Computer (PC), an Industrial Personal Computer (IPC), a smart phone, a tablet, a server, etc.
  • the control unit generates program instructions according to a preset program, combined with manually input information or parameters, or with data collected by an external image sensor.
  • the present invention provides an image matching method.
  • the image matching method includes the following method steps:
  • Step S110 acquiring an initial image
  • Step S120 detects the deformed area and/or the non-deformed area in the initial image, and generates a matching result for the deformed area and/or generates a matching result for the non-deformed area.
  • the deformed area and/or non-deformed area in the initial image are determined by detection; that is, the initial image may contain both deformed and non-deformed areas at the same time, consist entirely of deformed areas, or consist entirely of non-deformed areas.
  • the image matching method described in the following embodiments can be used to match the non-deformed area directly; the deformed area first needs to be processed (described in detail in the following embodiments) and is then matched according to the same image matching method.
  • Step S110 acquiring an initial image
  • one group of image sensors can be set, each group including 1 image sensor, so that there is a single initial image; reference images of the target object after projection, collected in advance by the image sensor at a series of different distances, are stored, and the initial image is subsequently matched against this series of reference images.
  • one group of image sensors may be provided, each group including two image sensors 11, so that each group of initial images includes two images: a first initial image and a second initial image.
  • N groups of image sensors are set as described above, where N is an integer greater than or equal to 2 and each group is composed of two image sensors 11; that is, each group of initial images includes a first initial image and a second initial image.
  • image splicing methods currently available or developed in the future can be used, such as: feature-based splicing and region-based splicing.
  • the region-based splicing method can start from the gray values of the image to be spliced: using least squares or other methods, the difference between the gray values of an area of the image to be spliced and an area of the same size in the reference image is calculated and compared to determine the similarity of the overlapping areas, so as to obtain the range and position of the overlapping area and then realize image splicing.
  • feature-based stitching derives image features from the pixels and then uses those features as the standard to search for and match the corresponding feature regions of the overlapping parts.
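A minimal sketch of the region-based criterion just described, under two simplifying assumptions: the overlap is purely horizontal, and a sum of absolute gray differences stands in for the least-squares measure mentioned above. The toy images are invented for illustration.

```python
def sad(a, b):
    """Sum of absolute gray-value differences between two equal-sized blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def find_overlap(ref, img, overlap_w):
    """Slide the left edge of `img` along `ref`; the offset whose overlap
    region differs least from `ref` locates the spliced overlap."""
    best_off, best_score = None, None
    for off in range(len(ref[0]) - overlap_w + 1):
        ref_strip = [row[off:off + overlap_w] for row in ref]
        img_strip = [row[:overlap_w] for row in img]
        score = sad(ref_strip, img_strip)
        if best_score is None or score < best_score:
            best_off, best_score = off, score
    return best_off

# toy data: the second image starts where column 6 of the first one is
ref = [[x + 10 * y for x in range(10)] for y in range(3)]
img = [row[6:] + [99, 98] for row in ref]
offset = find_overlap(ref, img, overlap_w=4)
```

Once the offset of the overlapping region is known, the two images can be pasted together along that column.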
  • each group of initial images varies according to the number of image sensors.
  • the number of image sensors can be any number greater than or equal to 1. Since imaging with more than 2 image sensors is just a combination of pairwise imaging, it repeats the corresponding imaging method under two image sensors; therefore, this specific embodiment only describes a single group of image sensors containing either one or two image sensors as an example.
  • Step S120 detects the deformed area and/or the non-deformed area in the initial image, and generates a matching result for the non-deformed area and/or generates a matching result for the deformed area, respectively.
  • the deformed area refers to some areas in the initial image that are inconsistent with the reference image.
  • the deformed area arises because the target object (in the one-image-sensor case) or part of the surface of the target object (in the two-image-sensor case) is deflected, bent, etc. relative to the reference surface (with two image sensors, the reference surface can be, for example, a surface parallel to the image sensor chip; with one image sensor, it can be, for example, the surface at which the reference image was collected).
  • the inconsistency may be manifested as: the deformation area is deflected, stretched, compressed, and/or bent relative to the reference image.
  • for example, the sine wave in the deformed area is deflected to the left (as shown in FIG. 25A) or to the right (as shown in FIG. 25B), or the sine wave in the deformed area is stretched (as shown in FIG. 25C), compressed (as shown in FIG. 25D), bent (illustration omitted), etc.
  • the non-deformed area refers to the area in the initial image that is consistent with the reference image.
  • the image matching method may include but is not limited to the following method steps:
  • the matching is performed between the two initial images.
  • the initial image includes a first initial image and a second initial image.
  • a fixed-size image frame is usually set with the matched pixel as the center, and the image blocks inside the image frame are matched.
  • An n*n image block in the first initial image is compared with N image blocks of the same size in the second initial image along the epipolar direction of the two image sensors (N is the disparity search range between the two images).
  • the method of comparison is to calculate the absolute value of the brightness difference of the corresponding pixels of the two image blocks, and then sum these absolute values to obtain a matching score.
  • N matching scores are thus obtained; the minimum of these N scores is found, and the pixel in the second initial image corresponding to this minimum corresponds to the matched pixel in the first initial image; proceeding in this way yields the matching result of multiple mutually corresponding pixels in the two images.
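The block-matching steps above can be sketched as follows. The 3x3 block size, the search range of 5 and the synthetic stereo pair are illustrative assumptions; the score is the sum of absolute brightness differences described in the text.

```python
def sad(a, b):
    """Sum of absolute brightness differences between two equal-sized blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def block(img, x0, y0, n):
    return [row[x0:x0 + n] for row in img[y0:y0 + n]]

def match_along_epipolar(left, right, x0, y0, n=3, search=5):
    """Compare one n*n block of the left image with `search` same-sized blocks
    of the right image along the (horizontal) epipolar line; the disparity
    with the minimum SAD score wins."""
    ref = block(left, x0, y0, n)
    scores = [sad(ref, block(right, x0 + d, y0, n)) for d in range(search)]
    best = min(range(search), key=lambda d: scores[d])
    return best, scores

# synthetic pair: the right view shows the left view shifted by 2 px, so the
# true disparity of every interior block is 2
left = [[(7 * x + 13 * y) % 50 for x in range(12)] for y in range(6)]
right = [[left[y][max(x - 2, 0)] for x in range(12)] for y in range(6)]
disparity, scores = match_along_epipolar(left, right, x0=1, y0=1)
```

Repeating this for every pixel of the first initial image gives the dense correspondence (the matching result) between the two views.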
  • one image sensor is taken as an example.
  • the above embodiment matches the first initial image against the second initial image; in the embodiment with one image sensor, the initial image is matched against the reference image instead, and the specific matching method is the same as in the above embodiment, so it is not repeated here.
  • step S120 may include the following method steps:
  • Step S121 acquiring the current image block in the initial image
  • the current image block is the image block for current matching.
  • Step S123 based on the image features of the current image block, detect whether the current image block is a current deformed image block or a current non-deformed image block;
  • the initial image includes a deformed area and/or a non-deformed area.
  • the deformed area refers to the current set of deformed image blocks;
  • the non-deformed area refers to the current set of non-deformed image blocks.
  • the deformed area can be detected by judging whether these characteristic parts are deformed, and the deformed area can then be converted into the image that would be generated at the corresponding reference surface.
  • the image includes the characteristic part of the fringe with a sinusoidal gradual change.
  • when parallel to the reference plane, the sinusoidal fringes repeat over multiple periods in the horizontal X direction and are aligned in the vertical Y direction, i.e. they are in the upright state.
  • in the deformed area, the period of the stripes in the horizontal X direction may be stretched or compressed, and/or the stripes may be inclined in the vertical Y direction; whether the current image block is deformed can be determined from the changes of these characteristic parts in the image.
  • step S120 may include the following method steps:
  • Step S122 obtains the current image block in the initial image
  • Step S124 match the current image block and determine whether a matching result can be generated;
  • Step S126 if not, use the current image block as the current deformed image block;
  • Step S128 if yes, use the current image block as the current non-deformed image block.
  • an image frame of a preset size is moved one unit at a time along the initial image (e.g., the first initial image) to obtain successive current image blocks.
  • step S120 can be implemented through but not limited to the following method steps:
  • step S120 may include the following method steps:
  • Step S221 Obtain a deformed area
  • the deformed area may be a collection of multiple current deformed image blocks, so the current deformed image block can be obtained, and the current deformed image block can be converted into the current reference image block in the following step S222.
  • Step S222 Convert the deformed area into a reference image
  • Step S223 performs matching on the reference image to obtain a matching result for the deformed area.
  • the deformed area is replaced with a reference image, and the reference image is matched.
  • the matching method please refer to the relevant description in the image matching method in the above embodiment, which will not be repeated here.
  • step S120 may include the following method steps:
  • Step S321 obtains a template image group in which the pre-generated deformation area has a unit deformation in sequence
  • the template image group may be a pre-generated image template group obtained after the current deformed image block sequentially undergoes at least one unit deformation amount.
  • a current template image group is generated in advance corresponding to each current image block.
  • each template image corresponds to the current image block under one unit deformation.
  • the template image can be generated in advance according to the deflection unit, combined with the spatial position conversion information of the image projector and the image sensor; or the corresponding template image can be generated according to the fitting-function method in the embodiment shown in FIG. 11.
  • Step S322 performs matching on the template image group to obtain a matching result.
  • for the current image block in the first initial image, the template images are obtained in sequence (for example: template images deflected by -60, -40, -20, 0, 20, 40 and 60); the second initial image contains multiple image blocks to be matched that lie on the same epipolar line.
  • the sum of the absolute values of the gray-scale differences between each template image and each image block to be matched in the second initial image is calculated; the image block to be matched of the second initial image with the smallest sum and the current image block of the first initial image correspond to each other.
  • the pixels corresponding to the two matching image blocks are matched pixels, and the matching result is obtained.
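One possible reading of the template-group matching, at toy scale: a simple per-row shift stands in for the real unit deflection (in the patent the templates would come from projector/sensor geometry or from the fitting function), and the globally smallest gray-difference sum picks both the matched block and the deformation. All sizes and the distractor block are invented for illustration.

```python
def sad(a, b):
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def shear_block(blk, units):
    """Toy 'unit deflection': rotate each row left by units*y pixels."""
    out = []
    for y, row in enumerate(blk):
        s = (units * y) % len(row)
        out.append(row[s:] + row[:s])
    return out

def match_templates(templates, candidates):
    """Score every (template, candidate) pair with SAD and return the best:
    the candidate gives the match, the template index gives the deformation."""
    best = None
    for ti, t in enumerate(templates):
        for ci, c in enumerate(candidates):
            s = sad(t, c)
            if best is None or s < best[0]:
                best = (s, ti, ci)
    return best

base = [[(3 * x + 5 * y) % 17 for x in range(6)] for y in range(6)]
units = [-2, -1, 0, 1, 2]                       # pre-generated unit deformations
templates = [shear_block(base, u) for u in units]
candidates = [[[(x * y + 7) % 13 for x in range(6)] for y in range(6)],  # distractor
              shear_block(base, 1)]             # the block as seen in the scene
score, ti, ci = match_templates(templates, candidates)
```

A perfect match (score 0) is found between the template deflected by one unit and the second candidate, which mirrors how the smallest gray-difference sum selects the corresponding block on the epipolar line.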
  • the initial image and the reference image can be matched according to the method described in the above embodiment.
  • the reference image can be regarded as the second initial image to generate the matching result.
  • the template image group can be generated in advance according to the deflection unit, combined with the spatial position conversion information of the image projector and the image sensor; or the corresponding template image group can be generated according to the fitting-function method in the embodiment shown in FIG. 11.
  • step S120 may include the following method steps:
  • Step S421 sequentially generates image groups after unit deformation of the deformed area occurs
  • Step S422 performs matching on the image group to obtain a matching result.
  • the deformed image block group, formed after the current image block in the first initial image is deflected by unit angles in sequence, is matched against the multiple image blocks to be matched that lie on the same epipolar line in the second initial image.
  • the sum of the absolute values of the gray-scale differences between each deformed image block and each image block to be matched in the second initial image is calculated; the image block to be matched of the second initial image with the smallest sum and the current image block of the first initial image are matched image blocks, so the pixels corresponding to the two matched image blocks are matched pixels, thereby obtaining the matching result.
  • the image group after unit deformation can be generated according to the deflection unit, combined with the spatial position conversion information of the image projector and the image sensor; it can also be generated according to the fitting-function method in the embodiment shown in FIG. 9 below.
  • step S123, detecting whether the current image block is a current deformed image block or a current non-deformed image block based on the image features of the current image block, can be implemented by, but is not limited to, the following method steps:
  • step S123 may include the following method steps:
  • the image frame can be moved on the initial image along the epipolar direction of the image sensor; each time it moves one unit, the image on the initial image covered by the image frame is the current image block.
  • when the current deformed image block needs to be further converted into a reference image based on the detection result, blank parts may appear at the edges of the converted block (as shown in FIG. 26B); in order to guarantee the integrity of the converted content, a current image block larger than the actual matching image block is usually obtained (as shown in FIG. 26A), and the converted image is then cropped to obtain the current deformed image block fully converted into the reference image (as shown in FIG. 26C).
  • the peak or trough of each sine wave should be on the same reference line.
  • the pixel point at a peak or valley is an extreme-value pixel point, for example one whose gray value is the highest or the lowest.
  • S1235 fits the extreme-value pixel points of each row to obtain the current fitted line of the current image block
  • S1237 detects the current image deformation amount of the current fitted line relative to the reference line
  • a threshold can be preset: if the deformation amount is less than or equal to the threshold (in one embodiment, the threshold may also be the theoretical value of zero), the current image block is considered to be the current non-deformed image block; if it is greater than the threshold, the current image block is determined to be the current deformed image block, and the deformation of the fitted line relative to the reference line is the current image deformation amount of the current image block.
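A rough sketch of the peak-fitting detection in steps S1231 to S1237, under two stated assumptions: the brightest pixel of each row marks the fringe peak, and an ordinary least-squares line is fitted through those peaks. The fringe images below are synthetic.

```python
import math

def fit_line(points):
    """Least-squares fit x = a*y + b through the per-row extreme points;
    `a` is the tilt of the fitted line against the upright reference line."""
    n = len(points)
    my = sum(y for y, _ in points) / n
    mx = sum(x for _, x in points) / n
    den = sum((y - my) ** 2 for y, _ in points)
    a = sum((y - my) * (x - mx) for y, x in points) / den
    return a, mx - a * my

def detect_deformation(blk, threshold=0.05):
    """Take the brightest pixel of each row (the fringe peak), fit a line
    through those points and compare its tilt with the reference (tilt 0)."""
    peaks = [(y, max(range(len(row)), key=lambda x: row[x]))
             for y, row in enumerate(blk)]
    tilt, _ = fit_line(peaks)
    return abs(tilt) > threshold, tilt

# upright fringes: the peak column does not drift -> non-deformed
upright = [[math.cos(2 * math.pi * x / 8) for x in range(16)] for y in range(8)]
# tilted fringes: the peak drifts one column per row -> deformed
tilted = [[math.cos(2 * math.pi * (x - y) / 8) for x in range(16)] for y in range(8)]
upright_deformed, _ = detect_deformation(upright)
tilted_deformed, tilted_tilt = detect_deformation(tilted)
```

The recovered tilt plays the role of the "current image deformation amount": zero for the upright block, one column per row for the tilted one.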
  • step S123 may include the following method steps:
  • the current image block is fitted to obtain a fitting function.
  • A is the amplitude of the sine function;
  • 360/B is the period of the function's brightness change along X (unit: pixels);
  • C represents the inclination of the fringes in the Y direction; the inclination angle is arccot(C/B);
  • D is the translation amount of the function in the lateral X direction;
  • E is the translation amount of the function in the Z direction; where A, D and E are fixed values.
  • for a non-deformed image block, C is 0;
  • otherwise, arccot(C/B) is the image deformation amount of the current deformed image block.
  • it can then be determined whether the image deformation amount is greater than or equal to a certain threshold; if so, the current image block is a deformed image block; if not, the current image block is a non-deformed image block.
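The brightness model A*sin(B*x + C*y + D) + E can be explored numerically. The parameter values below and the peak-drift estimator for C are illustrative assumptions, not the patent's fitting procedure; the point is only that C shows up as a horizontal drift of the fringe peak from row to row, and that arccot(C/B) gives the inclination angle.

```python
import math

def fringe(x, y, A=1.0, B=45.0, C=0.0, D=10.0, E=0.5):
    """Brightness model from the description: A*sin(B*x + C*y + D) + E,
    with B, C, D in degrees; the period along X is 360/B pixels."""
    return A * math.sin(math.radians(B * x + C * y + D)) + E

def peak_x(y, C, B=45.0):
    """Continuous column of the brightest point of row y (fine grid search)."""
    xs = [i / 100 for i in range(800)]          # one period is 360/45 = 8 px
    return max(xs, key=lambda x: fringe(x, y, B=B, C=C))

def estimate_C(C_true, B=45.0):
    """The peak drifts by -C/B pixels per row, so C can be read off the drift."""
    drift = peak_x(1, C_true, B=B) - peak_x(0, C_true, B=B)
    return -drift * B

def inclination_deg(C, B=45.0):
    """Inclination angle arccot(C/B) in degrees; 90 means upright fringes."""
    return math.degrees(math.atan2(B, C))

c_flat = estimate_C(0.0)    # non-deformed block: C comes out near 0
c_tilt = estimate_C(20.0)   # deformed block: C comes out near 20
```

With C recovered, comparing arccot(C/B) (or C itself) against a threshold reproduces the deformed/non-deformed decision of the surrounding text.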
  • step S222, converting the deformed area into a reference image, can be achieved through, but is not limited to, the following method steps:
  • step S222 may include the following method steps:
  • Step S2221 obtains the current deformed image block
  • Step S2222 extracts the extreme-value pixels in each row of the current deformed image block
  • Step S2223 fits the extreme-value pixels in each row to obtain the current fitted line of the current image block
  • Step S2224 detects the current image deformation of the current fitted line relative to the reference line
  • for step S2221 to step S2224, refer to the description of step S1231 to step S1237 in step S123 in the above embodiment.
  • if the deformation has already been obtained during detection, steps S2221 to S2224 here can be omitted.
  • Step S2225 converts the current deformed image block into the current reference image block based on the current image deformation.
  • alternatively, step S222 may include the following method steps:
  • Step S2231 obtains the current image block
  • Step S2232 performs fitting on the current image block to obtain the current fitting function
  • for step S2231 to step S2232, see the description of step S1232 to step S1234 in the above embodiment.
  • if the fitting function has already been obtained during detection, steps S2231 to S2232 here can be omitted.
  • Step S2233 converts the current deformed image block into the current reference image block based on the current fitting function
  • the deformed area can be converted based on the fitting function, that is, the period of the deformed area is set to the reference period and the C of the image is set to 0, so as to obtain the converted deformed area.
  • suppose the size of the image block to be acquired is (width W: 33, height H: 33). Since there will be blanks at the edges of the converted image (as shown in Figure 26B), in order to ensure the integrity of the image block content it is usually necessary to intercept an initial image block larger than the image block size; for example, as shown in Figure 26A, a wider area (W: 63, H: 33) is selected as the initial image block, and a rectangular frame at the center of the initial image block corresponds to the size of the image block (W: 33, H: 33). In addition, the initial image block can also be any other size, as long as it is larger than the preset image block and the content in the converted image block is guaranteed to be complete.
  • the initial image block is fitted with a three-dimensional curve according to the above fitting function.
  • the transformation method is as follows: since the height of the image is H: 33, the middle row (that is, the 17th row) is kept untransformed.
  • with the middle row 17 as the zero-point reference and the number of any other row being i, the i-th row is translated along the positive direction of the X axis by -(i-17)*(C/B) pixels. After all rows have been traversed, the transformation is complete. For example:
  • the initial image is converted, and the converted initial image becomes as shown in FIG. 26B.
  • the area within the rectangular frame (W: 33, H: 33) located in the middle of the converted initial image is intercepted as the converted image block; the intercepted converted image block is shown in FIG. 26C.
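The row-by-row translation and centre crop just described can be sketched directly. A minimal sketch assuming the fringe tilt C/B is already known and rounding shifts to whole pixels; the middle (17th, index 16) row is the untransformed zero reference.

```python
import numpy as np

def convert_deformed_block(initial, c_over_b, out_w=33):
    """Shift row i by -(i - mid) * (C/B) pixels along +X, then cut out
    the central out_w-wide rectangular frame (cf. FIGs. 26A-26C)."""
    h, w = initial.shape
    mid = h // 2                                  # the middle (17th) row
    out = np.empty_like(initial)
    for i in range(h):
        shift = int(round(-(i - mid) * c_over_b))
        out[i] = np.roll(initial[i], shift)       # integer-pixel translation
    left = (w - out_w) // 2
    return out[:, left:left + out_w]              # central W:33 frame
```

Applied to a 63x33 initial block containing a fringe slanted by one pixel per row (C/B = 1), the converted 33x33 block contains a vertical fringe.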
  • step S222 may include the following method steps:
  • Step S2241 obtains the current deformed image block
  • Step S2242 generates the current deformation amount of the target object based on the current deformed image block
  • Step S2243 converts the current deformed image block into the current reference image block based on the current deformation amount;
  • the actual spatial position coordinates of point A in the image, in the image sensor coordinate system, can be obtained from the initial image. According to the deformation of point A relative to the reference plane L' (for example, the image deformation obtained in the above embodiment converted based on the calibration result of the image sensor, or obtained based on the template image group described in the above embodiment), the direction of point A corresponding to the projected image of the image projector 12 can be calculated, and hence the position coordinates, in the image sensor coordinate system, of the virtual point A' projected on the reference plane O. According to the conversions between the first and second image sensor coordinate systems and the first and second image coordinate systems, the coordinates of A' in the first and second image coordinate systems can be obtained respectively; proceeding in this way, the corrected image of the deformed area can be obtained.
  • step S222 may be based on Fourier transform to convert the deformed area into a reference image.
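One plausible reading of the Fourier-transform based conversion: for a fringe pattern of the form sin(B·x + C·y), the 2-D spectrum peaks at the bin pair proportional to (B, C), so the tilt C/B can be read off the peak location and then removed, e.g. by the row translation described earlier. The details below are an assumption; the text does not spell the procedure out.

```python
import numpy as np

def tilt_from_fft(block):
    """Estimate the fringe tilt C/B from the dominant 2-D FFT peak."""
    h, w = block.shape
    mag = np.abs(np.fft.fft2(block - block.mean()))
    ky, kx = np.unravel_index(int(np.argmax(mag)), mag.shape)
    fy = ky - h if ky > h // 2 else ky       # signed y-frequency (cycles/image)
    fx = kx - w if kx > w // 2 else kx       # signed x-frequency
    if fx < 0:                               # use the conjugate peak with fx > 0
        fx, fy = -fx, -fy
    return (fy / h) / (fx / w)               # = C/B
```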
  • a 3D imaging method is further provided, and the 3D imaging method includes the matching method described in the above embodiment, and further includes the steps:
  • S130 generates a 3D image of the target object according to the matching result.
  • the posture information, in three-dimensional space, of the point corresponding to each matching point pair can be obtained; based on the multiple matching point pairs included in the matching result, a 3D image of the object can be drawn in three-dimensional space.
  • the posture information can be 3D coordinates of the target in a preset coordinate system. The motion of a rigid body in three-dimensional space has a total of 6 degrees of freedom and can be divided into rotation and translation, with 3 degrees of freedom each.
  • the translation of a rigid body in three-dimensional space is an ordinary linear transformation, and a 3x1 vector can be used to describe the translation position; commonly used descriptions of the rotation pose include but are not limited to: rotation matrix, rotation vector, quaternion, Euler angles and Lie algebra.
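For concreteness, one of the rotation descriptions listed above, the rotation vector (axis times angle), converts to a rotation matrix via the standard Rodrigues formula. This is a sketch of textbook math, not code from the patent.

```python
import numpy as np

def rotvec_to_matrix(r):
    """Rodrigues formula: rotation vector (3,) -> rotation matrix (3, 3)."""
    r = np.asarray(r, dtype=float)
    theta = np.linalg.norm(r)                 # rotation angle
    if theta < 1e-12:
        return np.eye(3)                      # no rotation
    k = r / theta                             # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])          # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```

Together with a 3x1 translation vector, such a matrix describes all 6 degrees of freedom of a rigid-body pose.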
  • a gesture recognition method is also provided.
  • the gesture recognition method includes the matching method described in the above embodiment, and further includes the steps:
  • S140 generates a posture recognition result of the target object according to the matching result.
  • for each matching point pair, the posture information of the corresponding point in three-dimensional space can be obtained based on the triangulation algorithm or based on the corresponding reference image; based on the posture information of one or more such three-dimensional space points, the posture recognition result of the target object is obtained.
  • the posture recognition result of the target object may be posture information representing the position and posture of the entire object, or posture information of a certain target position associated with the target object (for example, the target position may be located on the target object or on the bounding frame of the target object).
  • the posture information of the target object can be obtained.
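The triangulation route mentioned above can be sketched for one matched point pair from a rectified stereo pair: depth follows from the disparity as Z = f·baseline/(xl − xr), then X and Y by back-projection. The parameter names (focal length f in pixels, baseline between the two sensors, image coordinates relative to the principal point) are illustrative assumptions; the text only names the triangulation algorithm.

```python
def triangulate_pair(xl, xr, y, f, baseline):
    """Recover the 3-D point of one matching point pair (rectified stereo)."""
    disparity = xl - xr           # horizontal offset between the two views
    Z = f * baseline / disparity  # depth from similar triangles
    X = xl * Z / f                # back-project to 3-D
    Y = y * Z / f
    return X, Y, Z
```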
  • the Linemod method refers to storing in advance the point cloud images corresponding to multiple viewing angles of the 3D model of the target object (such as a CAD model), and matching the point cloud image obtained in the above embodiment against the images in this library to determine the posture information of the target object.
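The Linemod-style lookup just described can be caricatured as a nearest-template search: a library maps each stored viewing angle of the CAD model to a rendered point cloud, and the captured cloud is matched against every entry. Mean nearest-neighbour distance stands in here for Linemod's actual similarity measure; this is an illustrative simplification, not the real algorithm.

```python
import numpy as np

def match_pose_library(cloud, library):
    """Return the pose label whose template cloud is closest to `cloud`.
    `library` maps a pose label to an (N, 3) template point cloud."""
    def mean_nn(a, b):
        # mean distance from each point of a to its nearest point in b
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return min(library, key=lambda pose: mean_nn(cloud, library[pose]))
```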
  • an image matching device is provided, and the image matching device includes:
  • the image acquisition module 110 is used to acquire an initial image
  • the result generating module 120 is configured to detect the deformed area and/or the non-deformed area in the initial image, generate a matching result for the deformed area and/or generate a matching result for the non-deformed area.
  • the above result generation module 120 includes:
  • the image acquisition unit 121 is configured to acquire the current image block in the initial image
  • the image detection unit 123 is configured to detect, based on the image characteristics of the current image block, whether the current image block is a current deformed image block or a current non-deformed image block;
  • the above result generation module 120 includes:
  • the image acquisition unit 122 is configured to acquire the current image block in the initial image
  • the image matching unit 124 is used to match the current image block; the result judgment unit 126 is used to judge whether a matching result can be generated;
  • the image determining unit 128 is configured to determine that, if not, the current image block is a current deformed image block, and if so, the current image block is a current non-deformed image block.
  • the above result generation module 120 includes:
  • the deformation acquiring unit 221 is configured to acquire the deformation area
  • the deformation conversion unit 222 is configured to convert the deformation area into a reference image
  • the reference matching unit 223 is configured to perform matching on the reference image to obtain a matching result for the deformed area.
  • the above result generating module 120 includes:
  • the template obtaining unit 321 is configured to obtain a template image group in which the pre-generated deformation area has a unit deformation in sequence;
  • the template matching unit 322 is configured to perform matching on the template image group to obtain a matching result.
  • the above result generation module 120 includes:
  • the image generating unit 421 is configured to sequentially generate image groups after unit deformation of the deformed area occurs;
  • the image matching unit 422 is configured to perform matching on the image group to obtain a matching result.
  • the image detection unit 123 includes:
  • the image acquisition unit 1231 is used to acquire the current image block
  • the extreme-value extraction unit 1233 is used to extract the extreme-value pixels in each row of the current image block
  • the extreme-value fitting unit 1235 is used to fit the extreme-value pixels in each row to obtain the current fitted line of the current image block;
  • the deformation detection unit 1237 is used to detect the current image deformation of the current fitted line relative to the reference line;
  • the image detection unit 1239 is configured to detect, according to the current image deformation, whether the current image block is a current deformed image block or a current non-deformed image block.
  • the image detection unit 123 includes:
  • the image acquisition unit 1232 acquires the current image block
  • the function extraction unit 1234 fits the current image block to obtain a fitting function
  • the image detection unit 1236 detects, according to the fitting function, whether the current image block is a current deformed image block or a current non-deformed image block.
  • the deformation conversion unit 222 includes:
  • the image acquiring unit 2221 is used to acquire the current deformed image block in the deformed area
  • the extreme-value extraction unit 2222 is configured to extract the extreme-value pixels in each row of the current deformed image block
  • the extreme-value fitting unit 2223 is configured to fit the extreme-value pixels in each row to obtain the current fitted line;
  • the deformation calculation unit 2224 is used to calculate the current image deformation of the current fitted line relative to the reference line;
  • the image conversion unit 2225 is configured to convert the current deformed image block into the current reference image block based on the current image deformation.
  • the deformation conversion unit 222 includes:
  • the image acquiring unit 2231 is used to acquire the current deformed image block in the deformed area
  • the function extraction unit 2232 is used to fit the current deformed image block to obtain the current fitting function
  • the image conversion unit 2233 is configured to convert the current deformed image block into the current reference image block based on the current fitting function.
  • the deformation conversion unit 222 includes:
  • the image acquiring unit 2241 is used to acquire the current deformed image block of the deformed area
  • the deformation generating unit 2242 is configured to generate the current deformation of the target object based on the current deformed image block;
  • the image conversion unit 2243 is configured to convert the current deformed image block into the current reference image block based on the current deformation amount.
  • the deformation conversion unit 222 includes:
  • the image conversion unit 2251 is configured to convert the deformed area into a reference image based on Fourier transform.
  • As shown in FIG. 21, in one embodiment, a gesture recognition device is provided, and the gesture recognition device includes:
  • the posture generation module 130 is configured to generate posture information of the target object according to the matching result.
  • a 3D imaging device is provided, and the 3D imaging device includes:
  • the 3D imaging module 140 is configured to generate a 3D image of the target object according to the matching result.
  • each module in each of the above-mentioned devices may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded, in the form of hardware, in or be independent of the processor of the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer-readable storage medium stores a computer program; when the computer program is executed by a processor, it implements the steps of the image matching method, 3D imaging method and/or gesture recognition method described in the above embodiments.
  • a computer device is also provided.
  • the computer device includes a memory, a processor, and a computer program that is stored in the memory and can run on the processor.
  • the processor executes the computer program, the steps of the image matching method, 3D imaging method, and/or gesture recognition method described in the above embodiments are implemented.
  • Industrial control computers have the essential attributes and characteristics of computers: a central processing unit (CPU), internal storage such as a hard disk and memory, external storage such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card, an operating system, a control network and protocols, computing power, and a friendly man-machine interface. The purpose is to provide reliable, embedded, intelligent computers and industrial control computers for other structures/equipment/systems.
  • the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory and executed by the processor to complete the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the control unit.
  • the so-called processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory may be a storage device built into the terminal, such as a hard disk or an internal memory.
  • the memory may also be an external storage device of the control unit, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card equipped on the control unit.
  • further, the memory may include both an internal storage unit of the control unit and an external storage device.
  • the memory is used to store the computer program and other programs and data required by the terminal.
  • the memory can also be used to temporarily store data that has been output or will be output.
  • FIG. 23 is only an example of a computer device and does not constitute a limitation on the computer device, which may include more or fewer components than shown in the figure, a combination of certain components, or different components; for example,
  • the control unit may also include input and output devices, network access devices, buses, and the like.
  • the disclosed devices and methods can be implemented in other ways.
  • the embodiments of each device described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the present invention implements all or part of the processes in the above-mentioned embodiments and methods, which can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium; when the program is executed by the processor, it can implement the steps of the foregoing method embodiments.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
  • the content contained in the computer-readable medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided are an image matching, 3D imaging and pose recognition method, a device, and a system. The image matching method comprises: acquiring an initial image; and performing detection on the initial image to obtain a distorted region and/or a non-distorted region, and generating a matching result for the distorted region and/or a matching result for the non-distorted region. In the technical solution of the present invention, detection is performed on an initial image to obtain a distorted region and/or a non-distorted region, and then a corresponding scheme is used to perform matching for the distorted region and/or the non-distorted region, such that a precise image matching result can be obtained even when a target object is offset from respective reference planes.

Description

Image matching, 3D imaging and gesture recognition method, device and system

Technical Field

The present invention relates to the technical field of image matching, and in particular to methods, devices and systems for image matching, 3D imaging and gesture recognition.

Background Art

After a single image sensor or multiple image sensors send the control unit the collected images of a target object onto which an image has been projected (the projected image is unique within a certain spatial range, or follows a gradual-change pattern and is unique within that spatial range, etc.), the control unit matches the single image collected by a single image sensor against a reference image, or matches the two images collected by the left and right image sensors, and then performs 3D imaging or posture recognition of the target object according to the matching result.

However, it should be noted that, taking two image sensors as an example, when the whole or part of the surface of the object deflects, bends, etc. relative to the direction parallel to the image sensor chips (i.e. the reference plane), the images collected by the two image sensors deform asynchronously; for example, certain features in the images are deflected to the left and to the right respectively, or are enlarged and reduced respectively. Taking a single image sensor as an example, since the reference image is generally collected with the target object parallel to the image sensor chip, or at a preset angle to it, when in actual operation the placement of the target object is offset from its placement in the reference image (i.e. the reference plane), part or all of the collected image is also deformed, so that the deformed part of the image cannot be well matched against the reference image.

Summary of the Invention

In view of this, the present invention provides a method, device and system for image matching, 3D imaging and gesture recognition.
A first aspect of the present invention provides an image matching method. The image matching method includes:

acquiring an initial image;

detecting a deformed area and/or a non-deformed area in the initial image, and generating a matching result for the deformed area and/or a matching result for the non-deformed area.

Further, the detecting a deformed area and/or a non-deformed area in the initial image includes:

acquiring a current image block in the initial image;

based on the image characteristics of the current image block, detecting whether the current image block is a current deformed image block or a current non-deformed image block; wherein the non-deformed area is the set of current non-deformed image blocks, and the deformed area is the set of current deformed image blocks.

Further, the detecting a deformed area and/or a non-deformed area in the initial image includes:

acquiring a current image block in the initial image;

matching the current image block and judging whether a matching result can be generated;

if not, the current image block is a current deformed image block; if so, the current image block is a current non-deformed image block; wherein the non-deformed area is the set of current non-deformed image blocks, and the deformed area is the set of current deformed image blocks.

Further, the generating a matching result for the deformed area includes:

acquiring the deformed area;

converting the deformed area into a reference image;

matching the reference image to obtain a matching result for the deformed area.
Further, the converting the deformed area into a reference image includes:

acquiring a current deformed image block in the deformed area;

extracting the extreme-value pixel in each row of the current deformed image block;

fitting the extreme-value pixels of the rows to obtain a fitted line;

calculating the image deformation of the fitted line relative to a reference line;

based on the image deformation, converting the current deformed image block into a current reference image block.

Further, the converting the deformed area into a reference image includes:

converting the deformed area into a reference image based on Fourier transform.

Further, the converting the deformed area into a reference image includes:

acquiring a current deformed image block in the deformed area;

fitting the current deformed image block to obtain a fitting function;

based on the fitting function, converting the current deformed image block into a current reference image block.

Further, the converting the deformed area into a reference image includes:

acquiring a current deformed image block of the deformed area;

based on the current deformed image block, generating the deformation amount of the target object;

based on the deformation amount, converting the current deformed image block into a current reference image block.

Further, the generating a matching result for the deformed area includes:

sequentially generating an image group of the deformed area after unit deformations occur;

matching the image group to obtain a matching result.

Further, the generating a matching result for the deformed area includes:

acquiring a pre-generated template image group of the deformed area after unit deformations occur in sequence;

matching the template image group to obtain a matching result.

Further, the initial image is an image, collected by an image sensor, of a target object onto which an image has been projected; the projected image follows a periodic gradual-change pattern and is unique within a certain spatial range, or is unique within a certain spatial range.
A second aspect of the present invention provides a gesture recognition method. The gesture recognition method includes:

the image matching method according to any one of the first aspect; and

generating posture information of the target object according to the matching result.

A third aspect of the present invention provides a 3D imaging method. The 3D imaging method includes:

the image matching method according to any one of the first aspect; and

generating a 3D image of the target object according to the matching result.

A fourth aspect of the present invention provides an image matching device. The image matching device includes:

an image acquisition module, which acquires an initial image;

an image matching module, which detects a deformed area and/or a non-deformed area in the initial image, and generates a matching result for the deformed area and/or a matching result for the non-deformed area.

A fifth aspect of the present invention provides a gesture recognition device. The gesture recognition device includes:

the image matching device of the fourth aspect; and

a posture generation module, which generates posture information of the target object according to the matching result.

A sixth aspect of the present invention provides a 3D imaging device. The 3D imaging device includes:

the image matching device of the fourth aspect; and

an image generation module, which generates a 3D image of the target object according to the matching result.

A seventh aspect of the present invention provides a system. The system includes: an image projector, an image sensor and a control unit;

the image projector is used to project an image onto a target object;

the image sensor is used to collect an initial image of the target object after the image has been projected onto it;

the control unit is used to implement the steps of the image matching method of the first aspect, the gesture recognition method of the second aspect, and/or the 3D imaging method of the third aspect.

An eighth aspect of the present invention provides a computer device. The computer device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the image matching method of the first aspect, the gesture recognition method of the second aspect, and/or the 3D imaging method of the third aspect are implemented.

A ninth aspect of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the steps of the image matching method of the first aspect, the gesture recognition method of the second aspect, and/or the 3D imaging method of the third aspect are implemented.

By first detecting the deformed area and/or the non-deformed area in the initial image, and then matching the detected deformed area and/or non-deformed area in a corresponding way, an accurate image matching result can still be obtained even when the target object deviates from the respective reference plane in various ways.
Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in describing the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be derived from them without creative effort.
Fig. 1A is a first schematic structural diagram of an embodiment of the system provided by the present invention; Fig. 1B is a second schematic structural diagram of an embodiment of the system provided by the present invention; Fig. 1C is a third schematic structural diagram of an embodiment of the system provided by the present invention;
Fig. 2 is a schematic flowchart of an embodiment of the image matching method provided by the present invention;
Fig. 3 is a first schematic flowchart of an embodiment of detecting deformed regions and/or non-deformed regions in an initial image provided by the present invention;
Fig. 4 is a second schematic flowchart of an embodiment of detecting deformed regions and/or non-deformed regions in an initial image provided by the present invention;
Fig. 5 is a first schematic flowchart of an embodiment of generating a matching result for a deformed region provided by the present invention;
Fig. 6 is a second schematic flowchart of an embodiment of generating a matching result for a deformed region provided by the present invention;
Fig. 7 is a third schematic flowchart of an embodiment of generating a matching result for a deformed region provided by the present invention;
Fig. 8 is a first schematic flowchart of an embodiment of detecting, based on image features, whether a current image block is a current deformed image block or a current non-deformed image block provided by the present invention;
Fig. 9 is a second schematic flowchart of an embodiment of detecting, based on image features, whether a current image block is a current deformed image block or a current non-deformed image block provided by the present invention;
Fig. 10 is a first schematic flowchart of an embodiment of converting a deformed region into a base image provided by the present invention;
Fig. 11 is a second schematic flowchart of an embodiment of converting a deformed region into a base image provided by the present invention;
Fig. 12 is a third schematic flowchart of an embodiment of converting a deformed region into a base image provided by the present invention;
Fig. 13 is a schematic flowchart of an embodiment of 3D imaging provided by the present invention;
Fig. 14 is a schematic flowchart of an embodiment of the pose recognition method provided by the present invention;
Fig. 15 is a first structural block diagram of an embodiment of the matching device provided by the present invention;
Fig. 16 is a second structural block diagram of an embodiment of the matching device provided by the present invention;
Fig. 17 is a third structural block diagram of an embodiment of the matching device provided by the present invention;
Fig. 18 is a fourth structural block diagram of an embodiment of the matching device provided by the present invention;
Fig. 19 is a fifth structural block diagram of an embodiment of the matching device provided by the present invention;
Fig. 20 is a sixth structural block diagram of an embodiment of the matching device provided by the present invention;
Fig. 21 is a structural block diagram of an embodiment of the pose recognition device provided by the present invention;
Fig. 22 is a structural block diagram of an embodiment of the 3D imaging device provided by the present invention;
Fig. 23 is a structural block diagram of an embodiment of the computer device provided by the present invention;
Fig. 24 is a schematic diagram of an embodiment of image conversion provided by the present invention;
Fig. 25A is a first schematic diagram of an embodiment of an initial image provided by the present invention; Fig. 25B is a second schematic diagram of an embodiment of an initial image provided by the present invention; Fig. 25C is a third schematic diagram of an embodiment of an initial image provided by the present invention; Fig. 25D is a fourth schematic diagram of an embodiment of an initial image provided by the present invention;
Fig. 26A is a schematic diagram of an embodiment of a current pre-processed image block provided by the present invention; Fig. 26B is a schematic diagram of an embodiment of the converted current pre-processed image block provided by the present invention; Fig. 26C is a schematic diagram of the cropped current image block provided by the present invention;
Fig. 27 is a schematic diagram of intermediate results produced by the method for detecting deformed regions provided by the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments of the present invention and the features of the embodiments may be combined with one another. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
As shown in Fig. 1A, 1B, or 1C, an embodiment of the present invention provides a system that includes an image sensor 11, an image projector 12, and a control unit 13.
The image projector 12 is configured to project an image onto a target object.
In one embodiment, the image projector 12 may be configured to project onto the target object an image that varies according to a periodic gradient pattern and is unique within a certain spatial range; for details, see the earlier patent application (Publication No. CN107241592A).
In one embodiment, the periodic gradient pattern may include, but is not limited to, periodically varying sine-wave (or cosine-wave, etc.) fringes. Specifically, the sine wave need not strictly conform to the sine-wave standard; the fringes may merely approximate a sine wave, for example irregular sinusoidal fringes, or linearly varying sinusoidal fringes (also called triangular waves).
Here, being unique within a certain spatial range means that when an image frame of a given size is moved over any region of the image, the image within the frame's corresponding region always remains unique, for example sine-wave fringes overlaid with a scattered pattern.
In another embodiment, the image projector 12 may instead project onto the target object an image that is unique within a certain spatial range, such as a scattered pattern.
It should be noted that, in addition to the two kinds of images listed above, the image projector 12 may project any other image that satisfies the needs of subsequent matching.
Specifically, the image projector 12 may be a projector, a laser projector, or any other image projector, existing now or developed in the future, capable of projecting the above images.
The image sensor 11 is configured to capture an initial image of the target object after the image has been projected onto it.
The image sensor 11 sends the captured initial image to the control unit 13, a memory, a server, or the like.
Specifically, the image sensor 11 may be, but is not limited to, a camera, a video camera, a scanner, or another device with similar functionality (a mobile phone, a computer, etc.).
There may be one group of image sensors 11 (as shown in Figs. 1A and 1B) or multiple groups arranged around the target object (as shown in Fig. 1C); each group may include one image sensor (Fig. 1A), two image sensors (Figs. 1B and 1C), or more than two image sensors (drawings omitted).
The image sensor 11 may be fixed relative to the target object or movable relative to it; this embodiment imposes no limitation.
The control unit 13 is communicatively connected to the image projector 12 and the image sensor 11 by wire or wirelessly and communicates with both. Wireless options include, but are not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra-wideband), and other wireless connection methods existing now or developed in the future.
The control unit 13 may be part of an independent computing device, part of the image projector 12, and/or part of the image sensor 11; in this embodiment, for convenience of description, the control unit 13 is shown as a separate component.
For the specific definition of the control unit 13, see the descriptions of the image matching, 3D imaging, and/or pose recognition methods below. The control unit may be a programmable logic controller (PLC), a field-programmable gate array (FPGA), a personal computer (PC), an industrial personal computer (IPC), a smartphone, a tablet, a server, or the like. The control unit generates program instructions according to a pre-stored program in combination with manually entered information or parameters, data captured by an external image sensor, and so on.
As shown in Fig. 2, in one embodiment the present invention provides an image matching method. Taking its application to the system of the above embodiment as an example, the image matching method includes the following steps:
Step S110: acquire an initial image.
Step S120: detect deformed regions and/or non-deformed regions in the initial image, and generate matching results for the deformed regions and/or matching results for the non-deformed regions.
Detection determines the deformed and/or non-deformed regions in the initial image; that is, the initial image may contain both deformed and non-deformed regions, may consist entirely of deformed regions, or may consist entirely of non-deformed regions.
Corresponding matching results are generated for the detected deformed and/or non-deformed regions, each in its own way. Non-deformed regions can be matched directly using the image matching method described in the embodiments below, whereas deformed regions must first undergo certain processing (described in further detail in later embodiments) before being matched with that method.
By first detecting the deformed and/or non-deformed regions in the initial image and then matching each kind of region in the corresponding way, accurate image matching results can still be obtained even when the target object changes in various ways relative to the reference plane.
For ease of understanding, the above method steps are described in further detail below.
Step S110: acquire an initial image.
The initial image of the target object onto which the image has been projected is acquired as captured and sent by the image sensor in real time; alternatively, the initial image is obtained from a memory or server, or is formed by applying certain processing (e.g., cropping, brightness normalization) to the initial image captured and sent in real time by the image sensor.
In one embodiment, as shown in Fig. 1A and as described above, one group of image sensors may be provided, each group including one image sensor; there is then a single initial image, and a series of reference images of the target object with the projected image, captured by the image sensor at different distances, is stored in advance. The initial image is subsequently matched against this series of reference images.
In one embodiment, as shown in Fig. 1B and as described above, one group of image sensors may be provided, each group including two image sensors 11; each group of initial images then comprises two images, a first initial image and a second initial image.
In one embodiment, as shown in Fig. 1C and as described above, N groups of image sensors are provided, where N is an integer greater than or equal to 2; each group consists of two image sensors 11, so each group of initial images comprises a first initial image and a second initial image.
When the image sensors are fixed relative to the target object, a single group of image sensors, limited by its field of view, may be unable to capture an image of the entire target object at once. The initial images must therefore be captured separately by multiple groups of mutually calibrated image sensors and then stitched together as needed.
Specifically, any image stitching method existing now or developed in the future may be used, for example feature-based stitching or region-based stitching.
In one embodiment, a region-based stitching method can start from the gray values of the image to be stitched: the gray-value difference between a region of the image to be stitched and a region of the same size in the reference image is computed using least squares or another method, and these differences are compared to judge the similarity of the overlapping regions of the images to be stitched, thereby obtaining the extent and position of the overlap and enabling stitching.
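The region-based criterion just described can be sketched in a few lines. This is an illustrative reconstruction only: the function name, the strip width, and the sum-of-squared-grey-differences (least-squares) scoring are assumptions for the sketch, not details fixed by this disclosure.

```python
import numpy as np

def find_overlap(ref, img, strip_w=10):
    """Locate where the left edge of `img` overlaps `ref` by minimising
    the sum of squared grey-level differences (a least-squares criterion)."""
    strip = img[:, :strip_w].astype(np.float64)
    best_x, best_err = 0, None
    for x in range(ref.shape[1] - strip_w + 1):
        err = ((ref[:, x:x + strip_w].astype(np.float64) - strip) ** 2).sum()
        if best_err is None or err < best_err:
            best_x, best_err = x, err
    return best_x

# Synthetic check: the image to be stitched starts 30 columns into `ref`.
rng = np.random.default_rng(1)
ref = rng.integers(0, 255, size=(20, 60)).astype(np.uint8)
img = ref[:, 30:]
x0 = find_overlap(ref, img)
```

Here `find_overlap` returns the column of the reference image at which the new image best aligns; in the synthetic check the overlap is planted at column 30.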
In one embodiment, feature-based stitching derives image features from the pixels and then, using these features as the criterion, searches for and matches the corresponding feature regions of the overlapping parts.
Specifically, each group of initial images differs according to the number of image sensors, which may be any number greater than or equal to one. Since imaging with more than two image sensors amounts to combining pairs of them, i.e., it repeats the imaging method for two image sensors, this embodiment is described using the example of a single group of image sensors containing either one or two image sensors.
Step S120: detect deformed regions and/or non-deformed regions in the initial image, and generate matching results for the non-deformed regions and/or matching results for the deformed regions, respectively.
Here, a deformed region is a region of the initial image that is inconsistent with the base image. A deformed region arises when the target object (in the one-sensor case) or part of its surface (in the two-sensor case) is deflected, bent, or otherwise changed relative to the reference plane (with two image sensors, the reference plane may be a plane parallel to the sensor chip; with one image sensor, it may be a plane through the target object, e.g., its center, parallel to the plane used when the reference image was captured), so that the corresponding part of the image captured by the image sensor is deformed relative to the base image (with two image sensors, the base image is the image captured when the surface of the target object is parallel to the sensor chip; with one image sensor, the base image is the reference image). Specifically, the inconsistency may take the form of deflection, stretching, compression, and/or bending of the deformed region relative to the base image. For example, taking a base image containing a sine-wave feature portion, the sine-wave image in the deformed region may be deflected to the left (Fig. 25A) or right (Fig. 25B), stretched (Fig. 25C), compressed (Fig. 25D), or bent (drawing omitted) relative to the sine wave in the base image.
A non-deformed region is a region of the initial image that is consistent with the base image.
Specifically, the image matching method may include, but is not limited to, the following steps:
In one embodiment, taking two image sensors as an example, matching is performed between the two initial images. Specifically, the initial images comprise a first initial image and a second initial image. When computing the correspondence between pixels of the two images, an image frame of fixed size is usually centered on the pixel to be matched, and the image blocks within such frames are matched. An n*n image block of the first initial image is compared, along the epipolar direction of the two image sensors, with N image blocks of the same size in the second initial image (N being a search range for the disparity between the two images). In one embodiment, the comparison computes the absolute values of the brightness differences between corresponding pixels of the two image blocks and sums them to yield a matching score. This yields N matching scores, whose minimum can be found; the pixel of the second initial image corresponding to this minimum corresponds to the pixel being matched in the first initial image, and so on, giving the matching result for multiple mutually corresponding pixels in the two images.
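The epipolar block matching just described — an n*n block of the first initial image scored against N candidate blocks along the epipolar line by the sum of absolute brightness differences, taking the minimum — can be sketched as follows. The names and the window and search sizes are illustrative assumptions, not values fixed by this disclosure; rectified images are assumed so that the epipolar line is a pixel row.

```python
import numpy as np

def sad_match(left, right, row, col, n=7, search=32):
    """Find the column in `right` whose n*n block best matches the n*n block
    centred at (row, col) in `left`, scanning `search` candidate positions
    along the same row (the epipolar line)."""
    h = n // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(np.int32)
    best_score, best_col = None, None
    for d in range(search):
        c = col - d                       # candidate centre on the epipolar line
        if c - h < 0:
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(np.int32)
        score = np.abs(ref - cand).sum()  # sum of absolute brightness differences
        if best_score is None or score < best_score:
            best_score, best_col = score, c
    return best_col, best_score

# Synthetic check: `right` is `left` shifted 5 pixels along the row,
# so the block centred at column 40 of `left` should match column 35.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(40, 80)).astype(np.uint8)
right = np.zeros_like(left)
right[:, :-5] = left[:, 5:]
col, score = sad_match(left, right, row=20, col=40)
```

In the synthetic check the best match is found at column 35 with a score of 0, i.e., a disparity of 5 pixels.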
In one embodiment with a single image sensor: whereas the embodiment above matches the first initial image against the second initial image, in the one-sensor embodiment the single initial image is matched against the reference image. For the specific matching method, see the embodiment above; it is not repeated here.
As shown in Fig. 3, in one embodiment, "detecting deformed regions and/or non-deformed regions in the initial image" in step S120 may include the following steps:
Step S121: acquire the current image block of the initial image.
As described in the embodiment above, the current image block is the image block currently being matched.
Step S123: based on the image features of the current image block, detect whether the current image block is a current deformed image block or a current non-deformed image block.
Specifically, the initial image includes deformed regions and/or non-deformed regions.
A deformed region is a set of current deformed image blocks; a non-deformed region is a set of current non-deformed image blocks.
As described in the embodiments above, because the image projected onto the surface of the target object has certain features, the deformed regions can be detected by judging whether these features are deformed, and a deformed region can then be converted into the image it would form on the corresponding reference plane.
For example, as described above, the image includes a feature portion of sinusoidally varying fringes. When parallel to the reference plane, these fringes repeat over multiple periods in the horizontal X direction and are aligned, i.e., vertical, in the Y direction. When deformation occurs, the fringe period in the X direction may be stretched or compressed, and/or the fringes may tilt in the Y direction; whether the current image block is deformed can therefore be judged from changes in these feature portions of the image.
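A minimal sketch of such a feature test: estimate the dominant horizontal fringe period of a block with an FFT and flag the block as deformed when the period deviates from the reference period. The FFT-based estimator and the 15% tolerance are assumptions for illustration; the disclosure does not prescribe a particular detector.

```python
import numpy as np

def dominant_period(row):
    """Estimate the dominant horizontal period of one fringe row via the FFT."""
    spec = np.abs(np.fft.rfft(row - row.mean()))
    k = int(np.argmax(spec[1:])) + 1      # skip the DC bin
    return len(row) / k

def is_deformed(block, ref_period, tol=0.15):
    """Flag the block as deformed when its mean stripe period deviates
    from the reference period by more than `tol` (relative)."""
    p = np.mean([dominant_period(r) for r in block])
    return abs(p - ref_period) / ref_period > tol

# Synthetic check: fringes at the reference period of 16 pixels,
# versus fringes stretched to a period of 24 pixels.
x = np.arange(64)
flat = np.tile(np.sin(2 * np.pi * x / 16), (16, 1))
stretched = np.tile(np.sin(2 * np.pi * x / 24), (16, 1))
flat_flag = is_deformed(flat, ref_period=16)
stretched_flag = is_deformed(stretched, ref_period=16)
```

In the check, the block at the reference period passes while the stretched block is flagged as deformed.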
The method for detecting whether the current image block is a current deformed image block or a current non-deformed image block is described in further detail in later embodiments.
As shown in Fig. 4, in one embodiment, "detecting deformed regions and/or non-deformed regions in the initial image" in step S120 may include the following steps:
Step S122: acquire the current image block of the initial image.
Step S124: match the current image block and judge whether a matching result can be generated.
Step S126: if not, treat the current image block as a current deformed image block.
Step S128: if so, treat the current image block as a current non-deformed image block.
Specifically, taking two image sensors as an example, according to the image matching method described in step S120 above, an image frame of preset size is moved in unit steps (e.g., one pixel) across an initial image (e.g., the first initial image) to extract multiple image blocks for matching. When the image of the current block is deformed, the same target deforms differently in the two views (for example, a deformed region of the first initial image is stretched while the corresponding region of the second initial image is compressed), so the current image block cannot be matched well. A current image block that cannot be matched can therefore be regarded as a current deformed image block, while one that can be matched is a current non-deformed image block; the set of current deformed image blocks forms the deformed region, and the set of current non-deformed image blocks forms the non-deformed region.
Similarly, with one image sensor, when the actual target object is tilted or otherwise displaced relative to the position of the target object when the reference image was captured, the actually acquired initial image is likewise deformed relative to the base image (i.e., the reference image), making good matching difficult. It should be noted that in this case the entire target object is usually tilted relative to the reference plane, so the whole image region corresponding to the target object is a deformed region.
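Steps S124-S128 thus reduce to a threshold decision on the best matching score available for the block. The per-pixel threshold below is an assumed tuning parameter, purely for illustration; the disclosure does not fix a criterion for "cannot be matched well".

```python
def classify_block(best_score, block_pixels, per_pixel_thresh=20.0):
    """Steps S124-S128 in miniature: if even the best matching score is
    large (mean absolute brightness difference per pixel above an assumed
    threshold), no valid matching result exists and the block is treated
    as deformed; otherwise it is non-deformed."""
    if best_score / block_pixels > per_pixel_thresh:
        return "deformed"
    return "non-deformed"

# For a 7*7 block: a low residual indicates a good match, a high one a failure.
label_matched = classify_block(best_score=300, block_pixels=49)   # ~6 per pixel
label_failed = classify_block(best_score=2500, block_pixels=49)   # ~51 per pixel
```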
It should be noted that the matching results for the current non-deformed image blocks have already been generated during "matching the current image block" in step S124.
Specifically, "generating matching results for the deformed regions" in step S120 above can be implemented by, but is not limited to, the following steps:
As shown in Fig. 5, in one embodiment, "generating matching results for the deformed regions" in step S120 may include the following steps:
Step S221: acquire a deformed region.
Specifically, as described above, a deformed region may be a set of current deformed image blocks, so the current deformed image block can be acquired, and the subsequent step S222 can convert the current deformed image block into a current base image block.
Step S222: convert the deformed region into a base image.
The method of converting a deformed region into a base image is described in further detail in later embodiments.
Step S223: perform matching on the base image to obtain the matching result for the deformed region.
The deformed region is thus replaced by the base image, and matching is performed on the base image. For the matching method, see the description of the image matching method in the embodiment above; it is not repeated here.
As shown in Fig. 6, in one embodiment, "generating matching results for the deformed regions" in step S120 may include the following steps:
Step S321: acquire a pre-generated group of template images in which the deformed region has successively undergone unit amounts of deformation.
In one embodiment, the template image group may be pre-generated as the group of image templates obtained after the current deformed image block successively undergoes at least one unit amount of deformation. In one embodiment, to perform matching for each current image block, a current template image group is pre-generated for each current image block.
Specifically, the size of each template corresponds to that of the image block.
Specifically, take as the deformation quantity the deflection angle of the target object's surface relative to the reference plane. Suppose, for example, that the plane tilt angles to be computed range from -60 to +60 degrees on the X axis and -60 to +60 degrees on the Y axis, with 20 degrees as one unit; the sampled angles are then -60, -40, -20, 0, 20, 40, and 60, i.e., 7 cases, so the XY tilt combinations require 7*7 = 49 templates. It should be noted that deflection units other than 20 degrees may be set as needed: the smaller the unit, the higher the matching accuracy, but the more matches are required, and hence the higher the cost in time or hardware.
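The 7*7 = 49-template enumeration above can be reproduced directly; the function name below is an assumption for the sketch.

```python
import itertools

def sampled_angles(lo=-60, hi=60, step=20):
    """Sampled tilt angles from lo to hi inclusive, in unit steps."""
    return list(range(lo, hi + 1, step))

x_angles = sampled_angles()                                  # 7 sampled X-axis tilts
xy_templates = list(itertools.product(x_angles, x_angles))   # one (x, y) tilt per template
```

Changing `step` trades matching accuracy against the number of templates to be matched, as the paragraph above notes.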
在一个实施例中,可以根据偏转单位,结合图像投射器和图像传感器的空间位置转换信息预先生成模板图像;也可以根据下面图11所示的实施例中的拟合函数的方法,生成对应的模板图像。In an embodiment, the template images can be generated in advance from the deflection unit combined with the spatial position conversion information of the image projector and the image sensor; alternatively, the corresponding template images can be generated by the fitting-function method of the embodiment shown in FIG. 11 below.
步骤S322针对模板图像组进行匹配,得到匹配结果。Step S322 performs matching on the template image group to obtain a matching result.
为方便理解,下面以针对第一初始图像和第二初始图像进行匹配为例进一步详细说明,比如:依次获取第一初始图像中的当前图像块对应的模板图像(比如:依次为偏转-60、-40、-20、0、20、40、60后的模板图像),则对应第二初始图像的位于同一极线上,可以包括多个待匹配图像块,分别计算各个模板图像与第二初始图像中的待匹配的图像块之间的灰度差的绝对值之和,则对应灰度差的绝对值之和最小的第二初始图像的待匹配图像块与第一初始图像的当前图像块为匹配图像块,因此两个匹配图像块对应的像素点为匹配像素点,从而得到匹配结果。To facilitate understanding, matching between the first initial image and the second initial image is described in further detail as an example. For instance, the template images corresponding to the current image block in the first initial image are obtained in turn (e.g. the template images after deflection by -60, -40, -20, 0, 20, 40 and 60 degrees), and the second initial image may contain multiple candidate image blocks located on the same epipolar line. The sum of the absolute values of the gray-level differences between each template image and each candidate block of the second initial image is computed; the candidate block of the second initial image with the smallest such sum and the current image block of the first initial image are the matched image blocks, so the pixels corresponding to the two matched image blocks are matched pixels, which yields the matching result.
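The gray-difference criterion in this example can be sketched as below (an illustrative pure-Python sketch, not the patent's implementation; blocks are assumed to be lists of rows of gray values):

```python
# Sum-of-absolute-differences (SAD) matching along an epipolar line: every
# template image is compared against every candidate block, and the pair with
# the smallest sum of absolute gray-level differences is the match.

def sad(block_a, block_b):
    """Sum of absolute gray-level differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(templates, candidates):
    """Indices (template, candidate) of the pair with the smallest SAD."""
    scores = ((sad(t, c), ti, ci)
              for ti, t in enumerate(templates)
              for ci, c in enumerate(candidates))
    _, ti, ci = min(scores)
    return ti, ci
```

The winning candidate index identifies the matched block of the second initial image; its pixels then pair with those of the current block of the first initial image.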
需要说明的是,当只包括一张初始图像时,可以按照上面实施例所述的方法对初始图像和参考图像进行匹配,此时,可以将参考图像看做第二初始图像,从而生成匹配结果。It should be noted that when only one initial image is included, the initial image and the reference image can be matched according to the method described in the above embodiment; in this case, the reference image can be regarded as the second initial image, so as to generate the matching result.
在一个实施例中,可以根据偏转单位,结合图像投射器和图像传感器的空间位置转换信息预先生成模板图像组;也可以根据下面图11所示的实施例中的拟合函数的方法,生成对应的模板图像组。In an embodiment, the template image group can be generated in advance from the deflection unit combined with the spatial position conversion information of the image projector and the image sensor; alternatively, the corresponding template image group can be generated by the fitting-function method of the embodiment shown in FIG. 11 below.
如图7所示,在一个实施例中,步骤S120中“生成针对形变区域的匹配结果”可以包括如下方法步骤:As shown in FIG. 7, in one embodiment, "generating a matching result for the deformed area" in step S120 may include the following method steps:
步骤S421依次生成形变区域发生单位形变量后的图像组;Step S421 sequentially generates image groups after unit deformation of the deformed area occurs;
步骤S422针对图像组进行匹配,得到匹配结果。Step S422 performs matching on the image group to obtain a matching result.
在一个实施例中,以2个图像传感器为例,依次获取第一初始图像中的当前图像块偏转单位角度后形成的形变图像块组与第二初始图像中位于同一极线上的多个待匹配图像块进行匹配,分别计算各个形变图像块与第二初始图像中的待匹配的图像块之间的灰度差的绝对值之和,则对应灰度差的绝对值之和最小的第二初始图像的待匹配图像块与第一初始图像的当前图像块为匹配图像块,因此两个匹配图像块对应的像素点为匹配像素点,从而得到匹配结果。相关的其它描述,可以参照上面实施例中模板图像的匹配方法,在此不再赘述。In one embodiment, taking two image sensors as an example, the deformed image block group formed by deflecting the current image block of the first initial image by successive unit angles is matched against the multiple candidate image blocks located on the same epipolar line of the second initial image. The sum of the absolute values of the gray-level differences between each deformed image block and each candidate block of the second initial image is computed; the candidate block of the second initial image with the smallest such sum and the current image block of the first initial image are the matched image blocks, so the pixels corresponding to the two matched image blocks are matched pixels, which yields the matching result. For other related descriptions, refer to the template image matching method in the above embodiment, which will not be repeated here.
在一个实施例中,可以根据偏转单位,结合图像投射器和图像传感器的空间位置转换信息生成发生单位形变量后的图像组;也可以根据下面图9所示的实施例中的拟合函数的方法,生成对应的图像组。In an embodiment, the image group after unit deformation can be generated from the deflection unit combined with the spatial position conversion information of the image projector and the image sensor; alternatively, the corresponding image group can be generated by the fitting-function method of the embodiment shown in FIG. 9 below.
具体的,步骤S123基于当前图像块的图像特征,检测当前图像块为当前形变图像块或当前非形变图像块可以通过但不限于如下方法步骤实现:Specifically, in step S123, based on the image characteristics of the current image block, detecting that the current image block is the current deformed image block or the current non-deformable image block can be implemented by, but not limited to, the following method steps:
如图8所示,在一个实施例中,步骤S123可以包括如下方法步骤:As shown in FIG. 8, in an embodiment, step S123 may include the following method steps:
S1231获取当前图像块;S1231 obtains the current image block;
具体的,根据上面所述,可以将图像框在初始图像上沿图像传感器的极线方向移动,每次移动一个单位,则图像框对应的初始图像上的当前图像为当前图像块。Specifically, as described above, the image frame can be moved over the initial image along the epipolar direction of the image sensor, one unit at a time; the current image covered by the image frame on the initial image is then the current image block.
在一个实施例中,根据后面实施例所述,当需要基于该检测方法进一步将当前形变图像块转换为基准图像时,由于转换后的当前形变图像块的边缘可能会存在空白部分(如图26B所示),为保证转换后的当前图像块内容的完整性,通常需要获取一张比实际需要的匹配的图像块的尺寸更大的当前图像块(如图26A所示),后续再对转换后的图像进行裁剪,从而得到完整的转换为基准图像后的当前形变图像块(如图26C所示)。In one embodiment, as described in the following embodiments, when the current deformed image block further needs to be converted into a reference image based on this detection method, blank parts may appear at the edges of the converted current deformed image block (as shown in FIG. 26B). To ensure the integrity of the content of the converted current image block, it is usually necessary to obtain a current image block larger than the matching image block actually required (as shown in FIG. 26A), and then crop the converted image to obtain the complete current deformed image block converted into the reference image (as shown in FIG. 26C).
S1233提取当前图像块的每行中的最值像素点(比如:灰度值最高点);S1233 extracting the most value pixel point (for example: the highest gray value point) in each row of the current image block;
具体的,根据上面实施例所述,以图像中包括正弦波图像为例,由于成渐变规律,因此,如果没有发生形变时,每个正弦波的波峰或者波谷应该位于同一基准线上,则位于波峰或者波谷处的像素点为最值像素点,比如:灰度值最高或最低。Specifically, as described in the above embodiment, take an image containing a sine-wave pattern as an example. Because the pattern varies gradually, if no deformation occurs the peak or trough of each sine wave should lie on the same reference line; the pixels at the peaks or troughs are then the extreme-value pixels, e.g. those with the highest or lowest gray value.
S1235拟合每行中的最值像素点,得到当前图像块的当前拟合线;S1235 fits the most value pixel points in each row to obtain the current fitting line of the current image block;
将每行中的最值点拟合成一条线(如图27所示),目标物体的两个面分别可以得到线L1和L2。Fit the extreme-value points in each row into a line (as shown in FIG. 27); the two surfaces of the target object yield lines L1 and L2 respectively.
S1237检测当前拟合线相对基准线的当前图像形变量;S1237 detects the current image shape variable of the current fitted line relative to the reference line;
检测L1和L2,其中,L1相对基准线发生偏移,而L2未发生偏移。Detect L1 and L2, where L1 is shifted from the reference line, while L2 is not shifted.
S1239根据当前图像形变量,检测当前图像块为当前形变图像块或当前非形变图像块。S1239, according to the current image shape variable, detects that the current image block is the current deformed image block or the current non-deformable image block.
在一个实施例中,可以判断形变量是否大于或大于等于某一阈值;若是,当前图像块为形变图像块;若否,当前图像块为非形变图像块。In one embodiment, it can be determined whether the deformation amount is greater than or equal to a certain threshold; if so, the current image block is a deformed image block; if not, the current image block is an indeformable image block.
理论上,未发生形变时,则拟合线与基准线的形变量为零,但由于各种误差的存在,因此,往往不能准确表现为零,因此可以预先设定一个阈值,若形变量小于等于或小于该阈值时(在一个实施例中,该阈值也可以为理论值零),则认为当前图像块为当前非形变图像块;若大于或大于等于某一阈值时,则判断当前图像块为当前形变图像块,该拟合线相对基准线的形变量为当前图像块的当前图像形变量。Theoretically, when no deformation occurs, the deformation of the fitted line relative to the reference line is zero. However, because of various errors it is rarely exactly zero, so a threshold can be preset (in one embodiment the threshold may also be the theoretical value zero). If the deformation amount is less than or equal to the threshold, the current image block is regarded as a current non-deformed image block; if it is greater than or equal to the threshold, the current image block is judged to be a current deformed image block, and the deformation of the fitted line relative to the reference line is the current image deformation amount of the current image block.
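Steps S1231-S1239 can be sketched as follows (an illustrative sketch with assumed helper names; the block is a list of rows of gray values, and the slope threshold is an arbitrary assumption):

```python
# Per-row peak extraction plus a least-squares line fit: a vertical peak line
# (slope near zero relative to the reference line) marks a non-deformed block,
# while a tilted peak line marks a deformed one.

def peak_columns(block):
    """Column index of the brightest pixel in each row."""
    return [row.index(max(row)) for row in block]

def fitted_slope(block):
    """Least-squares slope of the line through the (row, peak_column) points."""
    cols = peak_columns(block)
    n = len(cols)
    mean_r = (n - 1) / 2
    mean_c = sum(cols) / n
    num = sum((r - mean_r) * (c - mean_c) for r, c in enumerate(cols))
    den = sum((r - mean_r) ** 2 for r in range(n))
    return num / den

def is_deformed_block(block, slope_threshold=0.1):
    """Deformed when the fitted peak line tilts beyond the threshold."""
    return abs(fitted_slope(block)) > slope_threshold
```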
如图9所示,在一个实施例中,步骤S123可以包括如下方法步骤:As shown in FIG. 9, in an embodiment, step S123 may include the following method steps:
S1232获取当前图像块;S1232 obtains the current image block;
同理,根据上面实施例所述,在一个实施例中,为保证转换后的当前图像块内容的完整性,通常需要获取一张比实际需要的匹配的图像块的尺寸更大的当前图像块。Similarly, according to the above embodiment, in one embodiment, in order to ensure the integrity of the content of the current image block after conversion, it is usually necessary to obtain a current image block whose size is larger than the actual required matching image block. .
S1234对当前图像块进行拟合,得到拟合函数;S1234 fits the current image block to obtain a fitting function;
在一个实施例中,对当前图像块进行拟合,得到拟合函数,以图像包括成周期性正弦变化条纹为例,该拟合函数可以为:Z=Asin(BX+CY+D)+E;设当前初始图像块内某个像素点的像素值强度为Z,横坐标为X,纵坐标为Y;其中,A为正弦函数的振幅;360/B为该函数明暗变化的周期(单位:个像素),C代表该像素点在Y方向上的倾斜程度,当未发生形变时,C为0,在Y方向上,上方每一行图像相对于下方每一行图像沿横向X正方向平移C/B个像素,倾斜角度为arccotangent(C/B);D为该函数在横向X方向的平移量;E为函数在Z方向的平移量;其中,A、D和E为固定值。In one embodiment, the current image block is fitted to obtain a fitting function. Taking an image containing periodic sinusoidal stripes as an example, the fitting function can be: Z=Asin(BX+CY+D)+E, where Z is the intensity of a pixel in the current initial image block, X is its abscissa and Y its ordinate; A is the amplitude of the sine function; 360/B is the period of the function's light-dark variation (in pixels); C represents the tilt of the pattern in the Y direction, and C is 0 when no deformation occurs. In the Y direction, each row of the image is shifted along the positive X direction by C/B pixels relative to the row below it, and the tilt angle is arccotangent(C/B). D is the translation of the function in the horizontal X direction, and E is the translation of the function in the Z direction; A, D and E are fixed values.
S1236根据拟合函数,检测当前图像块为形变图像块或非形变图像块。S1236, according to the fitting function, detects whether the current image block is a deformed image block or a non-deformable image block.
根据上面实施例所述,C代表该像素点在Y方向上的倾斜程度,当未发生形变时,C为0;According to the above embodiment, C represents the inclination of the pixel in the Y direction. When no deformation occurs, C is 0;
另外,由该拟合函数可以知道arccotangent(C/B)为该当前形变图像块的图像形变量。在一个实施例中,可以判断该图像形变量是否大于或大于等于某一阈值;若是,当前图像块为形变图像块;若否,则当前图像块为非形变图像块。In addition, it can be known from the fitting function that arccotangent (C/B) is the image shape variable of the current deformed image block. In one embodiment, it can be determined whether the image deformation amount is greater than or equal to a certain threshold; if so, the current image block is a deformed image block; if not, the current image block is a non-deformable image block.
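Given fitted parameters B and C of the stripe model Z=Asin(BX+CY+D)+E, the classification above can be sketched as follows (an illustrative sketch; the threshold value is an assumption, and `math.atan2` serves as a quadrant-safe arccotangent):

```python
import math

def tilt_angle_deg(B, C):
    """Tilt angle arccotangent(C / B) in degrees; 90 degrees when C == 0."""
    return math.degrees(math.atan2(B, C))

def classify_by_c(C, threshold=0.5):
    """'deformed' when the fitted Y-direction term C exceeds the threshold."""
    return "deformed" if abs(C) > threshold else "non-deformed"
```

For an undeformed block C is 0, giving a 90-degree (vertical-stripe) angle and a "non-deformed" label; a block with a sizeable |C| is labeled "deformed".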
具体的,步骤S222转换形变区域为基准图像可以通过但不限于如下方法步骤实现:Specifically, in step S222, converting the deformed area into a reference image can be achieved through but not limited to the following method steps:
如图10所示,在一个实施例中,步骤S222可以包括如下方法步骤:As shown in FIG. 10, in an embodiment, step S222 may include the following method steps:
步骤S2221获取当前形变图像块;Step S2221 obtains the current deformed image block;
步骤S2222提取当前形变图像块的每行中的最值像素点;Step S2222 extracts the most value pixel points in each row of the current deformed image block;
步骤S2223拟合每行中的最值像素点,得到当前图像块的当前拟合线;Step S2223 fits the most value pixel points in each row to obtain the current fitted line of the current image block;
步骤S2224检测当前拟合线相对基准线的当前图像形变量;Step S2224 detects the current image shape variable of the current fitting line relative to the reference line;
需要说明的是,上述步骤S2221-步骤S2224中的方法步骤的详细描述,可以参见上面实施例中步骤S123的步骤S1231-步骤S1237的描述。另外,在一个实施例中,当步骤S123包括上述步骤S1231-步骤S1237时,则此处的步骤S2221-步骤S2224可以省略。It should be noted that for a detailed description of the method steps in steps S2221-S2224, refer to the description of steps S1231-S1237 of step S123 in the above embodiment. In addition, in an embodiment, when step S123 includes the above steps S1231-S1237, steps S2221-S2224 here can be omitted.
步骤S2225基于当前图像形变量,转换当前形变图像块为当前基准图像块。Step S2225 converts the current deformed image block into the current reference image block based on the current image shape variable.
如图11所示,在一个实施例中,步骤S222可以包括如下方法步骤:As shown in FIG. 11, in an embodiment, step S222 may include the following method steps:
步骤S2231获取当前图像块;Step S2231 obtains the current image block;
步骤S2232对当前图像块进行拟合,得到当前拟合函数;Step S2232 performs fitting on the current image block to obtain the current fitting function;
需要说明的是,上述步骤S2231-步骤S2232中的方法步骤的详细描述,可以参见上面实施例中步骤S1232-步骤S1234的描述。另外,在一个实施例中,当步骤S123包括了上述步骤S1232-步骤S1234时,则此处的步骤S2231-步骤S2232可以省略。It should be noted that for a detailed description of the method steps in steps S2231-S2232, refer to the description of steps S1232-S1234 in the above embodiment. In addition, in an embodiment, when step S123 includes the above steps S1232-S1234, steps S2231-S2232 here can be omitted.
步骤S2233基于当前拟合函数,转换当前形变图像块为当前基准图像块;Step S2233 converts the current deformed image block into the current reference image block based on the current fitting function;
可以基于该拟合函数转换形变区域,即使得该形变区域周期为基准周期,图像的C为0,从而得到转换后的形变区域。The deformed area can be converted based on the fitting function, that is, the period of the deformed area is the reference period, and the C of the image is 0, so as to obtain the deformed area after conversion.
为方便理解,下面以一个具体实例对上述方法进一步详细说明。To facilitate understanding, the above method will be further described in detail with a specific example below.
假设需要获取的图像块的尺寸为(宽度W:33,高度H:33),由于转换后的图像边缘会存在空白(如图26B所示),为保证图像块内容的完整性,通常需要截取一张比该图像块的尺寸更大的初始图像块,比如,如图26A所示,选取一个更宽的区域作为初始图像块(W:63,H:33),其中该初始图像块的中心形成一个矩形框,该矩形框的尺寸对应图像块的尺寸(W:33,H:33);除此之外,该初始图像块也可以为任意其它的尺寸,只要保证大于预设的图像块的尺寸,并保证转换后的图像块中的内容完整即可。Assume the size of the image block to be obtained is (width W: 33, height H: 33). Since blanks will appear at the edges of the converted image (as shown in FIG. 26B), to ensure the integrity of the image block content it is usually necessary to crop an initial image block larger than this size. For example, as shown in FIG. 26A, a wider area is selected as the initial image block (W: 63, H: 33), and a rectangular frame is formed at the center of the initial image block whose size corresponds to that of the image block (W: 33, H: 33). The initial image block can also be of any other size, as long as it is larger than the preset image block size and the content of the converted image block remains complete.
具体的,先以矩形框为中心,根据上面的拟合函数,对初始图像块进行三维曲线的拟合。Specifically, first, with the rectangular frame as the center, the initial image block is fitted with a three-dimensional curve according to the above fitting function.
拟合出来的函数为:Z=55.24×sin(20.73×X-4.07×Y+159.58)+97.85。The fitted function is: Z=55.24×sin(20.73×X-4.07×Y+159.58)+97.85.
假设基准图像的参数为:B=20,C=0,基准图像的周期为360/20(单位:个像素);而根据上面函数,B=20.73,则需要转换的初始图像的周期为360/20.73(单位:个像素),即初始图像沿横向X发生了压缩形变,所以需要对初始图像进行横向X的拉伸,拉伸系数为:20.73/20=1.0365。另外,由于基准图像的C=0,所以还需要对拉伸后的图像进行倾斜变换,变换方法为:由于图像的高度H:33,则取中间行(即第17行)不做变换,以中间行17作为0点基准,设其他行数为i,第i行沿着X轴正方向平移-(i-17)*(C/B)个像素,遍历所有行之后,完成变换。比如:Assume the parameters of the reference image are B=20 and C=0, so the period of the reference image is 360/20 (in pixels). According to the function above, B=20.73, so the period of the initial image to be converted is 360/20.73 (in pixels); that is, the initial image has been compressed along the horizontal X direction and needs to be stretched along X with a stretch factor of 20.73/20=1.0365. In addition, since C=0 for the reference image, a tilt transformation must also be applied to the stretched image. The transformation is as follows: since the image height is H: 33, the middle row (i.e. row 17) is left unchanged and used as the zero reference; any other row i is shifted along the positive X axis by -(i-17)*(C/B) pixels; after all rows are traversed, the transformation is complete. For example:
第18行整体沿X轴正方向平移-(18-17)*C/B=-C/B,即4.07/20=0.2035个像素,第19行整体沿X轴正方向平移-2×C/B=0.407个像素等等。完成以上两个变化之后,该初始图像完成转换,转换后的初始图像变成如图26B所示。Row 18 as a whole is shifted along the positive X axis by -(18-17)*C/B=-C/B, i.e. 4.07/20=0.2035 pixels; row 19 as a whole is shifted by -2×C/B=0.407 pixels, and so on. After these two changes are completed, the conversion of the initial image is finished, and the converted initial image becomes as shown in FIG. 26B.
进一步,截取位于变形后的初始图像的中间的矩形框(W:33,H:33)内的区域为该转换后的图像块,截取后的转换图像块如图26C所示。Further, the area within the rectangular frame (W: 33, H: 33) located in the middle of the deformed initial image is intercepted as the converted image block, and the converted image block after the interception is shown in FIG. 26C.
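The arithmetic of this worked example can be reproduced as follows (an illustrative sketch; the helper names are assumed, and the values B=20.73, C=-4.07 against the reference B=20 are taken from the example above):

```python
# Stretch-and-shear correction from the worked example: the horizontal stretch
# restores the fitted stripe period to the reference period, and each row is
# sheared by -(i - 17) * (C / B) pixels to undo the tilt term C.

def stretch_factor(b_fit, b_ref=20.0):
    """Horizontal stretch restoring the fitted stripe period to the reference."""
    return b_fit / b_ref

def row_shift(i, c_fit, b_ref=20.0, center_row=17):
    """Shift of row i along the positive X axis, in pixels."""
    return -(i - center_row) * (c_fit / b_ref)
```

With the example's values, `stretch_factor(20.73)` gives 1.0365, row 17 stays fixed, row 18 shifts by 0.2035 pixels and row 19 by 0.407 pixels, matching the figures in the text.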
如图12所示,在一个实施例中,步骤S222可以包括如下方法步骤:As shown in FIG. 12, in an embodiment, step S222 may include the following method steps:
步骤S2241获取当前形变图像块;Step S2241 obtains the current deformed image block;
步骤S2242基于当前形变图像块,生成目标物的当前形变量;Step S2242 generates the current deformation of the target object based on the current deformation image block;
步骤S2243基于当前形变量,转换当前形变图像块为当前基准图像块;Step S2243, based on the current deformation amount, converts the current deformed image block into the current reference image block;
比如,如图24所示,以空间相对第一图像传感器111和第二图像传感器112偏转的某一斜面L上某点A为例,为简化说明,只以点A与第一图像传感器111为例进行说明。根据预先的标定结果,根据初始图像可以获取图像中A点对应的在图像传感器坐标系下的实际空间位置坐标;根据点A相对基准面L’的形变量(比如:该形变量可以根据上面实施例得到的图像形变量,基于图像传感器的标定结果转换得到;或者基于上面实施例所述的模板图像组对应得到),则可以求取该点A沿图像投射器12的投射图像的方向对应的投射在基准面O的虚拟点A’在图像传感器坐标系下的位置坐标,根据第一、第二图像传感器坐标系和第一、第二图像坐标系的转换,可以求得位置A’分别在第一、第二图像坐标系下的坐标,依此类推,可以获取变形区域修正后的图像。For example, as shown in FIG. 24, take a point A on an inclined surface L that is deflected in space relative to the first image sensor 111 and the second image sensor 112; to simplify the description, only point A and the first image sensor 111 are considered. Based on the pre-calibration result, the actual spatial position coordinates of point A in the image sensor coordinate system can be obtained from the initial image. Based on the deformation of point A relative to the reference plane L' (for example, this deformation can be converted from the image deformation obtained in the above embodiment using the calibration result of the image sensor, or obtained from the template image group described in the above embodiment), the position coordinates, in the image sensor coordinate system, of the virtual point A' obtained by projecting point A onto the reference plane O along the projection direction of the image projector 12 can be calculated. From the conversion between the first and second image sensor coordinate systems and the first and second image coordinate systems, the coordinates of position A' in the first and second image coordinate systems can be obtained respectively; proceeding in this way, the corrected image of the deformed area can be obtained.
在一个实施例中,步骤S222可以根据傅里叶变换,从而将形变区域转换为基准图像。In an embodiment, step S222 may be based on Fourier transform to convert the deformed area into a reference image.
如图13所示,在一个实施例中,还提供一种3D成像方法,该3D成像方法包括上面实施例所述的匹配方法,还包括步骤:As shown in FIG. 13, in one embodiment, a 3D imaging method is further provided, and the 3D imaging method includes the matching method described in the above embodiment, and further includes the steps:
S130根据匹配结果,生成目标物体的3D图像。S130 generates a 3D image of the target object according to the matching result.
根据匹配的结果,每个匹配点对基于三角测量的算法或者基于对应的参考图像,可以得到该匹配点对在三维空间范围内的对应点的姿态信息,基于匹配结果包括的多个匹配点对,即可以绘制该物体在三维空间的3D图像。Based on the matching result, for each matched point pair the posture information of its corresponding point in three-dimensional space can be obtained using a triangulation-based algorithm or the corresponding reference image; from the multiple matched point pairs included in the matching result, a 3D image of the object in three-dimensional space can be drawn.
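For a rectified stereo pair, the triangulation step mentioned above reduces to the standard depth-from-disparity relation Z = f·b/d (a textbook relation, not taken from this embodiment; the focal length, baseline and disparity are assumed inputs):

```python
# Depth of a matched pixel pair in a rectified stereo setup: focal length in
# pixels, baseline in metric units, disparity in pixels between the two views.

def depth_from_disparity(focal_px, baseline, disparity_px):
    """Depth (same units as the baseline) of a matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("a valid match needs a positive disparity")
    return focal_px * baseline / disparity_px
```

Applying this per matched pair yields the point cloud from which the 3D image is drawn.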
具体的,姿态信息可以为针对目标物的预设坐标系的3d坐标,刚体在3维空间的运动可以用3d坐标(共6个自由度)描述,具体的,可以分为旋转和平移,各为3个自由度。刚体在3维空间的平移是普通的线性变换,可以使用一个3x1的向量描述平移位置;而旋转位姿常用的描述方式包括但不限于:旋转矩阵、旋转向量、四元数、欧拉角和李代数。Specifically, the posture information can be 3D coordinates in a preset coordinate system for the target. The motion of a rigid body in three-dimensional space can be described by 3D coordinates with 6 degrees of freedom in total, divided into rotation and translation with 3 degrees of freedom each. The translation of a rigid body in three-dimensional space is an ordinary linear transformation, and a 3x1 vector can describe the translation position; common descriptions of the rotation pose include but are not limited to: rotation matrix, rotation vector, quaternion, Euler angles and Lie algebra.
如图14所示,在一个实施例中,还提供一种姿态识别方法,该姿态识别方法包括上面实施例所述的匹配方法,还包括步骤:As shown in FIG. 14, in one embodiment, a gesture recognition method is also provided. The gesture recognition method includes the matching method described in the above embodiment, and further includes the steps:
S140根据匹配结果,生成目标物体的姿态识别结果。S140 generates a posture recognition result of the target object according to the matching result.
根据匹配的结果,每个匹配点对基于三角测量的算法或者基于对应的参考图像,可以得到该匹配点对在三维空间范围内的对应点的姿态信息,基于一个或者多个三维空间点的姿态信息从而得到目标物体的姿态识别结果。具体的,该目标物体的姿态识别结果可以是代表整个物体的位姿态信息或目标物体关联的某个目标位置(比如:该目标位置可以位于目标物体上或者目标物体的包括框上)的姿态信息。Based on the matching result, for each matched point pair the posture information of its corresponding point in three-dimensional space can be obtained using a triangulation-based algorithm or the corresponding reference image; from the posture information of one or more three-dimensional space points, the posture recognition result of the target object is obtained. Specifically, the posture recognition result of the target object may be posture information representing the pose of the entire object, or posture information of a target position associated with the target object (for example, the target position may be located on the target object or on its bounding frame).
或者根据上面实施例得到的点云图,基于Linemod的方法,从而得到目标物体的姿态信息。其中Linemod的方法是指预先基于目标物体的3D模型(比如:CAD模型)存储在多个角度下对应的点云图,将上面实施例得到的点云图像与图像库中的图像进行匹配,从而确定目标物体的姿态信息。Alternatively, the posture information of the target object can be obtained from the point cloud image obtained in the above embodiment using a Linemod-based method. The Linemod method refers to pre-storing, based on a 3D model of the target object (e.g. a CAD model), the corresponding point cloud images at multiple angles, and matching the point cloud image obtained in the above embodiment against the images in the image library to determine the posture information of the target object.
应该理解的是,虽然图1-14的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图1-14中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the steps in the flowcharts of FIGS. 1-14 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIGS. 1-14 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same time but can be executed at different times, and their execution order is not necessarily sequential: they can be executed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
如图15所示,在一个实施例中,提供一种图像匹配装置,该图像匹配装置包括:As shown in FIG. 15, in one embodiment, an image matching device is provided, and the image matching device includes:
图像获取模块110,用于获取初始图像;The image acquisition module 110 is used to acquire an initial image;
结果生成模块120,用于检测所述初始图像中的形变区域和/或非形变区域,生成针对所述形变区域的匹配结果和/或生成针对所述非形变区域的匹配结果。The result generating module 120 is configured to detect the deformed area and/or the non-deformed area in the initial image, generate a matching result for the deformed area and/or generate a matching result for the non-deformed area.
如图16所示,在一个实施例中,上述结果生成模块120包括:As shown in FIG. 16, in one embodiment, the above result generation module 120 includes:
图像获取单元121,用于获取所述初始图像中的当前图像块;The image acquisition unit 121 is configured to acquire the current image block in the initial image;
图像检测单元123,用于基于所述当前图像块的图像特征,检测所述当前图像块为当前形变图像块或当前非形变图像块;The image detection unit 123 is configured to detect whether the current image block is a current deformed image block or a current non-deformable image block based on the image characteristics of the current image block;
如图17所示,在一个实施例中,上述结果生成模块120包括:As shown in FIG. 17, in one embodiment, the above result generation module 120 includes:
图像获取单元122,用于获取所述初始图像中的当前图像块;The image acquisition unit 122 is configured to acquire the current image block in the initial image;
图像匹配单元124,用于匹配所述当前图像块;结果判断单元126,用于判断是否能够生成匹配结果;The image matching unit 124 is used to match the current image block; the result judging unit 126 is used to judge whether a matching result can be generated;
图像确定单元128,用于若否,所述当前图像块为当前形变图像块;若是,所述当前图像块为当前非形变图像块。The image determining unit 128 is configured to, if not, the current image block is the current deformed image block; if so, the current image block is the current non-deformable image block.
如图18所示,在一个实施例中,上述结果生成模块120包括:As shown in FIG. 18, in one embodiment, the above result generation module 120 includes:
形变获取单元221,用于获取所述形变区域;The deformation acquiring unit 221 is configured to acquire the deformation area;
形变转换单元222,用于转换所述形变区域为基准图像;The deformation conversion unit 222 is configured to convert the deformation area into a reference image;
基准匹配单元223,用于针对所述基准图像进行匹配,得到针对所述形变区域的匹配结果。The reference matching unit 223 is configured to perform matching on the reference image to obtain a matching result for the deformed area.
如图19所示,在一个实施例中,上述结果生成模块120包括:As shown in FIG. 19, in one embodiment, the above result generating module 120 includes:
模板获取单元321,用于获取预生成的形变区域依次发生单位形变量后的模板图像组;The template obtaining unit 321 is configured to obtain a template image group in which the pre-generated deformation area has a unit deformation in sequence;
模板匹配单元322,用于针对模板图像组进行匹配,得到匹配结果。The template matching unit 322 is configured to perform matching on the template image group to obtain a matching result.
如图20所示,在一个实施例中,上述结果生成模块120包括:As shown in FIG. 20, in one embodiment, the above result generation module 120 includes:
图像生成单元421,用于依次生成形变区域发生单位形变量后的图像组;The image generating unit 421 is configured to sequentially generate image groups after unit deformation of the deformed area occurs;
图像匹配单元422,用于针对图像组进行匹配,得到匹配结果。The image matching unit 422 is configured to perform matching on the image group to obtain a matching result.
进一步,在一个实施例中,图像检测单元123包括:Further, in an embodiment, the image detection unit 123 includes:
图像获取部1231,用于获取当前图像块;The image acquisition unit 1231 is used to acquire the current image block;
最值提取部1233,用于提取当前图像块的每行中的最值像素点;The maximum value extraction unit 1233 is used to extract the maximum value pixel points in each row of the current image block;
最值拟合部1235,用于拟合每行中的最值像素点,得到当前图像块的当前拟合线;The best value fitting unit 1235 is used to fit the best value pixel points in each row to obtain the current fitting line of the current image block;
形变检测部1237,用于检测当前拟合线相对基准线的当前图像形变量;The deformation detection unit 1237 is used to detect the current image deformation of the current fitting line relative to the reference line;
图像检测部1239,用于根据当前图像形变量,检测当前图像块为当前形变图像块或当前非形变图像块。The image detection unit 1239 is configured to detect whether the current image block is the current deformed image block or the current non-deformable image block according to the current image deformation.
进一步,在一个实施例中,图像检测单元123包括:Further, in an embodiment, the image detection unit 123 includes:
图像获取部1232获取当前图像块;The image acquisition unit 1232 acquires the current image block;
函数提取部1234对当前图像块进行拟合,得到拟合函数;The function extraction unit 1234 fits the current image block to obtain a fitting function;
图像检测部1236根据拟合函数,检测当前图像块为当前形变图像块或当前非形变图像块。The image detection unit 1236 detects whether the current image block is the current deformed image block or the current non-deformed image block according to the fitting function.
进一步,在一个实施例中,形变转换单元222包括:Further, in an embodiment, the deformation conversion unit 222 includes:
图像获取部2221,用于获取形变区域中的当前形变图像块;The image acquiring unit 2221 is used to acquire the current deformed image block in the deformed area;
最值提取部2222,用于提取所述当前形变图像块的每行中的最值像素点;The maximum value extraction unit 2222 is configured to extract the maximum value pixel points in each row of the current deformed image block;
最值拟合部2223,用于拟合所述每行中的所述最值像素点,得到当前拟合线;The best value fitting unit 2223 is configured to fit the best value pixel points in each row to obtain the current fitted line;
形变计算部2224,用于计算当前拟合线相对基准线的当前图像形变量;The deformation calculation unit 2224 is used to calculate the current image deformation of the current fitting line relative to the reference line;
图像转换部2225,用于基于当前图像形变量,转换当前形变图像块为当前基准图像块。The image conversion unit 2225 is configured to convert the current deformed image block into the current reference image block based on the current image shape variable.
进一步,在一个实施例中,形变转换单元222包括:Further, in an embodiment, the deformation conversion unit 222 includes:
图像获取部2231,用于获取形变区域中的当前形变图像块;The image acquiring unit 2231 is used to acquire the current deformed image block in the deformed area;
函数提取部2232,用于对当前形变图像块进行拟合,得到当前拟合函数;The function extraction part 2232 is used for fitting the current deformed image block to obtain the current fitting function;
图像转换部2233,用于基于当前拟合函数,转换当前形变图像块为当前基准图像块。The image conversion unit 2233 is configured to convert the current deformed image block into the current reference image block based on the current fitting function.
进一步,在一个实施例中,形变转换单元222包括:Further, in an embodiment, the deformation conversion unit 222 includes:
图像获取部2241,用于获取形变区域的当前形变图像块;The image acquiring unit 2241 is used to acquire the current deformed image block of the deformed area;
形变生成部2242,用于基于当前形变图像块,生成目标物的当前形变量;The deformation generating unit 2242 is configured to generate the current deformation of the target object based on the current deformed image block;
图像转换部2243,用于基于当前形变量,转换当前形变图像块为当前基准图像块。The image conversion unit 2243 is configured to convert the current deformed image block into the current reference image block based on the current deformation amount.
进一步,在一个实施例中,形变转换单元222包括:Further, in an embodiment, the deformation conversion unit 222 includes:
图像转换部2251,用于基于傅里叶变换转换形变区域为基准图像。The image conversion unit 2251 is configured to convert the deformed area into a reference image based on Fourier transform.
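Unit 2251 converts the deformed area into a reference image via Fourier transform. One plausible reading, sketched below under the assumption that the projected pattern is periodic and the deformation acts as a per-row cyclic shift, is to recover each row's shift from the phase of the pattern's fundamental frequency (the Fourier shift theorem) and roll the row back; the patent does not fix the exact algorithm, and the function name is invented here.

```python
import numpy as np

def fourier_flatten(block, ref_row=0):
    """Undo per-row translations of a periodic pattern via the shift theorem."""
    ref = np.fft.rfft(block[ref_row])
    k = int(np.argmax(np.abs(ref[1:])) + 1)       # fundamental frequency bin
    n = block.shape[1]
    out = np.empty_like(block)
    for r, row in enumerate(block):
        spec = np.fft.rfft(row)
        # wrapped phase difference at the fundamental -> pixel shift
        dphi = np.angle(spec[k] * np.conj(ref[k]))
        shift = int(round(-dphi * n / (2 * np.pi * k)))
        out[r] = np.roll(row, -shift)
    return out
```

This only resolves shifts smaller than half the pattern period (phase wrapping); larger deformations would need unwrapping, which is outside this sketch.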
如图21所示,在一个实施例中,提供一种姿态识别装置,该姿态识别装置包括:As shown in FIG. 21, in one embodiment, a gesture recognition device is provided, and the gesture recognition device includes:
上述图像匹配装置;及The above-mentioned image matching device; and
姿态生成模块130,用于根据匹配结果,生成目标物体的姿态信息。The posture generation module 130 is configured to generate posture information of the target object according to the matching result.
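The posture generation module 130 turns matching results into pose information. One standard way to do this (not prescribed by the patent) is the Kabsch/SVD rigid alignment of matched 3D point pairs; the function name and interface below are illustrative.

```python
import numpy as np

def pose_from_matches(src, dst):
    """Rigid pose (R, t) from matched 3D point pairs via Kabsch/SVD.

    src, dst: (N, 3) arrays of corresponding points, dst ≈ R @ src + t.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With noisy real matches this would typically be wrapped in a RANSAC loop to reject outlier correspondences.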
如图22所示,在一个实施例中,提供一种3D成像装置,该3D成像装置包括:As shown in FIG. 22, in one embodiment, a 3D imaging device is provided, and the 3D imaging device includes:
上述图像匹配模块;及The above-mentioned image matching module; and
3D成像模块140,用于根据匹配结果,生成目标物体的3D图像。The 3D imaging module 140 is configured to generate a 3D image of the target object according to the matching result.
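The 3D imaging module 140 generates a 3D image from the matching result. In structured-light and stereo systems this is classically done by triangulation, converting each matched disparity into depth via z = f·b/d; the patent leaves the reconstruction step open, so the relation below is an assumption.

```python
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Classic triangulation: z = f * b / d.

    disparity:  matched pixel offset, in pixels (nonzero)
    focal_px:   focal length, in pixels
    baseline_m: projector/sensor (or stereo) baseline, in meters
    """
    return focal_px * baseline_m / disparity
```

Applied per matched pixel, this yields a depth map that can be rendered as the 3D image of the target object.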
关于上述图像匹配装置、3D成像装置、姿态识别装置的具体限定可以分别参见上文中对于图像匹配方法、3D成像方法、姿态识别方法的限定，在此不再赘述。上述各个装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中，也可以以软件形式存储于计算机设备中的存储器中，以便于处理器调用执行以上各个模块对应的操作。For specific limitations on the above image matching device, 3D imaging device, and gesture recognition device, reference may be made to the above limitations on the image matching method, 3D imaging method, and gesture recognition method respectively, which are not repeated here. Each module in each of the above devices may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded, in hardware form, in or independent of a processor of the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
在另一些实施例中，还提供一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时实现上面实施例中所述的图像匹配方法、3D成像方法、和/或姿态识别方法的步骤。In other embodiments, a computer-readable storage medium is also provided. The computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of the image matching method, 3D imaging method, and/or gesture recognition method described in the above embodiments.
有关图像匹配方法、3D成像方法、和/或姿态识别方法的描述参见上面的实施例,在此不再重复赘述。For the description of the image matching method, the 3D imaging method, and/or the gesture recognition method, please refer to the above embodiment, which will not be repeated here.
如图23所示,在另一些实施例中,还提供一种计算机设备,所述计算机设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上面实施例中所述的图像匹配方法、3D成像方法、和/或姿态识别方法步骤。As shown in FIG. 23, in some other embodiments, a computer device is also provided. The computer device includes a memory, a processor, and a computer program that is stored in the memory and can run on the processor. When the processor executes the computer program, the steps of the image matching method, 3D imaging method, and/or gesture recognition method described in the above embodiments are implemented.
有关图像匹配方法、3D成像方法、和/或姿态识别方法的描述参见上面的实施例,在此不再重复赘述。For the description of the image matching method, the 3D imaging method, and/or the gesture recognition method, please refer to the above embodiment, which will not be repeated here.
以计算机和工业控制计算机为例，工业控制计算机具有重要的计算机属性和特征，因此它们都具有计算机中央处理单元(Central Processing Unit,CPU)、硬盘、内存等内部存储器，还具有插接式硬盘，智能存储卡(Smart Media Card,SMC)，安全数字(Secure Digital,SD)卡，闪存卡(Flash Card)等外部存储器，并有操作系统、控制网络和协议、计算能力、友好的人机界面，是为其他各结构/设备/系统提供可靠、嵌入式、智能化的计算机和工业控制计算机。Taking computers and industrial control computers as examples: industrial control computers possess the essential attributes and characteristics of computers, so they all have a central processing unit (CPU) and internal storage such as a hard disk and memory, as well as external storage such as plug-in hard disks, Smart Media Cards (SMC), Secure Digital (SD) cards, and Flash Cards; they also have an operating system, control networks and protocols, computing capability, and a friendly human-machine interface, providing reliable, embedded, intelligent computers and industrial control computers for other structures/devices/systems.
示例性的,所述计算机程序可以被分割成一个或多个模块/单元,所述一个或者多个模块/单元被存储在所述存储器中,并由所述处理器执行,以完成本发明。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述所述计算机程序在所述控制单元中的执行过程。Exemplarily, the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory and executed by the processor to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the control unit.
所称处理器可以是中央处理单元(Central Processing Unit,CPU)，还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
所述存储器可以是所述终端内置的存储设备，例如硬盘或内存。所述存储器也可以是所述控制单元的外部存储设备，例如所述控制单元上配备的插接式硬盘，智能存储卡(Smart Media Card,SMC)，安全数字(Secure Digital,SD)卡，闪存卡(Flash Card)等。进一步地，所述存储器还可以既包括所述控制单元的内部存储单元，也包括外部存储设备。所述存储器用于存储所述计算机程序以及所述终端所需的其他程序和数据。所述存储器还可以用于暂时地存储已经输出或者将要输出的数据。The memory may be a storage device built into the terminal, such as a hard disk or an internal memory. The memory may also be an external storage device of the control unit, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the control unit. Further, the memory may include both an internal storage unit of the control unit and an external storage device. The memory is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
本领域技术人员可以理解，图23仅仅是计算机设备的示例，并不构成对计算机设备的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件，例如所述控制单元还可以包括输入输出设备、网络接入设备、总线等。Those skilled in the art can understand that FIG. 23 is only an example of a computer device and does not constitute a limitation on the computer device; it may include more or fewer components than shown, combine certain components, or have different components. For example, the control unit may also include input/output devices, network access devices, a bus, and the like.
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。In the above-mentioned embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail or recorded in an embodiment, reference may be made to related descriptions of other embodiments.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered as going beyond the scope of the present invention.
在本发明所提供的实施例中，应该理解到，所揭露的各个装置和方法，可以通过其它的方式实现。例如，以上所描述的各个装置的实施例仅仅是示意性的，例如，所述模块或单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口，装置或单元的间接耦合或通讯连接，可以是电性，机械或其它的形式。In the embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are only illustrative: the division into modules or units is only a division by logical function, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
所述集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本发明实现上述实施例方法中的全部或部分流程，也可以通过计算机程序来指令相关的硬件来完成，所述的计算机程序可存储于一计算机可读存储介质中，该计算机程序在被处理器执行时，可实现上述各个方法实施例的步骤。其中，所述计算机程序包括计算机程序代码，所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括：能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是，所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减，例如在某些司法管辖区，根据立法和专利实践，计算机可读介质不包括电载波信号和电信信号。If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention implements all or part of the processes in the above method embodiments, which may also be completed by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately added or deleted according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
当元件被表述“固定在”另一个元件，它可以直接在另一个元件上、或者其间可以存在一个或多个居中的元件、或与另一个元件预制成一体。当一个元件被表述“连接”另一个元件，它可以是直接连接到另一个元件、或者其间可以存在一个或多个居中的元件。本说明书使用的术语“垂直的”、“水平的”、“左”、“右”、“内”、“外”以及类似的表述只是为了说明的目的。When an element is described as being "fixed to" another element, it may be directly on the other element, one or more intervening elements may be present in between, or it may be formed integrally with the other element. When an element is described as being "connected to" another element, it may be directly connected to the other element, or one or more intervening elements may be present in between. The terms "vertical", "horizontal", "left", "right", "inner", "outer" and similar expressions used in this specification are for illustrative purposes only.
除非另有定义，本说明书所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本说明书中在本发明的说明书中所使用的术语只是为了描述具体的实施方式的目的，不是用于限制本发明。Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terminology used in the description of the present invention is only for the purpose of describing specific embodiments and is not intended to limit the present invention.
本文术语中“和/或”，仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如：A和/或B，可以表示单独存在A，同时存在A和B，单独存在B这三种情况。另外，本文中字符“/”，一般表示前后关联对象是一种“或”的关系。The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
本发明的权利要求书和说明书及上述附图中的术语“第一”、“第二”、“第三”、“S110”、“S120”、“S130”等等（如果存在）是用来区别类似的对象，而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换，以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外，术语“包括”“具有”以及他们的任何变形，意图在于覆盖不排他的包含。例如：包括了一系列步骤或者模块的过程、方法或系统不必限于清楚地列出的那些步骤或者模块，而是包括没有清楚地列出的或对于这些过程、方法、系统、产品或机器人固有的其它步骤或模块。The terms "first", "second", "third", "S110", "S120", "S130", etc. (if present) in the claims, the description, and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, or system that includes a series of steps or modules is not necessarily limited to those steps or modules clearly listed, but may include other steps or modules that are not clearly listed or that are inherent to the process, method, system, product, or robot.
需要说明的是,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的结构和模块并不一定是本发明所必须的。It should be noted that those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the involved structures and modules are not necessarily required by the present invention.
以上对本发明实施例所提供的图像匹配、3D成像及姿态识别方法、装置及系统进行了详细介绍，但以上实施例的说明只是用于帮助理解本发明的方法及其核心思想，不应理解为对本发明的限制。本技术领域的技术人员，依据本发明的思想，在本发明揭露的技术范围内，可轻易想到的变化或替换，都应涵盖在本发明的保护范围之内。The image matching, 3D imaging, and pose recognition methods, devices, and systems provided by the embodiments of the present invention have been described in detail above, but the descriptions of the above embodiments are only intended to help understand the method of the present invention and its core idea and should not be understood as limiting the present invention. Any changes or substitutions that those skilled in the art can easily conceive of, based on the idea of the present invention and within the technical scope disclosed by the present invention, shall fall within the protection scope of the present invention.

Claims (19)

  1. 一种图像匹配方法,其特征在于,所述图像匹配方法包括:An image matching method, characterized in that the image matching method includes:
    获取初始图像;Get the initial image;
    检测所述初始图像中的形变区域和/或非形变区域，生成针对所述形变区域的匹配结果和/或生成针对所述非形变区域的匹配结果。Detecting a deformed area and/or a non-deformed area in the initial image, and generating a matching result for the deformed area and/or a matching result for the non-deformed area.
  2. 根据权利要求1所述的图像匹配方法,其特征在于,所述检测所述初始图像中的形变区域和/或非形变区域包括:The image matching method according to claim 1, wherein the detecting the deformed area and/or the non-deformed area in the initial image comprises:
    获取所述初始图像中的当前图像块;Acquiring the current image block in the initial image;
    基于所述当前图像块的图像特征，检测所述当前图像块为当前形变图像块或当前非形变图像块；其中，所述非形变区域为所述当前非形变图像块的集合；所述形变区域为所述当前形变图像块的集合。Detecting, based on image characteristics of the current image block, whether the current image block is a current deformed image block or a current non-deformed image block; wherein the non-deformed area is a set of current non-deformed image blocks, and the deformed area is a set of current deformed image blocks.
  3. 根据权利要求1所述的图像匹配方法,其特征在于,所述检测所述初始图像中的形变区域和/或非形变区域包括:The image matching method according to claim 1, wherein the detecting the deformed area and/or the non-deformed area in the initial image comprises:
    获取所述初始图像中的当前图像块;Acquiring the current image block in the initial image;
    匹配所述当前图像块,判断是否能够生成匹配结果;Match the current image block to determine whether a matching result can be generated;
    若否，所述当前图像块为当前形变图像块；若是，所述当前图像块为当前非形变图像块；其中，所述非形变区域为所述当前非形变图像块的集合；所述形变区域为所述当前形变图像块的集合。If not, the current image block is a current deformed image block; if yes, the current image block is a current non-deformed image block; wherein the non-deformed area is a set of current non-deformed image blocks, and the deformed area is a set of current deformed image blocks.
  4. 根据权利要求1或2或3所述的图像匹配方法,其特征在于,所述生成针对所述形变区域的匹配结果包括:The image matching method according to claim 1 or 2 or 3, wherein the generating a matching result for the deformed area comprises:
    获取所述形变区域;Obtaining the deformation area;
    转换所述形变区域为基准图像;Converting the deformed area into a reference image;
    针对所述基准图像进行匹配,得到针对所述形变区域的匹配结果。Matching is performed on the reference image to obtain a matching result for the deformed area.
  5. 根据权利要求4所述的图像匹配方法,其特征在于,所述转换所述形变区域为基准图像包括:The image matching method according to claim 4, wherein said converting said deformed area into a reference image comprises:
    获取所述形变区域中的当前形变图像块;Acquiring the current deformed image block in the deformed area;
    提取所述当前形变图像块的每行中的最值像素点；Extracting the extremum pixel in each row of the current deformed image block;
    拟合所述每行中的所述最值像素点，得到拟合线；Fitting the extremum pixels of each row to obtain a fitted line;
    计算所述拟合线相对基准线的图像形变量；Calculating an image deformation amount of the fitted line relative to a reference line;
    基于所述图像形变量，转换所述当前形变图像块为当前基准图像块。Converting the current deformed image block into a current reference image block based on the image deformation amount.
  6. 根据权利要求4所述的图像匹配方法,其特征在于,所述转换所述形变区域为基准图像包括:The image matching method according to claim 4, wherein said converting said deformed area into a reference image comprises:
    基于傅里叶变换转换所述形变区域为基准图像。The deformed area is converted into a reference image based on Fourier transform.
  7. 根据权利要求4所述的图像匹配方法,其特征在于,所述转换所述形变区域为基准图像包括:The image matching method according to claim 4, wherein said converting said deformed area into a reference image comprises:
    获取所述形变区域中的当前形变图像块;Acquiring the current deformed image block in the deformed area;
    对所述当前形变图像块进行拟合,得到拟合函数;Fitting the current deformed image block to obtain a fitting function;
    基于所述拟合函数,转换所述当前形变图像块为当前基准图像块。Based on the fitting function, the current deformed image block is converted into a current reference image block.
  8. 根据权利要求4所述的图像匹配方法,其特征在于,所述转换所述形变区域为基准图像包括:The image matching method according to claim 4, wherein said converting said deformed area into a reference image comprises:
    获取所述形变区域的当前形变图像块;Acquiring the current deformed image block of the deformed area;
    基于所述当前形变图像块，生成目标物的形变量；Generating a deformation amount of the target object based on the current deformed image block;
    基于所述形变量，转换所述当前形变图像块为当前基准图像块。Converting the current deformed image block into a current reference image block based on the deformation amount.
  9. 根据权利要求1或2或3所述的图像匹配方法,其特征在于,所述生成针对所述形变区域的匹配结果包括:The image matching method according to claim 1 or 2 or 3, wherein the generating a matching result for the deformed area comprises:
    依次生成所述形变区域发生单位形变量后的图像组;Sequentially generating image groups of the deformed area after unit deformation occurs;
    针对所述图像组进行匹配,得到匹配结果。Matching is performed on the image group to obtain a matching result.
  10. 根据权利要求1或2或3所述的图像匹配方法,其特征在于,所述生成针对所述形变区域的匹配结果包括:The image matching method according to claim 1 or 2 or 3, wherein the generating a matching result for the deformed area comprises:
    获取预生成的所述形变区域依次发生单位形变量后的模板图像组;Acquiring a pre-generated group of template images in which the deformation area sequentially undergoes unit deformation;
    针对所述模板图像组进行匹配,得到匹配结果。Matching is performed on the template image group to obtain a matching result.
  11. 根据权利要求1或2或3所述的图像匹配方法，其特征在于，所述初始图像为通过图像传感器采集的向目标物体投射图像后的图像；其中，被投射的所述图像呈周期性渐变规律且在一定空间范围内具有唯一性，或在一定空间范围内具有唯一性。The image matching method according to claim 1, 2 or 3, wherein the initial image is an image, collected by an image sensor, of a target object onto which an image has been projected; wherein the projected image follows a periodic gradual-change pattern and is unique within a certain spatial range, or is unique within a certain spatial range.
  12. 一种姿态识别方法,其特征在于,所述姿态识别方法包括:A gesture recognition method, characterized in that the gesture recognition method includes:
    权利要求1-11任一项所述图像匹配方法;及The image matching method according to any one of claims 1-11; and
    根据所述匹配结果,生成目标物体的姿态信息。According to the matching result, the posture information of the target object is generated.
  13. 一种3D成像方法,其特征在于,所述3D成像方法包括:A 3D imaging method, characterized in that, the 3D imaging method includes:
    权利要求1-11任一项所述图像匹配方法;及The image matching method according to any one of claims 1-11; and
    根据所述匹配结果,生成目标物体的3D图像。According to the matching result, a 3D image of the target object is generated.
  14. 一种图像匹配装置,其特征在于,所述图像匹配装置包括:An image matching device, characterized in that the image matching device includes:
    图像获取模块,获取初始图像;Image acquisition module to acquire the initial image;
    图像匹配模块,用于检测所述初始图像中的形变区域和/或非形变区域,生成针对所述形变区域的匹配结果和/或生成针对所述非形变区域的匹配结果。The image matching module is used to detect the deformed area and/or non-deformed area in the initial image, generate a matching result for the deformed area and/or generate a matching result for the non-deformed area.
  15. 一种姿态识别装置,其特征在于,所述姿态识别装置包括:A gesture recognition device, characterized in that the gesture recognition device includes:
    权利要求14所述图像匹配装置;及The image matching device of claim 14; and
    姿态生成模块,用于根据所述匹配结果,生成目标物体的姿态信息。The posture generation module is used to generate posture information of the target object according to the matching result.
  16. 一种3D成像装置,其特征在于,所述3D成像装置包括:A 3D imaging device, characterized in that, the 3D imaging device comprises:
    权利要求14所述图像匹配装置;及The image matching device of claim 14; and
    图像生成模块,用于根据所述匹配结果,生成目标物体的3D图像。The image generation module is used to generate a 3D image of the target object according to the matching result.
  17. 一种系统,其特征在于,所述系统包括:图像投射器、图像传感器和控制单元;A system, characterized in that the system includes: an image projector, an image sensor, and a control unit;
    所述图像投射器,用于向目标物体投射图像;The image projector is used to project an image to a target object;
    所述图像传感器,用于采集被投射所述图像后的所述目标物体的初始图像;The image sensor is used to collect the initial image of the target object after the image is projected;
    所述控制单元,用于实现权利要求1-11任一项所述的图像匹配方法;权利要求12所述的姿态识别方法;和/或权利要求13所述的3D成像方法的步骤。The control unit is configured to implement the image matching method according to any one of claims 1-11; the gesture recognition method according to claim 12; and/or the steps of the 3D imaging method according to claim 13.
  18. 一种计算机设备，所述计算机设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序，其特征在于，所述处理器执行所述计算机程序时实现权利要求1-11任一项所述的图像匹配方法；权利要求12所述的姿态识别方法；和/或权利要求13所述的3D成像方法的步骤。A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image matching method according to any one of claims 1-11, the gesture recognition method according to claim 12, and/or the 3D imaging method according to claim 13.
  19. 一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，其特征在于，所述计算机程序被处理器执行时实现权利要求1-11任一项所述的图像匹配方法；权利要求12所述的姿态识别方法；和/或权利要求13所述的3D成像方法的步骤。A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the image matching method according to any one of claims 1-11, the gesture recognition method according to claim 12, and/or the 3D imaging method according to claim 13.
PCT/CN2020/115736 2019-09-27 2020-09-17 Image matching, 3d imaging and pose recognition method, device, and system WO2021057582A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910926826.3 2019-09-27
CN201910926826.3A CN112581512A (en) 2019-09-27 2019-09-27 Image matching, 3D imaging and posture recognition method, device and system

Publications (1)

Publication Number Publication Date
WO2021057582A1 true WO2021057582A1 (en) 2021-04-01

Family

ID=75110514

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115736 WO2021057582A1 (en) 2019-09-27 2020-09-17 Image matching, 3d imaging and pose recognition method, device, and system

Country Status (2)

Country Link
CN (1) CN112581512A (en)
WO (1) WO2021057582A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08305854A (en) * 1995-05-01 1996-11-22 Nippon Telegr & Teleph Corp <Ntt> Projection parameter extraction processing method
CN1489382A (en) * 2002-07-23 2004-04-14 日本电气视象技术株式会社 Projecting apparatus
CN102393398A (en) * 2002-09-30 2012-03-28 应用材料以色列公司 Illumination system for optical inspection
CN108305218A (en) * 2017-12-29 2018-07-20 努比亚技术有限公司 Panoramic picture processing method, terminal and computer readable storage medium
CN108391145A (en) * 2017-02-20 2018-08-10 南安市耀森智能设备有限公司 A kind of robot
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2787453B2 (en) * 1988-10-25 1998-08-20 大日本印刷株式会社 How to determine the pattern allocation area of the pattern film
CN101198964A (en) * 2005-01-07 2008-06-11 格斯图尔泰克股份有限公司 Creating 3D images of objects by illuminating with infrared patterns
JP6251489B2 (en) * 2013-03-28 2017-12-20 株式会社 資生堂 Image analysis apparatus, image analysis method, and image analysis program
AU2015202937A1 (en) * 2015-05-29 2016-12-15 Canon Kabushiki Kaisha Systems and methods for registration of images
CN105956997B (en) * 2016-04-27 2019-07-05 腾讯科技(深圳)有限公司 The method and apparatus of image deformation processing
WO2018050223A1 (en) * 2016-09-14 2018-03-22 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Pattern detection
CN108133492B (en) * 2016-12-01 2022-04-26 京东方科技集团股份有限公司 Image matching method, device and system
CN109427046B (en) * 2017-08-30 2021-07-20 深圳中科飞测科技股份有限公司 Distortion correction method and device for three-dimensional measurement and computer readable storage medium


Also Published As

Publication number Publication date
CN112581512A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
EP3614340B1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
US10872227B2 (en) Automatic object recognition method and system thereof, shopping device and storage medium
US10008005B2 (en) Measurement system and method for measuring multi-dimensions
US9135710B2 (en) Depth map stereo correspondence techniques
US10726580B2 (en) Method and device for calibration
US10810718B2 (en) Method and device for three-dimensional reconstruction
US10455219B2 (en) Stereo correspondence and depth sensors
CN105335748B (en) Image characteristic extracting method and system
EP3135033B1 (en) Structured stereo
KR20200044676A (en) Method and apparatus for active depth sensing and calibration method thereof
WO2012096747A1 (en) Forming range maps using periodic illumination patterns
US11468609B2 (en) Methods and apparatus for generating point cloud histograms
CN113362445B (en) Method and device for reconstructing object based on point cloud data
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN109375833B (en) Touch instruction generation method and device
Wei et al. An accurate stereo matching method based on color segments and edges
Yin et al. Estimation of the fundamental matrix from uncalibrated stereo hand images for 3D hand gesture recognition
CN112197708B (en) Measuring method and device, electronic device and storage medium
JP7298687B2 (en) Object recognition device and object recognition method
CN116912417A (en) Texture mapping method, device, equipment and storage medium based on three-dimensional reconstruction of human face
WO2021057582A1 (en) Image matching, 3d imaging and pose recognition method, device, and system
JP2021021577A (en) Image processing device and image processing method
JP2017162251A (en) Three-dimensional noncontact input device
CN106651940B (en) Special processor for 3D interaction
Hu et al. Active shape reconstruction using a novel visuotactile palm sensor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20869735

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20869735

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.09.2022)
