WO2019119453A1 - Image matching method and vision system - Google Patents

Image matching method and vision system

Info

Publication number
WO2019119453A1
WO2019119453A1 (PCT/CN2017/118121)
Authority
WO
WIPO (PCT)
Prior art keywords
image, real-time, downsampled, matching
Prior art date
Application number
PCT/CN2017/118121
Other languages
English (en)
French (fr)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司
Priority to CN201780035380.4A (CN109313708B)
Priority to PCT/CN2017/118121 (WO2019119453A1)
Publication of WO2019119453A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • The present invention relates to the field of industrial vision, and more particularly to an image matching method and a vision system.
  • Image matching is a method of identifying corresponding points between two or more images by analyzing images of an object. Image matching is widely used in computing, from network security to industrial production. At present, in the production process of industrial assembly lines, before processing a produced object, an industrial robot needs to identify the target to be processed and obtain its position.
  • A commonly adopted method is to obtain a template image of the target to be processed, acquire a real-time image of a predetermined scene, match the real-time image against the template image, and identify the target and its machining position from the matching result, so as to process the target. More accurate identification of the target and a more precise machining position require a more accurate match; it is therefore necessary to obtain a clearer image of the target to be processed.
  • However, the target to be processed does not necessarily exist in, or lie within, the preset area, and when a high-resolution image is matched directly, the image matching speed drops sharply; the industrial robot then cannot quickly identify the target to be processed and its machining position, which reduces machining efficiency.
  • The technical problem to be solved by the present invention is to provide an image matching method and a vision system that can improve the image matching speed and thereby improve processing efficiency.
  • A technical solution adopted by the present invention is an image matching method including: collecting a real-time image of a predetermined scene; performing downsampling on the real-time image and a preset template image to obtain downsampled images of the real-time image and the template image; matching the downsampled images of the real-time image and the template image to obtain preliminary matching information; and determining, from the preliminary matching information, whether a preset target appears in the real-time image, and if so, matching the real-time image with the template image to obtain the image matching result.
  • Another technical solution adopted by the present invention is an image matching method including: collecting a real-time image of a predetermined scene; performing downsampling on the real-time image and a preset template image to obtain downsampled images of the real-time image and the template image; matching the downsampled images to obtain preliminary matching information; acquiring, from the preliminary matching information, the location of a target in the real-time image that matches the template image; and determining whether that location is within the preset area, and if so, matching the real-time image corresponding to the location with the template image to obtain the image matching result.
  • A vision system including an image collector, a memory, and a processor, the processor being coupled to the image collector and the memory, respectively;
  • the image collector is configured to collect a real-time image and send the collected real-time image to the processor;
  • the memory is configured to store the template image, the real-time image, program data, and data processed by the processor;
  • the processor performs the following steps: controlling the image collector to acquire a real-time image of a predetermined scene; performing downsampling on the real-time image and the preset template image to obtain downsampled images of the real-time image and the template image; matching the downsampled images to obtain preliminary matching information; and determining, from the preliminary matching information, whether a preset target appears in the real-time image, and if so, matching the real-time image with the template image to obtain the image matching result.
  • A vision system including an image collector, a memory, and a processor, the processor being coupled to the image collector and the memory, respectively;
  • the image collector is configured to collect a real-time image and send the collected real-time image to the processor;
  • the memory is configured to store the template image, the real-time image, program data, and data processed by the processor;
  • the processor performs the following steps: controlling the image collector to acquire a real-time image of a predetermined scene; performing downsampling on the real-time image and the preset template image to obtain downsampled images of the real-time image and the template image; matching the downsampled images to obtain preliminary matching information; acquiring, from the preliminary matching information, the location of a target in the real-time image that matches the template image; and determining whether that location is within the preset area, and if so, matching the real-time image corresponding to the location with the template image to obtain the image matching result.
  • The present invention downsamples the template image and the real-time image, first obtains preliminary matching information by matching the downsampled images, and then judges from that information whether the preset target appears in the real-time image; if so, the exact matching result is obtained by matching the real-time image with the template image, which improves the computation speed of image matching.
  • The invention can thus quickly obtain the matching result of the real-time image and the template image, improving the speed of industrial recognition and, further, processing efficiency.
  • FIG. 1 is a schematic flow chart of an embodiment of an image matching method according to the present invention.
  • FIG. 2 is a schematic diagram of an embodiment of an image matching method according to the present invention.
  • FIG. 3 is a schematic structural view of an embodiment of a vision system of the present invention.
  • FIG. 4 is a schematic flow chart of another embodiment of an image matching method according to the present invention.
  • FIG. 5 is a block diagram showing another embodiment of the vision system of the present invention.
  • Referring to FIG. 1 and FIG. 2 together, FIG. 1 is a schematic flowchart of an image matching method according to an embodiment of the present invention, and FIG. 2 is a schematic diagram of the image matching method according to the embodiment.
  • The image matching method of this embodiment includes the following steps:
  • S101 Collect a real-time image of a predetermined scene.
  • The target to be processed is set as a preset target, and before the industrial robot processes the preset target in the predetermined scene, a real-time image of the predetermined scene needs to be acquired through the image collector.
  • The reference image of the preset target may be used as the template image, and the image acquired in the predetermined scene may be used as the real-time image.
  • Alternatively, an image of the preset target acquired by the image collector when the target is at its initial position may be set as the template image, and the image subsequently acquired by the image collector in the predetermined scene during processing may be used as the real-time image; this is not limited here.
  • In this embodiment, the reference image of the preset target is preset as the template image 21; the industrial robot then acquires the real-time image 20 of the predetermined scene through the image collector, and one of the outer edges of the preset target in the template image 21 is set as the outer edge A to be processed.
  • The image collector is an industrial camera, an intelligent traffic camera, a smart camera, a 3D smart sensor, or another device capable of acquiring a real-time image of a predetermined scene.
  • The resolution of the real-time image 20 and that of the preset template image 21 may be the same or different.
  • S102 Perform a downsampling process on the real-time image and the preset template image, and obtain a downsampled image of the real-time image and the template image.
  • Downsampling, also known as subsampling or image reduction, decreases the number of samples in an image. For an image with a resolution of N*M and a downsampling factor k, one point is taken every k points in each row and column of the original image to form a new image.
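The downsampling rule above (keep one point every k points per row and column) can be sketched in Python with NumPy; the function names `downsample` and `build_pyramid` are illustrative, not taken from the patent:

```python
import numpy as np

def downsample(image: np.ndarray, k: int) -> np.ndarray:
    """Keep one point every k points in each row and column,
    as described above for an N*M image with downsampling factor k."""
    return image[::k, ::k]

def build_pyramid(image: np.ndarray, levels: int) -> list:
    """Successive factor-2 downsamplings, highest resolution first."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1], 2))
    return pyramid
```

For example, an 8*8 image downsampled with k = 2 yields a 4*4 image; three levels give 8*8, 4*4, and 2*2 images.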
  • The target in the acquired real-time image 20 of the predetermined scene is not necessarily within the preset area, and the industrial robot can process the preset target only when it is located in the processing area. If the original real-time image 20 and the template image 21 are matched directly, the computation time increases, computing resources may be wasted, and the matching result cannot be obtained quickly.
  • Therefore, the real-time image 20 and the template image 21 may first be downsampled, and the downsampled images matched to obtain preliminary matching information.
  • In this embodiment, the image collector is an industrial camera. After the real-time image 20 is acquired by the industrial camera, the real-time image 20 and the template image 21 are subjected to different levels of downsampling, yielding multiple downsampled images of the real-time image 20 and the template image 21 at different resolutions.
  • The number of downsampled images of the real-time image 20 is the same as that of the template image 21; in this embodiment, each is 4.
  • Optionally, the number of downsampled images of the real-time image 20 and the template image 21 may be set according to the user's needs, which is not limited here.
  • S103 Match the downsampled image of the real-time image and the template image to obtain preliminary matching information.
  • Each downsampled image of the real-time image 20 is mapped to the downsampled image of the template image 21 with the closest resolution, and the downsampled images of the real-time image 20 are arranged hierarchically in descending order of resolution.
  • The real-time downsampled image is matched and parsed against its mapped template downsampled image of close resolution to determine whether target position information of the set precision level is obtained.
  • For example, the downsampled image 201 of the real-time image 20 is mapped to the downsampled image 211 of the template image 21, and the downsampled image 202 of the real-time image 20 is mapped to the downsampled image 212 of the template image 21, the pairs being arranged hierarchically.
  • The downsampled image 201 of the real-time image 20 with the lowest resolution in its downsampled image sequence is matched and parsed against the downsampled image 211 of the template image 21 with the lowest resolution in its sequence, and it is judged whether target position information of the set precision level is acquired. If it can be acquired, that target position information is used as the preliminary matching information.
  • After the downsampled images of the real-time image 20 and the template image 21 are acquired, each downsampled image of the real-time image 20 is mapped to the downsampled image of the template image 21 with the closest resolution, and the downsampled images of the real-time image 20 are arranged hierarchically in descending order of resolution.
  • The real-time downsampled image is matched and parsed against its mapped template downsampled image to determine whether target position information of the set precision level is obtained.
  • For example, the downsampled image 201 of the real-time image 20 is mapped to the downsampled image 211 of the template image 21, the downsampled image 202 is mapped to the downsampled image 212, and the downsampled image 203 of the real-time image 20 is mapped to the downsampled image 213 of the template image 21.
  • The downsampled image 201 of the lowest-resolution real-time image 20 in the hierarchically arranged sequence is matched and parsed against the downsampled image 211 of the template image 21 to which it is mapped, to determine whether target position information of the set precision level is acquired.
  • If not, downsampled images of the real-time image 20 at the next higher resolution and the mapped downsampled images of the template image 21 are selected in ascending order for matching analysis, until target position information of the set precision level is obtained; that information is then used as the preliminary matching information. For example, the downsampled image 202 of the real-time image 20, one level higher than the downsampled image 201, is matched and analyzed with the mapped downsampled image 212, and it is determined whether the parsed information is target position information of the set precision level.
  • If not, the downsampled image 203, one level higher in resolution than the downsampled image 202, is selected for matching analysis with the mapped downsampled image 213, until target position information of the set precision level can be parsed; that information is used as the preliminary matching information.
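The coarse-to-fine loop described above can be sketched as follows. The sum-of-absolute-differences score stands in for the unspecified edge or grayscale matcher, and all names (`match_score`, `coarse_to_fine`) and the threshold convention are illustrative assumptions:

```python
import numpy as np

def match_score(live, template, y, x):
    """Sum of absolute grayscale differences at offset (y, x); lower is
    better. A stand-in for the edge- or grayscale-based matcher."""
    h, w = template.shape
    patch = live[y:y + h, x:x + w].astype(float)
    return float(np.abs(patch - template.astype(float)).sum())

def coarse_to_fine(live_pyramid, template_pyramid, threshold):
    """live_pyramid / template_pyramid: downsampled images, coarsest first,
    each live level mapped to the template level of close resolution.
    Search each level exhaustively; if the best score reaches the set
    precision threshold, return that position as the preliminary match."""
    for live, template in zip(live_pyramid, template_pyramid):
        h, w = template.shape
        score, pos = min(
            (match_score(live, template, y, x), (y, x))
            for y in range(live.shape[0] - h + 1)
            for x in range(live.shape[1] - w + 1))
        if score <= threshold:
            return pos  # preliminary matching information at this level
    return None  # no target position of the set precision level found
```

In the patent's flow the returned coarse position then seeds the later full-resolution match rather than ending the search.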
  • S104 Acquire, according to the preliminary matching information, a location of a target in the real-time image that matches the template image.
  • The obtained preliminary matching information includes the position information of the target in the real-time image that best matches the preset target in the template image, such as the angle and coordinates of the target; through the preliminary matching information, the rough position of the target can be obtained.
  • S105 Determine whether the location of the target is within the preset area, and if yes, match the real-time image corresponding to the location with the template image to obtain a result of image matching.
  • In this embodiment, the processing area in the predetermined scene is set as the preset area, and the target in the real-time image 20 with the highest matching degree to the preset target in the template image 21 is determined to be the preset target. The industrial robot determines from the preliminary matching information whether the preset target is located in the preset area. If it is, the real-time image 20 is matched with the template image 21 according to the preliminary matching information to obtain the image matching result, that is, the precise position information of the preset target in the real-time image and the correspondence between each part of the preset target in the real-time image 20 and the preset target in the template image 21, where the precise position information includes position information such as angle and coordinates. From this result, the outer edge in the real-time image 20 that matches the outer edge A is determined to be the outer edge that can be processed, and the industrial robot can process it accordingly.
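The decision in S105 reduces to a bounds check on the preliminary target position. A minimal sketch; the rectangle encoding `(y0, x0, y1, x1)` is an assumption, since the patent does not fix a representation for the preset area:

```python
def in_preset_area(pos, area):
    """pos: (y, x) target position from the preliminary matching information.
    area: (y0, x0, y1, x1) inclusive bounds of the machining region.
    Returns True when full-resolution matching should proceed."""
    y, x = pos
    y0, x0, y1, x1 = area
    return y0 <= y <= y1 and x0 <= x <= x1
```

When this returns False, the method loops back to acquisition and coarse matching until the preset target falls within the area.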
  • If the preset target is not in the preset area, the foregoing steps are repeated and the determination is performed again until the preset target falls within the preset area; the real-time image is then matched with the template image 21 according to the preliminary matching information to obtain the image matching result, that is, the precise position information of the preset target in the real-time image and the correspondence between each part of the preset target in the real-time image 20 and the preset target in the template image 21.
  • The precise position information includes position information such as angle and coordinates; from the result, the outer edge in the real-time image 20 that matches the outer edge A is determined to be the outer edge that can be processed, and the industrial robot can process it accordingly.
  • Optionally, the method of matching the downsampled images of the real-time image and the template image, and of matching the real-time image with the template image, is edge matching or grayscale matching.
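Of the two matching methods named, grayscale matching is commonly realized as normalized cross-correlation; the sketch below makes that assumption (the patent does not prescribe NCC specifically, and the names `ncc` and `grayscale_match` are illustrative):

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a patch and the template;
    1.0 indicates a perfect grayscale match, 0.0 no correlation."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def grayscale_match(live, template):
    """Return the offset with the highest NCC score and that score."""
    h, w = template.shape
    best, best_pos = -2.0, None
    for y in range(live.shape[0] - h + 1):
        for x in range(live.shape[1] - w + 1):
            s = ncc(live[y:y + h, x:x + w], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Edge matching would instead compare edge maps (e.g. gradient magnitudes) with the same sliding-window structure.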
  • The present invention downsamples the template image and the real-time image, first obtains preliminary matching information by matching the downsampled images, and then determines from that information whether the target appears in the preset area; if so, the exact matching result is obtained by matching the real-time image with the template image, which improves the computation speed of image matching.
  • The invention can thus quickly obtain the matching result of the real-time image and the template image, improving the speed of industrial recognition and, further, processing efficiency.
  • FIG. 3 is a schematic structural diagram of an embodiment of a vision system according to the present invention.
  • The vision system of this embodiment includes an image collector 31, a processor 32, and a memory 33.
  • The image collector 31 is configured to collect real-time images and send the collected real-time images to the processor 32.
  • The memory 33 is used to store template images, real-time images, program data, and data processed by the processor 32.
  • The processor 32 is coupled to the image collector 31 and the memory 33, respectively, and performs the following steps when executing the program:
  • The processor 32 sets the target to be processed as the preset target and needs to acquire the real-time image of the predetermined scene through the image collector 31 before the preset target in the predetermined scene is processed.
  • The reference image of the preset target may be used as the template image, and the image acquired in the predetermined scene may be used as the real-time image.
  • Alternatively, an image of the preset target acquired by the image collector when the target is at its initial position may be set as the template image, and the image subsequently acquired by the image collector 31 in the predetermined scene during processing may be used as the real-time image; this is not limited here.
  • In this embodiment, the processor 32 presets the reference image of the preset target as the template image 21; the processor 32 then acquires the real-time image 20 of the predetermined scene through the image collector 31, and one of the outer edges of the preset target in the template image 21 is set as the outer edge A that can be processed.
  • The image collector 31 is an industrial camera, an intelligent traffic camera, a smart camera, a 3D smart sensor, or another device that can collect real-time images of the predetermined scene.
  • The resolution of the real-time image 20 and that of the preset template image 21 may be the same or different.
  • Downsampling, also known as subsampling or image reduction, decreases the number of samples in an image. For an image with a resolution of N*M and a downsampling factor k, one point is taken every k points in each row and column of the original image to form a new image.
  • The processor 32 can control the industrial robot to process the preset target when the preset target is located in the processing area. If the processor 32 matches the original real-time image 20 and the template image 21 directly, the computation time increases, computing resources may be wasted, and the matching result cannot be obtained quickly.
  • Therefore, the processor 32 may downsample the real-time image 20 and the template image 21, and match the downsampled images to obtain preliminary matching information.
  • In this embodiment, the image collector 31 is an industrial camera. After the real-time image 20 is acquired by the industrial camera, the real-time image 20 and the template image 21 are subjected to different levels of downsampling, yielding multiple downsampled images of the real-time image 20 and the template image 21 at different resolutions.
  • The number of downsampled images of the real-time image 20 is the same as that of the template image 21; in this embodiment, each is 4.
  • Optionally, the number of downsampled images of the real-time image 20 and the template image 21 may be set according to the user's needs, which is not limited here.
  • The processor 32 maps each downsampled image of the real-time image 20 to the downsampled image of the template image 21 with the closest resolution, and arranges the downsampled images of the real-time image 20 hierarchically in descending order of resolution.
  • The real-time downsampled image is matched and parsed against its mapped template downsampled image of close resolution to determine whether target position information of the set precision level is obtained.
  • For example, the processor 32 maps the downsampled image 201 of the real-time image 20 to the downsampled image 211 of the template image 21, and the downsampled image 202 to the downsampled image 212.
  • The downsampled image 201 of the lowest-resolution real-time image 20 in the hierarchically arranged sequence is matched and parsed against the downsampled image 211 of the template image 21 with the lowest resolution in its sequence, and it is judged whether target position information of the set precision level is acquired. If it can be acquired, the processor 32 uses that target position information as the preliminary matching information.
  • Alternatively, the processor 32 maps each downsampled image of the real-time image 20 to the downsampled image of the template image 21 with the closest resolution, and arranges the downsampled images of the real-time image 20 hierarchically in descending order of resolution.
  • The real-time downsampled image is matched and parsed against its mapped template downsampled image to determine whether target position information of the set precision level is obtained.
  • For example, the processor 32 maps the downsampled image 201 of the real-time image 20 to the downsampled image 211 of the template image 21, the downsampled image 202 to the downsampled image 212, and the downsampled image 203 of the real-time image 20 to the downsampled image 213 of the template image 21.
  • The downsampled image 201 of the lowest-resolution real-time image 20 in the hierarchically arranged sequence is matched and parsed against the downsampled image 211 of the template image 21 to which it is mapped, to determine whether target position information of the set precision level is acquired.
  • If not, the processor 32 selects downsampled images of the real-time image 20 at the next higher resolution and the mapped downsampled images of the template image 21 in ascending order for matching analysis, until target position information of the set precision level is obtained; that information is used as the preliminary matching information.
  • For example, the processor 32 performs matching analysis on the downsampled image 202 of the real-time image 20, one level higher than the downsampled image 201, and the mapped downsampled image 212, and determines whether the parsed information is target position information of the set precision level. If not, it selects the downsampled image 203, one level higher in resolution than the downsampled image 202, for matching analysis with the mapped downsampled image 213, until target position information of the set precision level can be parsed; that information is used as the preliminary matching information.
  • The preliminary matching information acquired by the processor 32 includes the position information of the target in the real-time image that best matches the preset target in the template image, such as the angle and coordinates of the target; through the preliminary matching information, the rough position of the target can be obtained.
  • The processor 32 sets the processing area in the predetermined scene as the preset area and determines the target in the real-time image 20 with the highest matching degree to the preset target in the template image 21 to be the preset target. The processor 32 determines from the preliminary matching information whether the preset target is located in the preset area; if it is, the real-time image 20 is matched with the template image 21 to obtain the image matching result, that is, the precise position information of the preset target in the real-time image.
  • If the preset target is not in the preset area, the foregoing steps are repeated and the determination is performed again until the preset target falls within the preset area; the real-time image is then matched with the template image 21 to obtain the image matching result, that is, the precise position information of the preset target in the real-time image and the correspondence between each part of the preset target in the real-time image 20 and the preset target in the template image 21.
  • The precise position information includes position information such as angle and coordinates, from which the outer edge in the real-time image 20 that matches the outer edge A is determined to be the outer edge that can be processed, and the processor 32 can accordingly control the industrial robot to process that outer edge.
  • Optionally, the method of matching the downsampled images of the real-time image and the template image, and of matching the real-time image with the template image, is edge matching or grayscale matching.
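The processor steps above (downsample both images, coarse-match the downsampled pair, check the preliminary position against the preset area, then match at full resolution) can be tied together in one sketch. The helper names, the SAD score, the two-level pyramid, and the rectangle encoding are all illustrative assumptions, not details fixed by the patent:

```python
import numpy as np

def _sad(live, tmpl, y, x):
    """Sum of absolute grayscale differences at offset (y, x); lower is better."""
    h, w = tmpl.shape
    return float(np.abs(live[y:y + h, x:x + w].astype(float)
                        - tmpl.astype(float)).sum())

def _best_offset(live, tmpl):
    """Exhaustive search: (best_score, (y, x)) over all offsets."""
    h, w = tmpl.shape
    return min((_sad(live, tmpl, y, x), (y, x))
               for y in range(live.shape[0] - h + 1)
               for x in range(live.shape[1] - w + 1))

def match_pipeline(live, template, preset_area, levels=2, threshold=0.0):
    """Downsample, coarse-match, check the preset area, then fine-match."""
    k = 2 ** (levels - 1)
    score, (cy, cx) = _best_offset(live[::k, ::k], template[::k, ::k])
    if score > threshold:
        return None                        # no preset target in the live image
    y0, x0, y1, x1 = preset_area
    y, x = cy * k, cx * k                  # scale coarse position back up
    if not (y0 <= y <= y1 and x0 <= x <= x1):
        return None                        # target outside the machining region
    _, pos = _best_offset(live, template)  # precise full-resolution match
    return pos
```

Note that a position found at the coarse level must be multiplied by the downsampling factor k to land back in full-resolution coordinates before the area check.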
  • The beneficial effects of the present invention are as follows: different from the prior art, the present invention downsamples the template image and the real-time image, first obtains preliminary matching information by matching the downsampled images, and then determines from that information whether the target appears in the preset area; if so, the exact matching result is obtained by matching the real-time image with the template image, which improves the computation speed of image matching.
  • The invention can thus quickly obtain the matching result of the real-time image and the template image, improving the speed of industrial recognition and, further, processing efficiency.
  • FIG. 4 is a schematic flow chart of another embodiment of an image matching method according to the present invention.
  • The image matching method of this embodiment includes the following steps:
  • S401 Collect a real-time image of a predetermined scene.
  • The real-time image of the predetermined scene needs to be acquired through the image collector.
  • The reference image of the preset target may be used as the template image, and the image acquired in the predetermined scene may be used as the real-time image.
  • Alternatively, an image of the preset target acquired by the image collector when the target is at its initial position may be set as the template image, and the image subsequently acquired by the image collector in the predetermined scene during processing may be used as the real-time image; this is not limited here.
  • In this embodiment, the reference image of the preset target is preset as the template image 21; the industrial robot then acquires the real-time image 20 of the predetermined scene through the image collector, and one of the outer edges of the preset target in the template image 21 is set as the outer edge A to be processed.
  • The image collector is an industrial camera, an intelligent traffic camera, a smart camera, a 3D smart sensor, or another device capable of acquiring a real-time image of a predetermined scene.
  • The resolution of the real-time image 20 and that of the preset template image 21 may be the same or different.
  • S402 Perform a downsampling process on the real-time image and the preset template image, and obtain a downsampled image of the real-time image and the template image.
  • Downsampling, also known as subsampling, reduces the number of sample points in an image. For an image of resolution N*M with a downsampling factor of k, one point is taken every k points along each row and column of the original image to form a new image.
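The point-skipping scheme above can be sketched in a few lines (a minimal illustration assuming a NumPy array image and a stride of k; the function name is not part of the patent):

```python
import numpy as np

def downsample(image: np.ndarray, k: int) -> np.ndarray:
    """Keep one sample every k points along each row and each column."""
    return image[::k, ::k]

# A 6*6 image downsampled with factor k=2 becomes a 3*3 image.
img = np.arange(36).reshape(6, 6)
print(downsample(img, 2).shape)  # (3, 3)
```

Interpreting "one point every k points" as a stride of exactly k is an assumption; practical systems often average k*k blocks or low-pass filter first to reduce aliasing.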
  • Because the resolutions of the acquired real-time image 20 and template image 21 are large, matching the original images directly would increase the computation time and could waste computing resources; the real-time image 20 and the template image 21 are therefore downsampled, and the downsampled images are matched to obtain preliminary matching information.
  • In a specific implementation scenario, the image collector is an industrial camera. After the real-time image 20 is acquired by the industrial camera, the real-time image 20 and the template image 21 are downsampled at different levels to obtain, for each, multiple downsampled images of different resolutions. The number of downsampled images of the real-time image 20 is the same as the number of downsampled images of the template image 21.
  • the number of downsampled images of the real-time image 20 and the template image 21 may also be set according to the user's own needs, which is not limited herein.
  • S403 Match the downsampled image of the real-time image and the template image to obtain preliminary matching information.
  • In a specific implementation scenario, after the downsampled images of the real-time image 20 and the template image 21 are obtained, each downsampled image of the real-time image 20 is mapped to the downsampled image of the template image 21 with the closest resolution, and the downsampled images of the real-time image 20 are arranged hierarchically from lowest to highest resolution.
  • the real-time downsampled image with close resolution is matched and parsed with the mapped template downsampled image to determine whether the target position information of the set precision level is obtained.
  • For example, the downsampled image 201 of the real-time image 20 is mapped to the downsampled image 211 of the template image 21, and the downsampled image 202 of the real-time image 20 is mapped to the downsampled image 212 of the template image 21. The downsampled image 201 with the lowest resolution in the hierarchically arranged downsampled-image sequence of the real-time image 20 is matched and parsed against the downsampled image 211 with the lowest resolution in the downsampled-image sequence of the template image 21, and it is judged whether target position information of the set precision level is acquired. If such target position information can be acquired, it is used as the preliminary matching information.
  • In another specific implementation scenario, after the downsampled images of the real-time image 20 and the template image 21 are acquired, each downsampled image of the real-time image 20 is mapped to the downsampled image of the template image 21 with the closest resolution, and the downsampled images of the real-time image 20 are arranged hierarchically from lowest to highest resolution.
  • the real-time downsampled image with close resolution is matched and parsed with the mapped template downsampled image to determine whether the target position information of the set precision level is obtained.
  • For example, the downsampled image 201 of the real-time image 20 is mapped to the downsampled image 211 of the template image 21, the downsampled image 202 of the real-time image 20 is mapped to the downsampled image 212 of the template image 21, and the downsampled image 203 of the real-time image 20 is mapped to the downsampled image 213 of the template image 21.
  • The downsampled image 201 with the lowest resolution in the hierarchically arranged downsampled-image sequence of the real-time image 20 is matched and parsed against the downsampled image 211 of the template image 21 to which it is mapped, and it is judged whether target position information of the set precision level is acquired.
  • If the information obtained by this matching analysis is determined not to be target position information of the set precision level, downsampled images of the real-time image 20 at higher resolution levels, together with their mapped downsampled images of the template image 21, are selected in ascending order for matching analysis until target position information of the set precision level is obtained; that information is then used as the preliminary matching information. For example, the downsampled image 202 of the real-time image 20, whose resolution is one level higher than that of the downsampled image 201, is matched and analyzed against its mapped downsampled image 212, and it is judged whether the parsed information is target position information of the set precision level; if not, the downsampled image 203, one resolution level above the downsampled image 202, is matched and analyzed against its mapped downsampled image 213, until target position information of the set precision level can be parsed and used as the preliminary matching information.
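The ascending coarse-to-fine search described above can be sketched as follows. This is only an illustration under assumptions: the patent does not fix a similarity measure, so normalized cross-correlation is used here, and all function names are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match(image, template):
    """Exhaustive template match; returns (row, col) of the best NCC score."""
    H, W = image.shape
    h, w = template.shape
    scores = np.array([[ncc(image[r:r + h, c:c + w], template)
                        for c in range(W - w + 1)] for r in range(H - h + 1)])
    return np.unravel_index(scores.argmax(), scores.shape)

def coarse_to_fine_match(image, template, levels=3):
    """Match at the coarsest pyramid level first, then refine level by level."""
    pyr = [(image[::2 ** i, ::2 ** i], template[::2 ** i, ::2 ** i])
           for i in range(levels)][::-1]  # coarsest level first
    r, c = match(*pyr[0])
    for img, tpl in pyr[1:]:
        r, c = 2 * r, 2 * c              # scale the estimate up one level
        h, w = tpl.shape
        r0, c0 = max(r - 2, 0), max(c - 2, 0)
        win = img[r0:r + h + 2, c0:c + w + 2]  # small window around estimate
        dr, dc = match(win, tpl)
        r, c = r0 + dr, c0 + dc
    return r, c
```

Restricting each refinement to a small window around the scaled-up coarse estimate is what saves computation relative to matching the full-resolution images directly.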
  • In yet another specific implementation scenario, after the downsampled images of the real-time image 20 and the template image 21 are acquired, each downsampled image of the real-time image 20 is mapped to the downsampled image of the template image 21 with the closest resolution, and the downsampled images of the real-time image 20 are arranged hierarchically from lowest to highest resolution.
  • The real-time downsampled image is matched and parsed against the template downsampled image to which it is mapped, to determine whether target position information of the set precision level is obtained.
  • For example, the downsampled image 201 of the real-time image 20 is mapped to the downsampled image 211 of the template image 21, the downsampled image 202 of the real-time image 20 is mapped to the downsampled image 212 of the template image 21, and the downsampled image 203 of the real-time image 20 is mapped to the downsampled image 213 of the template image 21.
  • The downsampled image 201 with the lowest resolution in the hierarchically arranged downsampled-image sequence of the real-time image 20 is matched and parsed against the downsampled image 211 of the template image 21 to which it is mapped, and it is judged whether target position information of the set precision level is acquired.
  • If the information obtained by the matching analysis is determined not to be target position information of the set precision level, downsampled images of the real-time image 20 at resolution levels above that of the currently matched downsampled image, together with their mapped downsampled images of the template image 21, are selected in ascending order for matching analysis, and it is judged from the resulting information whether target position information of the set precision level is obtained. If all downsampled images of the real-time image 20 have been traversed in ascending order and target position information of the set precision level still cannot be obtained, the target position information obtained by the last matching analysis is used as the preliminary matching information.
  • In the above implementation scenarios, the preliminary matching information obtained is the position information of the target that has the highest degree of matching with the preset target in the template image 21, including position information such as the target's angle and coordinates.
  • S404 Determine, according to the preliminary matching information, whether a preset target appears in the real-time image, and if yes, match the real-time image with the template image to obtain a result of image matching.
  • In a specific scenario, the target position information is obtained from the preliminary matching information, and it is judged whether it is target position information of the set precision level. If it is, it is determined that the preset target appears in the real-time image 20; the target in the real-time image 20 with the highest degree of matching to the preset target in the template image 21 is determined to be the preset target, and the real-time image 20 is matched against the template image 21 according to the preliminary matching information to obtain the matching result.
  • The matching result consists of the precise position information of the preset target in the real-time image, including position information such as angle and coordinates, together with the correspondence between each part of the preset target in the real-time image 20 and the preset target in the template image 21; from this result the industrial robot can determine both the precise position of the preset target and that correspondence.
  • In another specific scenario, the target position information is obtained from the preliminary matching information, and it is judged whether it is target position information of the set precision level. If it is not, it is determined that the preset target does not appear in the real-time image; the foregoing steps are then repeated and the judgment is made again, until it is determined that the preset target appears in the real-time image. The target with the highest degree of matching to the preset target in the template image 21 is determined to be the preset target, and the real-time image 20 is then matched against the template image 21 according to the preliminary matching information to obtain the image-matching result, namely the precise position information of the preset target in the real-time image (including position information such as angle and coordinates) and the correspondence between each part of the preset target in the real-time image 20 and the preset target in the template image 21. From this result, the outer edge in the real-time image 20 that matches the outer edge A can be identified as the workable outer edge, which the industrial robot can then process.
  • In any of the above embodiments, the method used both to match the downsampled images of the real-time image and the template image and to match the real-time image with the template image is edge matching or grayscale matching.
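As a hedged illustration of the edge-matching alternative (a toy sketch only: the patent does not specify an edge detector or score, so simple finite-difference edges and an overlap count are assumed, and every name here is hypothetical; real systems typically use Sobel or Canny edges):

```python
import numpy as np

def edge_map(img, thresh=0.5):
    """Binary edge map from gradient magnitude (simple finite differences)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max() if mag.max() else np.zeros_like(mag, bool)

def edge_match(image, template):
    """Slide the template's edge map over the image's; return best (row, col)."""
    ie, te = edge_map(image), edge_map(template)
    H, W = ie.shape
    h, w = te.shape
    best, pos = -1, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = np.logical_and(ie[r:r + h, c:c + w], te).sum()
            if score > best:
                best, pos = score, (r, c)
    return pos
```

Because only edge pixels contribute to the score, this style of matching is less sensitive to uniform illumination changes than raw grayscale comparison.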
  • Unlike the prior art, the present invention downsamples both the template image and the real-time image, first obtains preliminary matching information by matching the downsampled images, and then judges from the preliminary matching information whether the target appears in the preset area; if so, an exact matching result is obtained by matching the real-time image against the template image, which increases the computation speed of image matching.
  • The invention can thus quickly obtain the matching result between the real-time image and the template image, increasing the speed of industrial recognition and, in turn, the processing efficiency.
  • FIG. 5 is a schematic structural diagram of an embodiment of a vision system according to the present invention.
  • the vision system of this embodiment includes the following devices:
  • Image collector 51 is configured to collect a real-time image of a predetermined scene and send the collected image to the processor 52.
  • the memory 53 is used to store template images, real-time images, program data, and data processed by the processor 52.
  • the processor 52 is coupled to the image collector 51 and the memory 53, respectively.
  • the processor 52 performs the following steps: controlling the image collector to acquire a real-time image of a predetermined scene; performing down-sampling processing on the real-time image and the preset template image to obtain a downsampled image of the real-time image and the template image; Matching the downsampled image of the real-time image and the template image to obtain preliminary matching information; determining whether a preset target appears in the real-time image according to the preliminary matching information, and if yes, matching the real-time image with the template image, To get the result of image matching.
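The processor's four steps can be made concrete with a minimal end-to-end sketch. Assumptions not mandated by the patent: NumPy images, stride-2 downsampling, normalized cross-correlation as the similarity measure, and a score threshold standing in for the "set precision level" test.

```python
import numpy as np

def ncc_best(image, template):
    """Best normalized-cross-correlation score and location of template in image."""
    h, w = template.shape
    t = template - template.mean()
    best, pos = -1.0, (0, 0)
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            p = image[r:r + h, c:c + w]
            p = p - p.mean()
            d = np.sqrt((p * p).sum() * (t * t).sum())
            s = (p * t).sum() / d if d else 0.0
            if s > best:
                best, pos = s, (r, c)
    return best, pos

def process_frame(live, template, k=2, score_threshold=0.9):
    """Steps S401-S404: a cheap match on downsampled images decides whether
    the expensive full-resolution match is worth running."""
    coarse_score, _ = ncc_best(live[::k, ::k], template[::k, ::k])  # S402-S403
    if coarse_score < score_threshold:   # S404: preset target not present
        return None
    return ncc_best(live, template)[1]   # S404: precise position
```

Returning early when the coarse score is low is what lets the system reject frames without the preset target cheaply, which is the core of the claimed speed-up.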
  • The image matching method performed by the vision system in operation has been described in detail above and is not repeated here.
  • Unlike the prior art, the present invention downsamples both the template image and the real-time image, first obtains preliminary matching information by matching the downsampled images, and then judges from the preliminary matching information whether the target appears in the preset area; if so, an exact matching result is obtained by matching the real-time image against the template image, which increases the computation speed of image matching.
  • The invention can thus quickly obtain the matching result between the real-time image and the template image, increasing the speed of industrial recognition and, in turn, the processing efficiency.


Abstract

The present invention discloses an image matching method and a vision system, comprising: acquiring a real-time image of a predetermined scene; downsampling the real-time image and a preset template image to obtain downsampled images of the real-time image and the template image; matching the downsampled images of the real-time image and the template image to obtain preliminary matching information; obtaining, according to the preliminary matching information, the position of the target in the real-time image that matches the template image; and judging whether the position of the target is within a preset area and, if so, matching the real-time image corresponding to that position against the template image to obtain an image-matching result. The present invention can quickly obtain the matching result between the real-time image and the template image, increasing the speed of industrial recognition and thus the processing efficiency.

Description

图像匹配方法和视觉系统
【技术领域】
本发明涉及工业视觉领域,特别是涉及图像匹配方法和视觉系统。
【背景技术】
图像匹配是基于物体图像图形的分析,在两幅或多幅图像之间识别同名点的方法。图像匹配在目前的计算机领域中运用很广,从网络安全到工业生产都有涉及。目前在工业流水线生产过程中,对生产的物体进行加工时,工业机器人需要识别待加工的目标和获取待加工目标的位置从而对其进行加工处理。通常采取的方法是获取待加工目标的模板图像和采集预定场景的实时图像,并对该实时图像和模板图像进行匹配,根据匹配结果识别待加工目标和获取加工位置从而对待加工目标进行处理,为了更准确识别待加工目标和获得待加工目标更精确的加工位置,需要得到更准确的匹配结果。因此,需要获得待加工目标更清晰的图像。但是,工业机器人采集的预定场景的实时图像中待加工目标不一定存在或位于预设区域,直接使用高分辨率的图像进行匹配时,会大大降低图像匹配的速度,进而工业机器人不能快速识别待加工目标和获取加工位置,降低了加工效率。
【发明内容】
本发明主要解决的技术问题是提供一种图像匹配方法和视觉系统,能够提高图像匹配速度,进而提高了加工效率。
为解决上述技术问题,本发明采用的一个技术方案是:提供一种图像匹配方法,包括:采集预定场景的实时图像;对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;根据所述初步匹配信息判断所述实时图像中是否出现预置目标,如果是,将所述实时图像与模板图像进行匹配,以得到图像匹配的结果。
为解决上述技术问题,本发明采用的另一个技术方案是:提供一种图像匹配方法,包括:采集预定场景的实时图像;对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;根据所述初步匹配信息获取实时图像中与所述模板图像匹配的目标的位置;判断所述目标的位置是否在预设区域内,如果是,则将所述位置对应的实时图像与模板图像进行匹配,以得到图像匹配的结果。
为解决上述技术问题,本发明采用的另一个技术方案是:提供一种视觉系统包括:图像采集器,存储器,处理器,所述处理器分别与所述图像采集器和所述存储器耦合连接;所述图像采集器用于采集实时图像,并将采集到的实时图像发送至所述处理器;所述存储器用于存储所述模板图像、所述实时图像、程序数据以及所述处理器处理的数据;所述处理器在执行所述程序时执行以下步骤:控制所述图像采集器采集预定场景的实时图像;对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;根据所述初步匹配信息判断所述实时图像中是否出现预置目标,如果是,将所述实时图像与模板图像进行匹配,以得到图像匹配的结果。
为解决上述技术问题,本发明采用的再一个技术方案是:提供一种视觉系统,包括:图像采集器,存储器,处理器,所述处理器分别与所述图像采集器和所述存储器耦合连接;所述图像采集器用于采集实时图像,并将采集到的实时图像发送至所述处理器;所述存储器用于存储所述模板图像、所述实时图像、程序数据以及所述处理器处理的数据;
所述处理器在执行所述程序时执行以下步骤:控制所述图像采集器采集预定场景的实时图像;对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;根据所述初步匹配信息获取实时图像中与所述模板图像匹配的目标的位置;判断所述目标的位置是否在预设区域内,如果是,则将所述位置对应的实时图像与模板图像进行匹配,以得到图像匹配的结果。
本发明的有益效果是:区别于现有技术的情况,本发明对模板图像及实时图像都做降采样处理,先通过降采样图像的匹配获得初步匹配信息,再根据初步匹配信息判断实时图像中是否出现预置目标,若是,则通过对实时图像和模板图像的匹配获得精确匹配结果,从而提高了图像匹配的计算速度。本发明能够快速获得实时图像和模板图像的匹配结果,提高工业识别速度,进而提高了加工效率。
【附图说明】
图1是本发明图像匹配方法一实施例的流程示意图;
图2是本发明图像匹配方法一实施例示意图;
图3是本发明视觉系统一实施例的结构示意图;
图4是本发明图像匹配方法另一实施例的流程示意图;
图5是本发明视觉系统另一实施例的结构示意图。
【具体实施方式】
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,均属于本发明保护的范围。
请参阅图1和图2,图1是本发明图像匹配方法一实施例的流程示意图,图2是本发明图像匹配方法一实施例示意图。本实施例的图像匹配方法包括如下步骤:
S101:采集预定场景的实时图像。
在一个具体的实施场景中,将待加工的目标设为预置目标,工业机器人对处于预定场景的预置目标进行加工前,需要先通过图像采集器获取预定场景的实时图像。在本实施例中,可以将预置目标的参考图像作为模板图像,将在预设场景采集的图像作为实时图像。在其他实施例中,也可以将目标位于初始位置时通过图像采集器获取的预置目标的图像设定为模板图像,后续在加工过程中,以通过图像采集器在预定场景采集的图像为实时图像,此处不予限定。
在一个具体的实施例中,将预置目标的参考图像预设为模板图像21,之后工业机器人通过图像采集器采集预定场景的实时图像20,将位于模板图像21中预置目标的某条外边设为可供加工的外边A。
在本实施例中,图像采集器为工业相机、智能交通相机、智能相机、3D智能传感器或其他可以采集预定场景实时图像的设备。其中,实时图像20的分辨率和预设的模板图像21的分辨率可以相同也可以不同。
S102:对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像。
降采样又名下采样或减少图像采样点数。对于一幅分辨率为N*M的图像来说,如果降采样系数为k,则即是在原图中每行每列每隔k个点取一个点组成一幅图像。
因获取的实时图像20和模板图像21的分辨率太大,采集的预定场景的实时图像20中目标不一定在预设区域内,而且当预置目标位于加工区域内时,工业机器人才能对预置目标进行加工,如果直接使用原始的实时图像20和模板图像21进行匹配,会增加计算时间,且可能浪费计算资源,不能快速得出匹配结果。本实施例中,可对实时图像20和模板图像21作降采样处理,并对降采样图像进行匹配,得到初步匹配信息。
在一个具体的实施场景中,图像采集器为工业相机,通过工业相机获取实时图像20后,对实时图像20和模板图像21作不同层次的降采样处理,获取实时图像20和模板图像21的多个分辨率不同的降采样图像。其中,实时图像20的降采样图像数量与模板图像21的降采样图像数量相同,在本实施例中,实时图像20的降采样图像数量与模板图像21的降采样图像数量分别为4。
在上述实施场景中,实时图像20和模板图像21的降采样图像数量还可根据用户自身需要进行设定,此处不予限定。
S103:对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息。
在一个具体的实施场景中,获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度级的目标位置信息。例如,将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与模板图像21的降采样图像序列中分辨率最低的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果能够获取设定精度级的目标位置信息,则将该设定精度级的目标位置信息作为初步匹配信息。
在另一个具体的实施场景中,获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度级的目标位置信息。例如,将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将实时图像20的降采样图像203与模板图像21的降采样图像213建立映射关系。将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与该降采样图像201映射的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果确定匹配解析得到的信息不是设定精度级的目标位置信息,则以升序的方式选取更高层级分辨率的实时图像20的降采样图像及其映射的模板图像21的降采样图像进行匹配解析,直至得到设定精度级的目标位置信息,并将该设定精度级的目标位置信息作为初步匹配信息。如将比降采样图像201高一层级的实时图像20的降采样图像202和其映射的模板图像21的降采样图像212进行匹配解析,判断解析得到的信息是否为设定精度级的目标位置信息,若不是,则选取比降采样图像202分辨率高一层级的降采样图像203与其映射的降采样图像213进行匹配解析,直至能够解析得到设定精度级的目标位置信息,并将该设定精度级的目标位置信息作为初步匹配信息。
S104:根据所述初步匹配信息获取实时图像中与所述模板图像匹配的目标的位置。
在上述实施场景中，获取的初步匹配信息包括实时图像中与模板图像中的预置目标匹配度最高的目标的位置信息，该位置信息包括目标的角度、坐标等位置信息，通过该初步匹配信息可以获取目标的粗略位置信息。
S105:判断所述目标的位置是否在预设区域内,如果是,则将所述位置对应的实时图像与模板图像进行匹配,以得到图像匹配的结果。
在一个具体的场景中,将预定场景中的加工区域设为预设区域,并将实时图像20中与模板图像21中的预置目标匹配度最高的目标确定为预置目标,工业机器人通过初步匹配信息判断该预置目标是否位于预设区域内,如果在预设区域内,则根据初步匹配信息将实时图像20与模板图像21进行匹配,以得到图像匹配的结果,即得到预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息,通过该结果可以确定实时图像20中与外边A相匹配的外边即为可供加工的外边,工业机器人据此即可对该外边进行加工处理。
在一个具体的场景中,将预定场景中的加工区域设为预设区域,并将实时图像20中与模板图像21中的预置目标匹配度最高的目标确定为预置目标,工业机器人通过初步匹配信息判断该预置目标是否位于预设区域内,如果不在预设区域内,则重复前述步骤后再次进行判断,直至该预置目标落入预设区域内,然后根据初步匹配信息将实时图像20与模板图像21进行匹配,以得到图像匹配的结果,即得到预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息,通过该结果可以确定实时图像20中与外边A相匹配的外边即为可供加工的外边,工业机器人据此即可对该外边进行加工处理。
上述任一实施例中,将实时图像和模板图像的降采样图像进行匹配以及将实时图像和模板图像进行匹配的方法为边缘匹配或者灰度匹配。
本发明的有益效果是:区别于现有技术的情况,本发明对模板图像及实时图像都做降采样处理,先通过降采样图像的匹配获得初步匹配信息,再根据初步匹配信息判断目标是否出现在预设区域,若是,则通过对实时图像和模板图像的匹配获得精确匹配结果,从而提高了图像匹配的计算速度。本发明能够快速获得实时图像和模板图像的匹配结果,提高工业识别速度,进而提高了加工效率。
基于同一发明构思,本发明还提供了一种视觉系统,请参阅图3,图3是本发明视觉系统一实施例的结构示意图。本实施例的视觉系统包括以下器件:
图像采集器31,处理器32和存储器33。其中,图像采集器31用于采集实时图像,并将采集到的实时图像发送至处理器32。存储器33用于存储模板图像、实时图像、程序数据以及处理器32处理的数据。处理器32分别与图像采集器31和存储器33耦合连接。且处理器32在执行程序时执行以下步骤:
1:控制所述图像采集器采集预定场景的实时图像。
在一个具体的实施场景中,处理器32将待加工的目标设为预置目标,处理器32对处于预定场景的预置目标进行加工前,需要先通过图像采集器31获取预定场景的实时图像。在本实施例中,可以将预置目标的参考图像作为模板图像,将在预设场景采集的图像作为实时图像。在其他实施例中,也可以将目标位于初始位置时通过图像采集器获取的预置目标的图像设定为模板图像,后续在加工过程中,以通过图像采集器31在预定场景采集的图像为实时图像,此处不予限定。
在一个具体的实施例中,处理器32将预置目标的参考图像预设为模板图像21,之后处理器32通过图像采集器31采集预定场景的实时图像20,将位于模板图像21中预置目标的某条外边设为可供加工的外边A。
在本实施例中,图像采集器31为工业相机、智能交通相机、智能相机、3D智能传感器或其他可以采集预定场景实时图像的设备。其中,实时图像20的分辨率和预设的模板图像21的分辨率可以相同也可以不同。
2:对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像。
降采样又名下采样，即减少图像采样点数。对于一幅分辨率为N*M的图像来说，如果降采样系数为k，则即是在原图中每行每列每隔k个点取一个点组成一幅图像。
因获取的实时图像20和模板图像21的分辨率太大,采集的预定场景的实时图像20中目标不一定在预设区域内,而且当预置目标位于加工区域内时,处理器32才能控制工业机器人对预置目标进行加工,如果处理器32直接使用原始的实时图像20和模板图像21进行匹配,会增加计算时间,且可能浪费计算资源,不能快速得出匹配结果。本实施例中,处理器32可对实时图像20和模板图像21作降采样处理,并对降采样图像进行匹配,得到初步匹配信息。
在一个具体的实施场景中,图像采集器31为工业相机,通过工业相机获取实时图像20后,对实时图像20和模板图像21作不同层次的降采样处理,获取实时图像20和模板图像21的多个分辨率不同的降采样图像。其中,实时图像20的降采样图像数量与模板图像21的降采样图像数量相同,在本实施例中,实时图像20的降采样图像数量与模板图像21的降采样图像数量分别为4。
在上述实施场景中,实时图像20和模板图像21的降采样图像数量还可根据用户自身需要进行设定,此处不予限定。
3:对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息。
在一个具体的实施场景中,处理器32获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度级的目标位置信息。例如,处理器32将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与模板图像21的降采样图像序列中分辨率最低的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果能够获取设定精度级的目标位置信息,则处理器32将该设定精度级的目标位置信息作为初步匹配信息。
在另一个具体的实施场景中,处理器32获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度级的目标位置信息。例如,处理器32将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将实时图像20的降采样图像203与模板图像21的降采样图像213建立映射关系。将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与该降采样图像201映射的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果确定匹配解析得到的信息不是设定精度级的目标位置信息,则处理器32以升序的方式选取更高层级分辨率的实时图像20的降采样图像及其映射的模板图像21的降采样图像进行匹配解析,直至得到设定精度级的目标位置信息,并将该设定精度级的目标位置信息作为初步匹配信息。如处理器32将比降采样图像201高一层级的实时图像20的降采样图像202和其映射的模板图像21的降采样图像212进行匹配解析,判断解析得到的信息是否为设定精度级的目标位置信息,若不是,则选取比降采样图像202分辨率高一层级的降采样图像203与其映射的降采样图像213进行匹配解析,直至能够解析得到设定精度级的目标位置信息,并将该设定精度级的目标位置信息作为初步匹配信息。
4:根据所述初步匹配信息获取实时图像中与所述模板图像匹配的目标的位置。
在上述实施场景中，处理器32获取的初步匹配信息包括实时图像中与模板图像中的预置目标匹配度最高的目标的位置信息，该位置信息包括目标的角度、坐标等位置信息，通过该初步匹配信息可以获取目标的粗略位置信息。
5:判断所述目标的位置是否在预设区域内,如果是,则将所述位置对应的实时图像与模板图像进行匹配,以得到图像匹配的结果。
在一个具体的场景中,处理器32将预定场景中的加工区域设为预设区域,并将实时图像20中与模板图像21中的预置目标匹配度最高的目标确定为预置目标,处理器32通过初步匹配信息判断该预置目标是否位于预设区域内,如果在预设区域内,则将实时图像20与模板图像21进行匹配,以得到图像匹配的结果,即得到预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息,通过该结果可以确定实时图像20中与外边A相匹配的外边即为可供加工的外边,处理器32据此即可控制工业机器人对该外边进行加工处理。
在一个具体的场景中,处理器32将预定场景中的加工区域设为预设区域,并将实时图像20中与模板图像21中的预置目标匹配度最高的目标确定为预置目标,处理器32通过初步匹配信息判断该预置目标是否位于预设区域内,如果不在预设区域内,则重复前述步骤后再次进行判断,直至该预置目标落入预设区域内,然后将实时图像20与模板图像21进行匹配,以得到图像匹配的结果,即得到预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息,通过该结果可以确定实时图像20中与外边A相匹配的外边即为可供加工的外边,处理器32据此即可控制工业机器人对该外边进行加工处理。
上述任一实施例中,将实时图像和模板图像的降采样图像进行匹配以及将实时图像和模板图像进行匹配的方法为边缘匹配或者灰度匹配。本发明的有益效果是:区别于现有技术的情况,本发明对模板图像及实时图像都做降采样处理,先通过降采样图像的匹配获得初步匹配信息,再根据初步匹配信息判断目标是否出现在预设区域,若是,则通过对实时图像和模板图像的匹配获得精确匹配结果,从而提高了图像匹配的计算速度。本发明能够快速获得实时图像和模板图像的匹配结果,提高工业识别速度,进而提高了加工效率。
参阅图4,图4是本发明图像匹配方法另一实施例的流程示意图。结合图2,本实施例的图像匹配方法包括如下步骤:
S401:采集预定场景的实时图像。
在一个具体的实施场景中,工业机器人对处于预定场景的预置目标进行加工前,需要先通过图像采集器获取预定场景的实时图像。在本实施例中,可以将预置目标的参考图像作为模板图像,将在预设场景采集的图像作为实时图像。在其他实施例中,也可以将目标位于初始位置时通过图像采集器获取的预置目标的图像设定为模板图像,后续在加工过程中,以通过图像采集器在预定场景采集的图像为实时图像,此处不予限定。
在一个具体的实施例中,将预置目标的参考图像预设为模板图像21,之后工业机器人通过图像采集器采集预定场景的实时图像20,将模板图像21中位于预置目标的某条外边设为可供加工的外边A。
在本实施例中,图像采集器为工业相机、智能交通相机、智能相机、3D智能传感器或其他可以采集预定场景实时图像的设备。其中,实时图像20的分辨率和预设的模板图像21的分辨率可以相同也可以不同。
S402:对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像。
降采样又名下采样或减少图像采样点数。对于一幅分辨率为N*M的图像来说,如果降采样系数为k,则即是在原图中每行每列每隔k个点取一个点组成一幅图像。
因获取的实时图像20和模板图像21的分辨率太大,且采集的预定场景的实时图像20中目标不一定在预定场景内,如果直接使用原始的实时图像20和模板图像21进行匹配,会增加计算时间,浪费计算资源,不能快速得出匹配结果。本实施例中,需要对实时图像20和模板图像21作降采样处理,并对降采样图像进行匹配获得初步匹配信息。
在一个具体的实施场景中,图像采集器为工业相机,通过工业相机获取实时图像20后,对实时图像20和模板图像21作不同层次的降采样处理,获取实时图像20和模板图像21的多个分辨率不同的降采样图像。其中,实时图像20的降采样图像数量与模板图像21的降采样图像数量相同。
在上述实施场景中,实时图像20和模板图像21的降采样图像数量还可根据用户自身需要进行设定,此处不予限定。
S403:对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息。
在一个具体的实施场景中,获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度级的目标位置信息。例如,将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与模板图像21的降采样图像序列中分辨率最低的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果能够获取设定精度级的目标位置信息,则将该设定精度级的目标位置信息作为初步匹配信息。
在另一个具体的实施场景中,获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度级的目标位置信息。例如,将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将实时图像20的降采样图像203与模板图像21的降采样图像213建立映射关系。将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与该降采样图像201映射的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果确定匹配解析得到的信息不是设定精度级的目标位置信息,则以升序的方式选取更高层级分辨率的实时图像20的降采样图像及其映射的模板图像21的降采样图像进行匹配解析,直至得到设定精度级的目标位置信息,并将该设定精度级的目标位置信息作为初步匹配信息。如将比降采样图像201分辨率高一层级的实时图像20的降采样图像202和其映射的模板图像21的降采样图像212进行匹配解析,判断解析得到的信息是否为设定精度级的目标位置信息,若不是,则选取比降采样图像202分辨率高一层级的降采样图像203与其映射的降采样图像213进行匹配解析,直至能够解析得到设定精度级的目标位置信息,并将该设定精度级的目标位置信息作为初步匹配信息。
在再一个具体的实施场景中,获取实时图像20和模板图像21的降采样图像后,将分辨率接近的实时图像20的降采样图像与模板图像21的降采样图像建立映射关系,并将实时图像20的降采样图像按照分辨率由低到高的顺序进行层级排列。对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,判断是否得到设定精度的目标位置信息。例如,将实时图像20的降采样图像201与模板图像21的降采样图像211建立映射关系,将实时图像20的降采样图像202与模板图像21的降采样图像212建立映射关系,将实时图像20的降采样图像203与模板图像21的降采样图像213建立映射关系。将层级排列的实时图像20的降采样图像序列中分辨率最低的实时图像20的降采样图像201与该降采样图像201映射的模板图像21的降采样图像211进行匹配解析,判断是否获取设定精度级的目标位置信息。如果确定匹配解析得到的信息不是设定精度级的目标位置信息,则以升序的方式选取比当前匹配的实时图像20的降采样图像分辨率更高层级的实时图像20的降采样图像及其映射的模板图像21的降采样图像进行匹配解析,根据匹配解析得到的信息判断是否得到设定精度级的目标位置信息,若以升序的方式遍历完实时图像20所有的降采样图像,仍无法得到设定精度级的目标位置信息,则将最后一次匹配解析得到的目标位置信息作为初步匹配信息。
在上述实施场景中,将实时图像20的降采样图像与对应的模板图像21的降采样图像进行匹配时,获取的初步匹配信息为与模板图像21中的预置目标匹配度最高的目标的位置信息,该信息包括该目标的角度、坐标等位置信息。
S404:根据初步匹配信息判断所述实时图像中是否出现预置目标,如果是,将所述实时图像与模板图像进行匹配,以得到图像匹配的结果。
在一个具体的场景中,从初步匹配信息中获取目标位置信息,并判断该目标位置信息是否为设定精度级的目标位置信息,如果是,则确定实时图像20中出现预置目标,将实时图像20中与模板图像21中的预置目标匹配度最高的目标确定为预置目标,并根据初步匹配信息将实时图像20和模板图像21进行匹配,得到匹配结果。其中,匹配结果为预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息,并且根据该结果工业机器人还可确定预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息。
在另一个具体的场景中,从初步匹配信息中获取目标位置信息,并判断该目标位置信息是否为设定精度级的目标位置信息,若果不是,则确定实时图像中没有出现预置目标,则重复前述步骤后再次进行判断,直至确定实时图像中出现预置目标,将实时图像20中与模板图像21中的预置目标匹配度最高的目标确定为预置目标,然后根据初步匹配信息将实时图像20与模板图像21进行匹配,以得到图像匹配的结果,即得到预置目标在实时图像中的精确位置信息以及实时图像20中的预置目标的每个部分与模板图像21中的预置目标的对应匹配关系,其中,精确位置信息包括角度、坐标等位置信息,通过该结果可以确定实时图像20中与外边A相匹配的外边即为可供加工的外边,工业机器人据此即可对该外边进行加工处理。
上述任一实施例中,将实时图像和模板图像的降采样图像进行匹配以及将实时图像和模板图像进行匹配的方法为边缘匹配或者灰度匹配。
本发明的有益效果是:区别于现有技术的情况,本发明对模板图像及实时图像都做降采样处理,先通过降采样图像的匹配获得初步匹配信息,再根据初步匹配信息判断目标是否出现在预设区域,若是,则通过对实时图像和模板图像的匹配获得精确匹配结果,从而提高了图像匹配的计算速度。本发明能够快速获得实时图像和模板图像的匹配结果,提高工业识别速度,进而提高了加工效率。
基于同一发明构思,本发明还提供了另一种视觉系统,请参阅图5,图5是本发明视觉系统一实施例的结构示意图。本实施例的视觉系统包括以下器件:
图像采集器51,处理器52和存储器53。其中,图像采集器51用于采集预定场景的实时图像,并将采集到的图像发送至处理器52。存储器53用于存储模板图像、实时图像、程序数据以及处理器52处理的数据。处理器52分别与图像采集器51和存储器53耦合连接。且处理器52在执行程序时执行以下步骤:控制所述图像采集器采集预定场景的实时图像;对实时图像和预设的模板图像作降采样处理,获取实时图像和模板图像的降采样图像;对实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;根据所述初步匹配信息判断所述实时图像中是否出现预置目标,如果是,将所述实时图像与模板图像进行匹配,以得到图像匹配的结果。关于该视觉系统在工作中执行的图像匹配方法,前述已详细描述,此处不再赘述。
本发明的有益效果是:区别于现有技术的情况,本发明对模板图像及实时图像都做降采样处理,先通过降采样图像的匹配获得初步匹配信息,再根据初步匹配信息判断目标是否出现在预设区域,若是,则通过对实时图像和模板图像的匹配获得精确匹配结果,从而提高了图像匹配的计算速度。本发明能够快速获得实时图像和模板图像的匹配结果,提高工业识别速度,进而提高了加工效率。
以上所述仅为本发明的实施方式,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内 。

Claims (18)

  1. 一种图像匹配方法,其特征在于,包括:
    采集预定场景的实时图像;
    对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;
    对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;
    根据所述初步匹配信息判断所述实时图像中是否出现预置目标,如果是,将所述实时图像与模板图像进行匹配,以得到图像匹配的结果。
  2. 一种图像匹配方法,其特征在于,包括:
    采集预定场景的实时图像;
    对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;
    对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;
    根据所述初步匹配信息获取实时图像中与所述模板图像匹配的目标的位置;
    判断所述目标的位置是否在预设区域内,如果是,则将所述位置对应的实时图像与模板图像进行匹配,以得到图像匹配的结果。
  3. 根据权利要求1或2所述的图像匹配方法,其特征在于,所述对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像的步骤具体包括:
    对所述实时图像和模板图像作不同层次的降采样处理,获取所述实时图像和模板图像的多个分辨率不同的降采样图像。
  4. 根据权利要求3所述的图像匹配方法,其特征在于,所述模板图像的降采样图像数量与所述实时图像的降采样图像的数量相同。
  5. 根据权利要求2所述的图像匹配方法,其特征在于,所述对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息的步骤具体包括:
    将分辨率最低的实时图像的降采样图像和模板图像的降采样图像进行匹配,得到初步匹配信息。
  6. 根据权利要求5所述的图像匹配方法,其特征在于,将分辨率最低的实时图像的降采样图像和模板图像的降采样图像进行匹配,得到初步匹配信息的步骤具体包括:
    将分辨率接近的实时降采样图像与模板降采样图像建立映射关系;
    对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,得到设定精度级的目标位置信息。
  7. 根据权利要求6所述的图像匹配方法,其特征在于,所述将分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,得到设定精度级的目标位置信息的步骤具体包括:
    以最低分辨率为初始条件,将实时降采样图像与对应的模板降采样图像进行匹配解析,
    若解析无法得到设定精度级的目标位置信息,则选取更高层级分辨率的实时图像及其映射的模板图像进行匹配解析,直至得到设定精度级的目标位置信息。
  8. 根据权利要求1或2所述的图像匹配方法,其特征在于,所述将实时图像和模板图像的降采样图像进行匹配以及将实时图像和模板图像进行匹配的方法为边缘匹配。
  9. 根据权利要求1或2所述的图像匹配方法,其特征在于,所述将实时图像和模板图像的降采样图像进行匹配以及将实时图像和模板图像进行匹配的方法为灰度匹配。
  10. 一种视觉系统,其特征在于,包括:图像采集器,存储器,处理器,所述处理器分别与所述图像采集器和所述存储器耦合连接;
    所述图像采集器用于采集实时图像,并将采集到的实时图像发送至所述处理器;
    所述存储器用于存储所述模板图像、所述实时图像、程序数据以及所述处理器处理的数据;
    所述处理器在执行所述程序时执行以下步骤:
    控制所述图像采集器采集预定场景的实时图像;
    对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;
    对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;
    根据所述初步匹配信息判断所述实时图像中是否出现预置目标,如果是,将所述实时图像与模板图像进行匹配,以得到图像匹配的结果。
  11. 一种视觉系统,其特征在于,包括:图像采集器,存储器,处理器,所述处理器分别与所述图像采集器和所述存储器耦合连接;
    所述图像采集器用于采集实时图像,并将采集到的实时图像发送至所述处理器;
    所述存储器用于存储所述模板图像、所述实时图像、程序数据以及所述处理器处理的数据;
    所述处理器在执行所述程序时执行以下步骤:
    控制所述图像采集器采集预定场景的实时图像;
    对所述实时图像和预设的模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像;
    对所述实时图像和模板图像的降采样图像进行匹配,得到初步匹配信息;
    根据所述初步匹配信息获取实时图像中与所述模板图像匹配的目标的位置;
    判断所述目标的位置是否在预设区域内,如果是,则将所述位置对应的实时图像与模板图像进行匹配,以得到图像匹配的结果。
  12. 根据权利要求10或11所述的视觉系统,其特征在于,所述处理器对所述实时图像和模板图像作降采样处理,获取所述实时图像和模板图像的降采样图像的步骤具体包括:
    对所述实时图像和模板图像作不同层次的降采样处理,获取所述实时图像和模板图像的多个分辨率不同的降采样图像。
  13. 根据权利要求12所述的视觉系统,其特征在于,所述处理器获取的模板图像的降采样图像数量与所述实时图像的降采样图像的数量相同。
  14. 根据权利要求11所述的视觉系统,其特征在于,所述处理器对所述实时图像和模板图像的降采样图像进行匹配,获取初步匹配信息的步骤具体包括包括:
    将分辨率最低的实时图像的降采样图像和模板图像的降采样图像进行匹配,得到初步匹配信息。
  15. 根据权利要求14所述的视觉系统,其特征在于,所述将分辨率最低的实时图像的降采样图像和模板图像的降采样图像进行匹配,得到初步匹配信息的步骤具体包括:
    将分辨率接近的实时降采样图像与模板降采样图像建立映射关系;
    对分辨率接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,得到设定精度级的目标位置信息。
  16. 根据权利要求15所述的视觉系统,其特征在于,所述将分辨率最接近的实时降采样图像与其映射的模板降采样图像进行匹配解析,得到设定精度级的目标位置信息的步骤具体包括:
    以最低分辨率为初始条件,将实时降采样图像与对应的模板降采样图像进行匹配解析,
    若解析无法得到设定精度级的目标位置信息,则选取更高层级分辨率的实时图像及其映射的模板图像进行匹配解析,直至得到设定精度级的目标位置信息。
  17. 根据权利要求10或11所述的视觉系统,其特征在于,所述处理器采用边缘匹配的方法将实时图像和模板图像的降采样图像以及实时图像和模板图像进行匹配。
  18. 根据权利要求10或11所述的视觉系统,其特征在于,所述处理器采用灰度匹配的方法将实时图像和模板图像的降采样图像以及实时图像和模板图像进行匹配。
PCT/CN2017/118121 2017-12-22 2017-12-22 图像匹配方法和视觉系统 WO2019119453A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780035380.4A CN109313708B (zh) 2017-12-22 2017-12-22 图像匹配方法和视觉系统
PCT/CN2017/118121 WO2019119453A1 (zh) 2017-12-22 2017-12-22 图像匹配方法和视觉系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/118121 WO2019119453A1 (zh) 2017-12-22 2017-12-22 图像匹配方法和视觉系统

Publications (1)

Publication Number Publication Date
WO2019119453A1 true WO2019119453A1 (zh) 2019-06-27

Family

ID=65225724

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118121 WO2019119453A1 (zh) 2017-12-22 2017-12-22 图像匹配方法和视觉系统

Country Status (2)

Country Link
CN (1) CN109313708B (zh)
WO (1) WO2019119453A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113466233A (zh) * 2020-03-31 2021-10-01 北京配天技术有限公司 视觉检测方法、视觉检测装置及计算机存储介质
CN111639708B (zh) * 2020-05-29 2023-05-09 深圳市燕麦科技股份有限公司 图像处理方法、装置、存储介质及设备
CN111787381A (zh) * 2020-06-24 2020-10-16 北京声迅电子股份有限公司 一种安检机采集图像的上传方法及上传装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477632A (zh) * 2008-12-19 2009-07-08 深圳市大族激光科技股份有限公司 一种灰度图像匹配方法及系统
CN101556695A (zh) * 2009-05-15 2009-10-14 广东工业大学 一种图像匹配方法
CN104966283A (zh) * 2015-05-22 2015-10-07 北京邮电大学 图像分层配准方法
CN105989608A (zh) * 2016-04-25 2016-10-05 北京光年无限科技有限公司 一种面向智能机器人的视觉捕捉方法及装置
CN106127261A (zh) * 2016-07-01 2016-11-16 深圳元启智能技术有限公司 一种快速多分辨率灰度图像模板匹配方法
US9721348B2 (en) * 2015-10-23 2017-08-01 Electronics And Telecommunications Research Institute Apparatus and method for raw-cost calculation using adaptive window mask

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489178A (zh) * 2013-08-12 2014-01-01 中国科学院电子学研究所 一种图像配准方法和系统
CN103617625B (zh) * 2013-12-13 2017-05-24 中国气象局北京城市气象研究所 一种图像匹配方法及图像匹配装置
CN105513038B (zh) * 2014-10-20 2019-04-09 网易(杭州)网络有限公司 图像匹配方法及手机应用测试平台
CN106096659A (zh) * 2016-06-16 2016-11-09 网易(杭州)网络有限公司 图像匹配方法和装置
CN106845555A (zh) * 2017-02-09 2017-06-13 聚龙智瞳科技有限公司 基于Bayer格式的图像匹配方法及图像匹配装置


Also Published As

Publication number Publication date
CN109313708A (zh) 2019-02-05
CN109313708B (zh) 2023-03-21

Similar Documents

Publication Publication Date Title
WO2019119453A1 (zh) Image matching method and vision system
WO2017128865A1 (zh) Multi-lens based intelligent manipulator and positioning assembly method
WO2014194620A1 (zh) Image feature extraction, training, and detection methods, modules, devices, and systems
WO2017067264A1 (zh) Method and device for reducing the misrecognition rate, and intelligent mobile terminal
US6307951B1 Moving body detection method and apparatus and moving body counting apparatus
CN108942929B (zh) Method and device for robotic arm positioning and grasping based on binocular stereo vision
WO2017067269A1 (zh) Fingerprint recognition method and device, and mobile terminal
WO2015161697A1 (zh) Moving object tracking method and system applied to human-computer interaction
CN110009678B (zh) Detection method and system for orthodontic archwire bending
WO2015196878A1 (zh) Virtual touch method and system for television
CN106205199A (zh) Fault-tolerant license plate processing system
CN1296862C (zh) Image processing apparatus
WO2021221334A1 (ko) Apparatus for generating a color map based on GPS information and lidar signals, and control method thereof
WO2018176370A1 (zh) Visual inspection system and method
CN115096902B (zh) Motion control method and middle-frame defect detection system
CN112101107B (zh) Intelligent traffic light recognition method for in-the-loop simulation of intelligent connected model vehicles
WO2021261905A1 (ko) Apparatus and method for work motion recognition and production measurement based on video analysis
WO2018084381A1 (ko) Image correction method using deep learning analysis based on a GPU device
WO2022114340A1 (ko) Smart vision alignment system and smart vision alignment method using the same
CN113510711A (zh) AI-based method for monitoring and regulating industrial robot action execution, and cloud monitoring and regulation platform
WO2023121012A1 (ko) Behavior analysis method and system based on feature point tracking
WO2023033199A1 (en) Methods for automatically identifying a match between a product image and a reference drawing based on artificial intelligence
WO2019232782A1 (zh) Object feature recognition method, visual recognition device, and robot
CN109202802A (zh) Visual guidance system and method for snap-fit assembly
CN117788569B (zh) Image-feature-based method and system for locating oral abnormality points

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17935737

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17935737

Country of ref document: EP

Kind code of ref document: A1