WO2024075906A1 - Image processing device, image processing system and image processing method - Google Patents

Image processing device, image processing system and image processing method

Info

Publication number
WO2024075906A1
WO2024075906A1 (PCT/KR2022/020943)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
area
image processing
design data
Prior art date
Application number
PCT/KR2022/020943
Other languages
English (en)
Korean (ko)
Inventor
양창모
장성태
Original Assignee
Hyundai Motor Company
Kia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyundai Motor Company, Kia Corporation
Publication of WO2024075906A1

Classifications

    • G06T 7/11 Image analysis; Segmentation; Edge detection; Region-based segmentation
    • G06T 7/194 Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/30 Image analysis; Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/73 Image analysis; Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/40 Image or video recognition or understanding; Extraction of image or video features
    • G06T 2207/30108 Indexing scheme for image analysis or image enhancement; Industrial image inspection
    • G06T 2207/30164 Indexing scheme for image analysis or image enhancement; Workpiece; Machine component

Definitions

  • the disclosure relates to image processing devices, image processing systems, and image processing methods.
  • an inspection method using images is used to check whether vehicle parts are assembled and to inspect the state of assembly. For example, whether or not a vehicle part is assembled can be determined using an image taken before part assembly and an image taken after part assembly.
  • to compare such images, image registration (or image matching) needs to be performed between the acquired images. If image registration between acquired images is not performed accurately, a portion of an acquired image recognized and extracted as a part or part-assembly area may deteriorate in quality.
  • for such image registration, feature point detection can be used.
  • the problem to be solved is to provide an image processing device, an image processing system, and an image processing method that can improve the accuracy of image registration by improving the performance of feature point detection used in image registration.
  • An image processing device includes an exclusion area determination module that distinguishes a vehicle body area and a parts area in a design data-based image and determines a partial area including the parts area as an exclusion area; a reference image generation module that generates a reference image including a masking area that masks the exclusion area in the design data-based image; a feature point search area definition module that defines a feature point search area in an acquired image using the reference image; and a feature point detection module that detects a plurality of feature points in the feature point search area.
  • the device may further include a design data-based image generation module that generates the design data-based image by matching shooting coordinates corresponding to a captured image provided from a camera that photographs a vehicle with design data coordinates corresponding to the design data.
  • the exclusion area determination module may distinguish the vehicle body area and the component area according to values assigned to the design data for each vehicle body or component identification number.
  • the exclusion area may further include a background area distinguished from the vehicle body area and the parts area in the design data-based image.
  • the exclusion area may further include a user-specified area designated by the user as an area in the design data-based image where there is a large change depending on the environment.
  • the device may further include an image registration module that performs image registration on the acquired image based on the plurality of feature points.
  • the device may further include a transformation matrix generation module that generates a transformation matrix from a positional deviation from a previously acquired image while performing the image registration; and an error analysis module that analyzes whether component values of the transformation matrix correspond to a predefined error range.
  • the plurality of feature points may include a first feature point and a second feature point different from the first feature point, and the image registration module may perform image registration on the acquired image using the second feature point if component values of the transformation matrix generated according to image registration using the first feature point are outside the error range.
  • the feature point search area definition module may change the feature point search area by increasing or decreasing at least some of the boundary lines of the feature point search area in pixel units when component values of the transformation matrix are outside a predefined error range.
  • the feature point detection module may detect a new feature point in the changed feature point search area, and the image registration module may perform image registration on the acquired image based on the new feature point.
  • An image processing system includes a mobile robot that photographs a vehicle at imaging coordinates determined by teaching provided in advance; a first image processing device that receives a first captured image from the mobile robot and generates a reference image for feature point search using a design data-based image matching the first captured image; and a second image processing device that receives a second captured image from the mobile robot and performs image registration on the second captured image based on a feature point search area defined in the reference image.
  • the first image processing device may include an exclusion area determination module that distinguishes a vehicle body area and a parts area in the design data-based image and determines a partial area including the parts area as an exclusion area; and a reference image generation module that generates the reference image including a masking area that masks the exclusion area in the design data-based image.
  • the exclusion area determination module may distinguish the vehicle body area and the component area according to values assigned to the design data for each vehicle body or component identification number.
  • the second image processing device may include a feature point search area definition module that defines the feature point search area in the second captured image using the reference image; and a feature point detection module that detects a plurality of feature points in the feature point search area.
  • the second image processing device may further include an image registration module that performs image registration on the second captured image based on the plurality of feature points; a transformation matrix generation module that generates a transformation matrix from a positional deviation from a previously acquired second captured image while performing the image registration; and an error analysis module that analyzes whether component values of the transformation matrix correspond to a predefined error range.
  • the plurality of feature points may include a first feature point and a second feature point different from the first feature point, and the image registration module may perform image registration on the second captured image using the second feature point if component values of the transformation matrix generated according to image registration using the first feature point are outside the error range.
  • the feature point search area definition module may change the feature point search area by increasing or decreasing at least some of the boundary lines of the feature point search area in pixel units when component values of the transformation matrix are outside a predefined error range.
  • the feature point detection module may detect a new feature point in the changed feature point search area, and the image registration module may perform image registration on the second captured image based on the new feature point.
  • An image processing method includes distinguishing a vehicle body area and a parts area in an image based on design data; determining a partial area including the part area as an exclusion area; generating a reference image including a masking area by masking the exclusion area in the design data-based image; defining a feature point search area in an acquired image using the reference image; and detecting a plurality of feature points in the feature point search area.
  • the method may further include performing image registration on the acquired image based on the plurality of feature points; generating a transformation matrix from a positional deviation from a previously acquired image while performing the image registration; and analyzing whether component values of the transformation matrix correspond to a predefined error range.
  • the plurality of feature points may include a first feature point and a second feature point different from the first feature point, and the method may further include performing image registration on the acquired image using the second feature point if component values of the transformation matrix generated according to image registration using the first feature point are outside the error range.
  • the method may further include, when component values of the transformation matrix are outside a predefined error range, changing the feature point search area by increasing or decreasing at least a portion of the boundary lines of the feature point search area on a pixel basis; detecting a new feature point in the changed feature point search area; and performing image registration on the acquired image based on the new feature point.
  • according to embodiments, feature points can be detected in an area of an image with relatively little change depending on the shooting environment (e.g., the car body area), while an area with relatively large change (e.g., a parts assembly area or an area sensitive to lighting changes) is excluded.
  • FIG. 1 is a diagram for explaining an image processing system according to an embodiment.
  • FIG. 2 is a diagram illustrating a design data-based image that can be used in an image processing system according to embodiments.
  • FIGS. 3 to 5 are diagrams for explaining determination of an exclusion area in an image processing system according to embodiments.
  • FIG. 6 is a diagram illustrating a reference image that can be used in an image processing system according to embodiments.
  • FIGS. 7 and 8 are diagrams for explaining a feature point search area defined in an acquired image using a reference image in an image processing system according to embodiments.
  • FIG. 9 is a diagram for explaining an image processing method according to an embodiment.
  • FIG. 10 is a diagram for explaining an image processing system according to an embodiment.
  • FIGS. 11 and 12 are diagrams for explaining the range of image acquisition errors caused by errors in acquisition equipment.
  • FIGS. 13 and 14 are diagrams for explaining feature point detection when the transformation matrix is outside the error range.
  • FIG. 15 is a diagram for explaining an image processing method according to an embodiment.
  • FIG. 16 is a diagram for explaining feature point detection when the transformation matrix is outside the error range.
  • FIG. 17 is a diagram for explaining an image processing method according to an embodiment.
  • FIG. 18 is a diagram for explaining an example of a computing device for implementing an image processing device, an image processing system, and an image processing method according to example embodiments.
  • the term "...unit" used herein refers to a unit that processes at least one function or operation, and may be implemented as hardware, software, or a combination of hardware and software.
  • at least some components or functions of the image processing device, image processing system, and image processing method according to the embodiments described below may be implemented as a program or software, and the program or software may be stored in a computer-readable medium.
  • FIG. 1 is a diagram for explaining an image processing system according to an embodiment.
  • an image processing system 1 may include a first image processing device 10, a second image processing device 20, and a mobile robot 30.
  • the mobile robot 30 may be a robot that can understand its environment, move around, and assist or perform tasks on behalf of people in industrial sites. Unlike an AGV (automated guided vehicle), which relies on a predefined path, the mobile robot 30 is equipped with various types of sensors, including cameras, computer vision sensors, voice recognition sensors, and lidar, and uses computing technologies such as artificial intelligence and machine learning to collect data about its environment and make decisions on its own, such as setting routes.
  • the mobile robot 30 can be used in various production processes, for example, in a robotic automated manufacturing plant for various industrial products such as vehicles. In the following embodiments, the mobile robot 30 may photograph a vehicle at photographing coordinates determined by teaching provided in advance.
  • the first image processing device 10 may generate a reference image (IMG2) using a captured image (IMG1) and an image based on design data.
  • the captured image IMG1 is an image taken in advance for at least a portion of the vehicle
  • the design data-based image may be an image generated based on design data (eg, 3D CAD data).
  • for example, the captured image (IMG1) may be an image of a vehicle door frame captured by the mobile robot 30 in a certain shooting environment, and the design data-based image may be a door frame image rendered from 3D CAD data for the corresponding vehicle type to match that shooting environment (e.g., shooting coordinates, shooting angle, etc.).
  • the reference image (IMG2) may be an image used to define a feature point search area on the acquired image (IMG3).
  • the acquired image (IMG3) is an image of at least a portion of the vehicle taken during the production process, and the feature point search area may be an area determined in order to search a given image for the feature points used in image registration. That is, in embodiments, the feature point search area may include only areas of an image where feature point detection performance is judged to be high, and exclude areas where it is judged to be low.
  • using the reference image (IMG2), an area where feature point detection performance is judged to be high (e.g., an area of the image with relatively little change depending on the shooting environment) and an area where it is judged to be low (e.g., an area of the image with relatively large change depending on the shooting environment) can be distinguished on the acquired image (IMG3) of the door frame.
  • the second image processing device 20 may define a feature point search area on the acquired image (IMG3) using the reference image (IMG2), detect feature points therefrom, and perform image registration.
  • the captured image IMG1 and the acquired image IMG3 are shown as being captured by the mobile robot 30, but the scope of the present invention is not limited thereto.
  • the first image processing device 10 receives a captured image (IMG1) from the mobile robot 30, generates a reference image (IMG2) for feature point search using a design data-based image matching the captured image (IMG1), and may include a design data-based image generation module 100, an exclusion area determination module 110, and a reference image generation module 120.
  • the design data-based image generation module 100 may generate a design data-based image by matching shooting coordinates corresponding to the captured image (IMG1) provided from a camera that photographs the vehicle (e.g., a camera mounted on the mobile robot 30) with design data coordinates corresponding to the design data.
  • for example, if the shooting coordinates of a camera photographing a door frame in a certain work space are coordinates (x1, y1, z1) in a first coordinate system, and the design data coordinates of the image rendered from the door frame design data are in a second coordinate system, a design data-based image can be generated by matching (or transforming) the shooting coordinates and the design data coordinates into the same coordinate system. The design data-based image generated in this way for the door frame may then match, in door frame size, area, angle, etc., an acquired image captured later at coordinates (x1, y1, z1) in the first coordinate system for the door frame of an actual vehicle.
  • the method in which the design data-based image generation module 100 matches the captured image IMG1 with the image rendered from the design data is not limited to the above-described method.
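  • purely as an illustration of this coordinate matching step, the sketch below assumes the design data supplies 3D points and that a known rotation R and translation t relate the design coordinate system to the shooting coordinate system; the function name and the 4x4 homogeneous-transform layout are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def to_shooting_coordinates(design_points, R, t):
    """Map 3D design-data coordinates into the camera's shooting
    coordinate system with a 4x4 homogeneous rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R                    # rotation between the two systems
    T[:3, 3] = t                     # translation between the two systems
    ones = np.ones((design_points.shape[0], 1))
    homogeneous = np.hstack([design_points, ones])   # (N, 4)
    return (T @ homogeneous.T).T[:, :3]              # back to (N, 3)

# Toy door-frame points expressed in the design coordinate system
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
print(to_shooting_coordinates(pts, np.eye(3), np.array([1.0, 2.0, 0.5])))
```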
  • the exclusion area determination module 110 may distinguish the vehicle body area and the parts area in the design data-based image generated by the design data-based image generation module 100 and determine some areas including the parts area as the exclusion area.
  • the exclusion area is an area in which the performance of feature point detection is judged to be low when setting the feature point search area and can be used to indicate an area that needs to be excluded.
  • a masking area is set based on these exclusion areas, so that only areas judged to have high feature point detection performance are included in the feature point search area of the acquired image.
  • the exclusion area may include a part area.
  • the vehicle body area may include the vehicle body, and the parts area may include components (eg, cables) assembled to the vehicle body.
  • if the part area is included in the feature point search area, and feature points detected in the part area are used when image registration is performed on images acquired before and after part assembly, the accuracy of image registration cannot be guaranteed. By including the part area in the exclusion area, detection of feature points that have a high risk of lowering the accuracy of image registration can be avoided.
  • the exclusion area determination module 110 may automatically use design data to distinguish between a vehicle body area and a parts area in the design data-based image generated by the design data-based image generation module 100.
  • design data may be given various values for each vehicle body or part unique number.
  • for example, design data corresponding to the unique number of a component belonging to the car body may be given a first value (e.g., '0') to indicate that it is the car body, and design data corresponding to the unique number of a component other than the car body may be given a second value (e.g., '1') to indicate that it is a part.
  • the exclusion area determination module 110 can automatically distinguish between the car body area and the part area according to the value assigned to the design data for each car body or part unique number.
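  • as a minimal sketch of how such per-unique-number values could drive the distinction automatically, assuming the design data can be rendered into an "ID map" whose pixels carry each surface's unique number (the ID-map idea, names, and numbers below are illustrative assumptions):

```python
import numpy as np

# Values assigned per unique number: '0' = car body, '1' = part (e.g., a cable)
part_flags = {101: 0, 102: 0, 205: 1, 206: 1}

def exclusion_mask_from_id_map(id_map, part_flags):
    """Return a boolean mask that is True wherever a pixel belongs to a
    part area (value '1'), i.e., a candidate exclusion area."""
    mask = np.zeros(id_map.shape, dtype=bool)
    for unique_no, flag in part_flags.items():
        if flag == 1:
            mask |= (id_map == unique_no)   # mark pixels of this part
    return mask

# Toy 4x4 ID map rendered from design data
id_map = np.array([[101, 101, 205, 205],
                   [101, 102, 205, 206],
                   [102, 102, 206, 206],
                   [101, 102, 102, 101]])
print(exclusion_mask_from_id_map(id_map, part_flags))
```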
  • the exclusion area may further include a background area.
  • the background area may include, for example, the area of the acquired image (IMG3) captured by the mobile robot 30 excluding the main subject (e.g., a door frame), such as areas where other parts distinct from the main subject, the factory's internal facilities, workers, etc. were photographed. If the feature point search area includes such a background area and image registration of acquired images is performed using feature points detected in the background area, the accuracy of image registration cannot be guaranteed. By including the background area in the exclusion area, detection of feature points that have a high risk of lowering the accuracy of image registration can be avoided.
  • the exclusion area may further include a user-specified area.
  • the user-specified area may be an area designated by the user as an area in the design data-based image where there is significant change depending on the environment. For example, in a design data-based image, the user can designate a part of the car body with a protruding reflective surface, where light reflection is severe, as an area with large change under the lighting environment within the factory. If the feature point search area includes such a user-specified area and image registration is performed on images acquired before and after light reflection occurs using feature points detected in the user-specified area, the accuracy of image registration cannot be guaranteed. By including the user-specified area in the exclusion area, detection of feature points that have a high risk of lowering the accuracy of image registration can be avoided.
  • the reference image generation module 120 may generate a reference image (IMG2) including a masking area that masks the exclusion area determined by the exclusion area determination module 110 in the design data-based image.
  • the masking area can be created by collecting the exclusion areas.
  • the masking area may include only the part area; the part area and the background area; the part area and the user-specified area; or the part area, the background area, and the user-specified area together.
  • the generated reference image (IMG2) may be transmitted to the second image processing device 20.
  • the reference image generation module 120 may display the masking area using a binary value.
  • the reference image IMG2 is an image of a predetermined size, and the masking area and other areas can be distinguished through binary values assigned to each pixel constituting the image.
  • for example, pixels corresponding to the masking area may be assigned a first binary value (e.g., '0'), and pixels corresponding to other areas may be assigned a second binary value (e.g., '1'), but the method of expressing the masking area is not limited to the method described above.
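  • a minimal sketch of such a binary reference image, assuming boolean exclusion masks like the one above (function and argument names are illustrative):

```python
import numpy as np

def build_reference_image(shape, part_mask, background_mask=None, user_mask=None):
    """Collect exclusion areas into a single masking area and encode it
    with binary pixel values: '0' for masked pixels, '1' elsewhere."""
    masking_area = part_mask.copy()
    if background_mask is not None:
        masking_area |= background_mask   # optionally add the background area
    if user_mask is not None:
        masking_area |= user_mask         # optionally add the user-specified area
    reference = np.ones(shape, dtype=np.uint8)  # second binary value '1'
    reference[masking_area] = 0                 # first binary value '0'
    return reference

# Usage with a toy exclusion mask like the previous sketch's output:
part_mask = np.zeros((4, 4), dtype=bool)
part_mask[0:2, 2:4] = True
print(build_reference_image((4, 4), part_mask))
```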
  • the second image processing device 20 receives an acquired image (IMG3) captured by the mobile robot 30, may perform image registration on the acquired image (IMG3) based on the feature point search area defined using the reference image (IMG2), and may include a feature point search area definition module 200, a feature point detection module 210, and an image registration module 220.
  • the feature point search area definition module 200 may define a feature point search area in the acquired image (IMG3) using the reference image (IMG2) received from the reference image generation module 120. Specifically, the feature point search area definition module 200 may overlap the reference image (IMG2) including the masking area on the acquired image (IMG3), exclude from the feature point search area the areas of the acquired image (IMG3) obscured by the masking area, and include in the feature point search area the areas of the acquired image (IMG3) not obscured by the masking area.
  • the reference image IMG2 may be generated to have the same size as the acquired image IMG3 at the time it is created by the reference image generation module 120.
  • the feature point search area definition module 200 can overlap the acquired image (IMG3) and the reference image (IMG2) without a separate resizing operation.
  • in some cases, the sizes of the reference image (IMG2) and the acquired image (IMG3) received by the feature point search area definition module 200 may differ. In this case, the feature point search area definition module 200 may resize the reference image (IMG2) to match the size of the acquired image (IMG3), or conversely resize the acquired image (IMG3) to the size of the reference image (IMG2), and then overlap the two images. Additionally, image cropping, rotation, etc. may be selectively performed if necessary.
  • the reference image (IMG2) generated by the reference image generation module 120 may be stored in a database, and when a new acquired image (IMG3) is generated, the feature point search area definition module 200 may obtain the reference image (IMG2) corresponding to the acquired image (IMG3) from the database. The reference images (IMG2) stored in the database may be appropriately labeled to facilitate retrieval according to the acquired image (IMG3).
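  • the overlap-and-resize step could look like the sketch below, assuming the binary reference-image encoding above; nearest-neighbour interpolation is chosen here so that resizing keeps the mask binary (an assumption, not the patent's stated method):

```python
import cv2
import numpy as np

def define_search_mask(reference_image, acquired_image):
    """Overlap the reference image (IMG2) on the acquired image (IMG3),
    resizing the reference first if the two sizes differ."""
    h, w = acquired_image.shape[:2]
    if reference_image.shape[:2] != (h, w):
        reference_image = cv2.resize(reference_image, (w, h),
                                     interpolation=cv2.INTER_NEAREST)
    # 255 = not obscured by the masking area (searchable), 0 = excluded
    return (reference_image > 0).astype(np.uint8) * 255
```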
  • the feature point detection module 210 may detect a plurality of feature points in the feature point search area defined by the feature point search area definition module 200.
  • the feature points detected in the feature point search area defined by the feature point search area definition module 200 may be feature points that can be used as a standard so that image registration can be performed accurately even if the shooting environment has changed.
  • the feature point detection module 210 may perform brightness and contrast correction through histogram matching on the acquired image (IMG3) within the feature point search area so that lighting similarity increases, and then detect a plurality of feature points. For example, feature point detection performance can be further improved by correcting the histogram values in the feature point search area of the acquired image (IMG3) based on histogram values previously acquired for the feature point search area in relation to the captured image (IMG1) or the reference image (IMG2).
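  • a hedged sketch of this step, using scikit-image's match_histograms for the brightness/contrast correction and OpenCV's ORB as one possible feature detector (the patent does not name a specific detector); the search mask is the 0/255 mask from the previous sketch:

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def detect_feature_points(acquired_gray, stored_reference_gray, search_mask):
    """Correct the acquired image's histogram toward a previously stored
    one, then detect feature points only inside the search mask."""
    matched = match_histograms(acquired_gray, stored_reference_gray)
    matched = np.clip(matched, 0, 255).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(matched, search_mask)
    return keypoints, descriptors
```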
  • the image registration module 220 may perform image registration on the acquired image (IMG3) based on the plurality of feature points detected by the feature point detection module 210. If the image registration result does not exceed the error range of the acquisition equipment (e.g., the mobile robot 30), image registration is completed; if the result exceeds the error range, the feature points are reselected and image registration is performed again. The re-registration operation according to the error range will be described later with reference to FIGS. 10 to 17.
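  • registration itself could be sketched as below, matching descriptors between two acquired images and estimating a RANSAC-filtered homography; the choice of homography and brute-force Hamming matching is an assumption for illustration, not the patent's specified algorithm:

```python
import cv2
import numpy as np

def register_images(moving, fixed, kp1, des1, kp2, des2):
    """Estimate the transform aligning `moving` to `fixed` from matched
    feature points, then warp `moving` accordingly."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # needs >= 4 matches
    warped = cv2.warpPerspective(moving, H, fixed.shape[1::-1])
    return warped, H
```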
  • according to the embodiments described above, by excluding from the feature point search area areas with relatively large changes in the image due to uncontrollable environmental changes such as lighting, or areas where image changes may occur when parts are assembled manually or are not assembled, and by improving the detection rate and accuracy of feature points for image registration between acquired images, the feature point detection performance used in image registration can be improved.
  • in addition, since the exclusion area for the feature point search area of the acquired image can be automatically determined based on design data (e.g., 3D CAD data), the feature point search area can be set effectively and efficiently in a short time.
  • the vision system can be operated effectively.
  • FIG. 2 is a diagram illustrating a design data-based image that can be used in an image processing system according to embodiments.
  • for example, the design data-based pillar image generated from 3D CAD data for a vehicle pillar may be a pillar image rendered to match the shooting environment (e.g., the shooting coordinates, shooting angle, etc. in the coordinate system associated with the mobile robot 30) of a pillar image previously captured by the mobile robot 30.
  • that is, the design data-based pillar image of FIG. 2 can be created by matching (or transforming) into the same coordinate system the shooting coordinates corresponding to the captured pillar image provided from the mobile robot 30 that photographs the pillar, and the design data coordinates based on the 3D CAD data for the pillar.
  • FIGS. 3 to 5 are diagrams for explaining determination of an exclusion area in an image processing system according to embodiments.
  • the design data-based pillar image of FIG. 2 shows a part area (M1) that is separated from the vehicle body area.
  • parts assembled on the pillar, such as cables, may be assembled manually by workers at an undetermined point in the process sequence. If the feature point search area includes the part area (M1) containing the cable, and feature points detected in the part area (M1) are used, the accuracy of image registration cannot be guaranteed. Therefore, the part area (M1) may be included in the exclusion area.
  • a background area (M2) distinguished from the vehicle body area is shown in the design data-based pillar image of FIG. 2.
  • if the background area (M2), in which other parts distinct from the main subject (the pillar), the factory's internal facilities, workers, etc. are photographed, is included in the feature point search area, and image registration of acquired images with different backgrounds is performed using feature points detected in the background area (M2), the accuracy of image registration cannot be guaranteed. Therefore, the background area (M2) may be included in the exclusion area.
  • the design data-based pillar image of FIG. 2 shows a user-specified area (M3) distinguished from the vehicle body area.
  • for example, the user can designate as a user-specified area (M3) a part of the car body where a reflective surface protrudes and light reflection is severe, i.e., an area with large change under the lighting environment within the factory.
  • if the feature point search area includes such a user-specified area (M3), and feature points detected in the user-specified area (M3) are used when image registration is performed between an acquired image with strong light reflection and an acquired image without light reflection, the accuracy of image registration cannot be guaranteed. Therefore, the user-specified area (M3) may be included in the exclusion area.
  • FIG. 6 is a diagram illustrating a reference image that can be used in an image processing system according to embodiments.
  • the reference pillar image may include a masking area (E1) corresponding to the part area (M1) described with reference to FIGS. 3 to 5, masking areas (E2, E3) corresponding to the background area (M2), and a masking area (E4) corresponding to the user-specified area (M3).
  • the reference pillar image is shown as including all of the masking areas (E1, E2, E3, E4), but depending on the specific implementation purpose or environment, it may include only some of the masking areas (E1, E2, E3, E4).
  • FIGS. 7 and 8 are diagrams for explaining a feature point search area defined in an acquired image using a reference image in an image processing system according to embodiments.
  • the illustrated acquired pillar image may be an image captured by the mobile robot 30 during the production process. Since it was captured during the production process, the interior of the factory appears behind the pillar.
  • the reference pillar image including the masking areas (E1, E2, E3, E4) is overlapped on the acquired pillar image; the areas of the acquired pillar image obscured by the masking areas (E1, E2, E3, E4) can be excluded from the feature point search area, and the areas not obscured by the masking areas (E1, E2, E3, E4) can be included in the feature point search area.
  • a plurality of feature points P may be detected in the feature point search area defined in this way.
  • FIG. 9 is a diagram for explaining an image processing method according to an embodiment.
  • referring to FIG. 9, the method can generate a design data-based image by matching the shooting coordinates corresponding to the captured image with the design data coordinates corresponding to the design data.
  • next, the method can distinguish the car body area and the parts area in the design data-based image, and determine a partial area including the parts area as an exclusion area.
  • the method can then generate a reference image including a masking area by masking the exclusion area in the design data-based image in step S905, define a feature point search area in the acquired image using the reference image in step S907, and detect a plurality of feature points in the feature point search area in step S909.
  • FIG. 10 is a diagram for explaining an image processing system according to an embodiment.
  • the image processing system 2 may include a first image processing device 10, a second image processing device 20, and a mobile robot 30.
  • the difference from the image processing system 1 described above with reference to FIG. 1 is that the second image processing device 20 further includes a transformation matrix generation module 230 and an error analysis module 240.
  • the transformation matrix generation module 230 may generate a transformation matrix from a position difference with a previously acquired image while performing image registration.
  • the transformation matrix can be created by calculating, after acquiring an image, the positional deviation from the existing image, and may include information about translation, scale, shear, rotation, and tilt.
  • the transformation matrix can be defined, for example, as a 3 x 3 matrix of the following form (the element layout shown is the standard perspective-transform convention for the components described below):

        [ Sx   Shx  Tx ]
        [ Shy  Sy   Ty ]
        [ E    F    1  ]

  • here, Tx and Ty may represent displacement along the x-axis and y-axis, respectively; Sx and Sy may represent scale factors along the x-axis and y-axis, respectively; Shx and Shy may represent shear factors along the x-axis and y-axis, respectively; and E and F may be factors affecting the vanishing point in relation to tilt.
  • instead of Tx, Ty, Sx, Sy, Shx, Shy, E, and F, the transformation matrix may also be configured to include the values cos(q), sin(q), -sin(q), and cos(q) with respect to a rotation angle q about some center.
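  • reading the components off such a matrix is then a matter of indexing; the small helper below is illustrative, not part of the patent, and follows the layout shown above:

```python
import numpy as np

def matrix_components(H):
    """Extract the named components from a 3x3 transformation matrix laid
    out as [[Sx, Shx, Tx], [Shy, Sy, Ty], [E, F, 1]]."""
    return {"Tx": H[0, 2], "Ty": H[1, 2],     # translation
            "Sx": H[0, 0], "Sy": H[1, 1],     # scale
            "Shx": H[0, 1], "Shy": H[1, 0],   # shear
            "E": H[2, 0], "F": H[2, 1]}       # tilt / vanishing point

H = np.array([[1.02, 0.01, 3.5],
              [0.00, 0.98, -2.1],
              [1e-5, 2e-5, 1.0]])
print(matrix_components(H))
```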
  • the error analysis module 240 may analyze whether component values of the transformation matrix correspond to a predefined error range.
  • the predefined error range may include an error range for each shooting position of the image acquisition equipment, for example, the mobile robot 30.
  • the mobile robot 30 repeatedly performs image acquisition at various shooting positions, and can calculate the error range for each shooting position for each of the various components of the transformation matrix.
  • if the component values of the transformation matrix are outside the predefined error range, the image registration module 220 may reselect the feature points and re-perform image registration.
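  • a sketch of such a per-shooting-position check, with illustrative range values (the patent's example gives only roughly 0.8 to 1.05 for Sx at the third position; everything else here is assumed):

```python
# Predefined error ranges per shooting position, e.g., calculated by
# repeating image acquisition at each position.
ERROR_RANGES = {
    "P20302": {"Sx": (0.8, 1.05), "Sy": (0.8, 1.05), "Tx": (-5.0, 5.0)},
}

def within_error_range(position, components):
    """True if every checked component value of the transformation matrix
    falls inside the predefined error range for this shooting position."""
    return all(lo <= components[name] <= hi
               for name, (lo, hi) in ERROR_RANGES[position].items())
```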
  • FIGS. 11 and 12 are diagrams for explaining the range of image acquisition errors caused by errors in acquisition equipment.
  • the mobile robot 30 can repeatedly perform image acquisition at various photographing positions for the vehicle.
  • for example, the mobile robot 30 can repeatedly acquire images at a first position (P20102), a second position (P20204), a third position (P20302), a fourth position (P20402), a fifth position (P20502), and a sixth position (P20603).
  • for example, the error analysis module 240 analyzes whether the value of the Sx component for the third position (P20302) falls within an error range of about 0.8 to about 1.05; if the Sx component value of the transformation matrix generated during image registration is outside this range, the image registration module 220 can reselect the feature points and re-perform image registration.
  • FIGS. 13 and 14 are diagrams for explaining feature point detection when the transformation matrix is outside the error range.
  • the feature point detection module 210 can detect n feature points (n being a natural number, for example, 10) in the feature point search area shown in FIG. 13, and the image registration module 220 can perform image registration on the acquired image based on m feature points (P1, P2, P3) primarily selected from the n feature points (m being a natural number smaller than n, for example, 3).
  • additionally, the transformation matrix generation module 230 may generate a transformation matrix from the positional deviation from a previously acquired image while performing image registration, and the error analysis module 240 may analyze whether the component values of the transformation matrix fall within a predefined error range.
  • if the component values of the transformation matrix are outside the error range, the image registration module 220 can re-perform image registration by secondarily reselecting m feature points (P4, P5, P6) from the n feature points. Thereafter, the transformation matrix generation module 230 may secondarily generate a transformation matrix from the positional deviation from the previously acquired image, and the error analysis module 240 may again analyze whether its component values fall within the predefined error range.
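  • the reselection loop could be sketched as follows, assuming a callback that runs registration on a candidate subset and reports whether the resulting matrix passed the error-range check (the callback and the subset strategy are assumptions for illustration):

```python
from itertools import combinations

def register_with_reselection(feature_points, try_registration, m=3):
    """Try m-point subsets of the detected feature points (primary
    selection first, then secondary reselections) until one yields a
    transformation matrix inside the error range."""
    for subset in combinations(feature_points, m):
        matrix, ok = try_registration(subset)  # assumed callback
        if ok:
            return matrix                      # registration completed
    return None  # no subset passed; fall back to changing the search area
```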
  • FIG. 15 is a diagram for explaining an image processing method according to an embodiment.
  • referring to FIG. 15, in step S1501 a feature point search area is defined in the acquired image using a reference image, and in step S1503 a plurality of feature points including a first feature point can be detected in the feature point search area.
  • the method may perform image registration on the acquired image based on the plurality of feature points including the first feature point in step S1505, and, while performing image registration, generate a transformation matrix in step S1507 from the positional deviation from a previously acquired image.
  • in step S1509, the method can determine whether the component values of the transformation matrix correspond to a predefined error range.
  • if the component values of the transformation matrix are determined to fall within the predefined error range, image registration can be successfully completed; if they are determined to be outside the predefined error range, the method proceeds to step S1503 and may reselect a plurality of feature points including a second feature point different from the first feature point. Thereafter, in step S1505, image registration may be performed on the acquired image based on the plurality of feature points including the second feature point.
  • FIG. 16 is a diagram for explaining feature point detection when the transformation matrix is outside the error range.
  • the feature point detection module 210 can detect a plurality of feature points in the feature point search area shown in FIG. 16, and the image registration module 220 can perform image registration on the acquired image based on the plurality of feature points. Additionally, the transformation matrix generation module 230 may generate a transformation matrix from the positional deviation from a previously acquired image while performing image registration, and the error analysis module 240 may analyze whether the component values of the transformation matrix fall within a predefined error range.
  • if the component values of the transformation matrix are outside the error range, the feature point search area definition module 200 can change the feature point search area into one containing a new boundary line (B-B') by increasing or decreasing at least some of the boundary lines (A-A') of the feature point search area in pixel units. Thereafter, the feature point detection module 210 may detect a new feature point in the changed feature point search area, and the image registration module 220 may perform image registration on the acquired image based on the new feature point.
  • FIG. 17 is a diagram for explaining an image processing method according to an embodiment.
  • referring to FIG. 17, the method may define a feature point search area including a first boundary line in the acquired image using a reference image, and detect a plurality of feature points in the feature point search area in step S1703.
  • the method may perform image registration on the acquired image based on the plurality of feature points in step S1705, and, while performing image registration, generate a transformation matrix in step S1707 from the positional deviation from a previously acquired image.
  • the method can determine whether the component values of the transformation matrix correspond to a predefined error range.
  • if the component values of the transformation matrix are outside the predefined error range, a feature point search area including a new second boundary line, obtained by increasing or decreasing the first boundary line in pixel units, may be defined.
  • then, in steps S1703 and S1705, a new feature point may be detected in the changed feature point search area, and image registration on the acquired image may be performed based on the new feature point.
  • for example, the feature point detection module 210 may detect n feature points (n being a natural number, for example, 10) in the feature point search area including the first boundary line, and the image registration module 220 may perform image registration on the acquired image based on m feature points primarily selected from the n feature points (m being a natural number smaller than n, for example, 3).
  • the transformation matrix generation module 230 may generate a transformation matrix from the positional deviation from a previously acquired image while performing image registration, and the error analysis module 240 may analyze whether the component values of the transformation matrix fall within a predefined error range.
  • if the component values of the transformation matrix are outside the error range, the image registration module 220 may re-perform image registration by secondarily reselecting m feature points from the n feature points.
  • at this time, the feature point search area definition module 200 may define a feature point search area including a new second boundary line obtained by increasing or decreasing the first boundary line in pixel units. Thereafter, in the changed feature point search area, the process of detecting n feature points again and performing image registration on the acquired image based on m feature points primarily selected from the n feature points may be repeated.
  • for example, the first boundary line may be set to increase or decrease by a predetermined pixel value (for example, 5 pixels), and when the increase or decrease reaches the predetermined value, pixel-by-pixel increase or decrease may be performed on a boundary line of the feature point search area different from the first boundary line.
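  • a sketch of this boundary-adjustment schedule, simplifying the search area to an axis-aligned rectangle (x0, y0, x1, y1) and moving one boundary line at a time by the predetermined pixel value (the rectangle simplification and names are assumptions):

```python
def boundary_candidates(region, step=5, max_shift=15):
    """Yield changed search regions: shift one boundary line in pixel
    units, switching to the next boundary once the predetermined value
    (step) has been applied to the current one."""
    x0, y0, x1, y1 = region
    for shift in range(step, max_shift + 1, step):
        yield (x0 - shift, y0, x1, y1)   # move the left boundary outward
        yield (x0, y0, x1 + shift, y1)   # then the right boundary
        yield (x0, y0 - shift, x1, y1)   # then the top boundary
        yield (x0, y0, x1, y1 + shift)   # then the bottom boundary

# Usage: retry detection/registration with each candidate region until the
# transformation matrix falls inside the error range.
for candidate in boundary_candidates((100, 50, 400, 300)):
    print(candidate)
```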
  • according to the embodiments described above, by excluding from the feature point search area areas with relatively large changes in the image due to uncontrollable environmental changes such as lighting, or areas where image changes may occur when parts are assembled manually or are not assembled, and by improving the detection rate and accuracy of feature points for image registration between acquired images, the feature point detection performance used in image registration can be improved.
  • in addition, since the exclusion area for the feature point search area of the acquired image can be automatically determined based on design data (e.g., 3D CAD data), the feature point search area can be set effectively and efficiently in a short time.
  • the vision system can be operated effectively.
  • FIG. 18 is a diagram for explaining an example of a computing device for implementing an image processing device, an image processing system, and an image processing method according to example embodiments.
  • an image processing device, an image processing system, and an image processing method according to embodiments may be implemented using a computing device 500.
  • the computing device 500 may include at least one of a processor 510, a memory 530, a user interface input device 540, a user interface output device 550, and a storage device 560 that communicate over a bus 520.
  • the computing device 500 may also include a network interface 570 that is electrically connected to the network 40.
  • the network interface 570 can transmit or receive signals to and from other entities through the network 40.
  • the processor 510 may be implemented in various types, such as a micro controller unit (MCU), an application processor (AP), a central processing unit (CPU), a graphics processing unit (GPU), or a neural processing unit (NPU), or may be any semiconductor device that executes instructions stored in the memory 530 or the storage device 560. The processor 510 may be configured to implement the functions and methods described above with respect to FIGS. 1 to 17.
  • Memory 530 and storage device 560 may include various types of volatile or non-volatile storage media.
  • the memory may include read-only memory (ROM) 531 and random access memory (RAM) 532.
  • the memory 530 may be located inside or outside the processor 510, and the memory 530 may be connected to the processor 510 through various known means.
  • At least some components or functions of the image processing device, image processing system, and image processing method according to the embodiments may be implemented as a program or software running on the computing device 500, and the program or software may be stored in a computer-readable medium.
  • At least some components or functions of the image processing device, image processing system, and image processing method according to the embodiments may be implemented as hardware that can be electrically connected to the computing device 500.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an image processing device, an image processing system, and an image processing method. The image processing device may comprise: an exclusion area determination module that distinguishes a car body area and a parts area in a design data-based image and determines a partial area including the parts area as an exclusion area; a reference image generation module that generates a reference image including a masking area that masks the exclusion area in the design data-based image; a feature point search area definition module that defines a feature point search area in an acquired image using the reference image; and a feature point detection module that detects a plurality of feature points in the feature point search area.
PCT/KR2022/020943 2022-10-04 2022-12-21 Image processing device, image processing system and image processing method WO2024075906A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220126505A KR20240047180A (ko) Image processing device, image processing system and image processing method
KR10-2022-0126505 2022-10-04

Publications (1)

Publication Number Publication Date
WO2024075906A1 true WO2024075906A1 (fr) 2024-04-11

Family ID=90608524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/020943 WO2024075906A1 (fr) 2022-10-04 2022-12-21 Image processing device, image processing system and image processing method

Country Status (2)

Country Link
KR (1) KR20240047180A (fr)
WO (1) WO2024075906A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101630596B1 * 2016-02-12 2016-06-14 이정희 Apparatus for photographing the lower part of a vehicle and method of operating the same
KR20190019403A * 2017-08-17 2019-02-27 현대자동차주식회사 Vehicle radar inspection system and method thereof
KR20200127751A * 2019-05-03 2020-11-11 현대자동차주식회사 Sealing robot teaching system and method thereof
KR20210018397A * 2021-02-05 2021-02-17 한국과학기술연구원 Three-dimensional automatic scanning system and method
KR20210120229A * 2020-03-26 2021-10-07 현대자동차주식회사 Image-based jig inspection system and method thereof


Also Published As

Publication number Publication date
KR20240047180A (ko) 2024-04-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22961548

Country of ref document: EP

Kind code of ref document: A1