CN109215133B - Simulation image library construction method for visual alignment algorithm screening - Google Patents
- Publication number
- CN109215133B (application CN201810958055.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- background image
- characteristic
- feature
- alignment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2008—Assembling, disassembling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Landscapes
- Engineering & Computer Science (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a method for constructing a simulated image library used to screen visual alignment algorithms, comprising the following steps: selecting a feature shape for alignment; selecting a background image; setting the position of the feature shape in the background image, i.e., the position of the feature shape in image one of a simulated image pair; setting the displacement and rotation angle of the feature shape in image two of the pair relative to its position in image one; and processing the feature shape with an alignment-feature processing algorithm to generate image one and image two of the pair. A library built this way contains a large number of image pairs covering the placement positions at which a feature shape may appear, as well as image pairs exhibiting possible distortion. Using the library, a feature recognition algorithm can be improved: its flaws and deficiencies can be found, its matching precision and robustness raised, and its debugging time in field applications shortened.
Description
Technical Field
The invention relates to the technical field of visual alignment, and in particular to a method for constructing a simulated image library for screening visual alignment algorithms.
Background
Alignment is the term used for the precision-assembly stage of device manufacturing in modern industrial production; a typical application is the mounting of flexible or rigid components, as exemplified by mobile-phone production. Concretely, object A at position 1 and object B at position 2 must be assembled together, and during assembly the horizontal position or rotation of object A or object B must be adjusted. A key prerequisite of the alignment function is obtaining the accurate positions of objects A and B. To achieve accurate position adjustment, a visual alignment system photographs the objects during operation and guides the whole alignment process.
In visual alignment, the most critical factor is whether the feature shapes in the images captured by the vision system can be accurately located and matched: the feature shapes must first be correctly recognized among the various shape elements in the captured image, and only then can alignment matching be performed. Environmental changes affect the captured image — for example, if the light source used by the alignment station breaks or its illumination becomes uneven, the distribution of color patches in the captured image is likely to change. The machine station served by the vision system also constrains camera installation: the mounting environment is limited, the camera cannot be guaranteed an ideal mounting angle directly facing the photographed object, and an oblique angle distorts the captured image. Likewise, because installation space on the station is limited, cameras may have to be mounted outside the station and photograph the object indirectly through mirrors or a multi-stage optical path, which again distorts the image. Guaranteeing that the feature recognition algorithm still recognizes feature shapes accurately and achieves alignment under as many of these external changes and complex scenarios as possible — or else exposing the algorithm's deficiencies so that it can be improved, raising its matching precision of feature shapes in the image and enabling higher-precision alignment — is an important technical problem to be solved.
A feature recognition algorithm should still accurately recognize the feature shapes to be aligned and matched under lighting changes, color changes, and complex background textures, and should achieve accurate alignment against as many background images as may occur. The accuracy of a visual alignment system can be verified either by collecting images on an actual machine or by simulated images. Verification on an actual machine matches real conditions, but its drawbacks are obvious: it must be carried out on the production floor, joint debugging takes a long time, the supporting hardware is expensive, and operational limits make it hard to verify how the alignment software behaves under abnormal conditions such as machine failure. Simulated-image verification is flexible and can also cover software behavior under abnormal conditions. In practice, simulated images are generally used first to confirm that the alignment software behaves correctly, and only software that passes simulated-image verification is then jointly commissioned with the actual machine.
Disclosure of Invention
To address the problems in the prior art, the invention provides a method for constructing a simulated image library for screening visual alignment algorithms.
The technical solution of the invention is a method for constructing a simulated image library for screening visual alignment algorithms, comprising the following steps:
a. selecting a feature shape for alignment;
b. selecting a background image, the background image being one from which the feature shape has been removed;
c. setting the position of the feature shape in the background image, i.e., the position of the feature shape in image one of the simulated image pair;
d. setting the displacement and rotation angle of the feature shape in image two of the simulated image pair relative to its position in image one;
e. processing the feature shape with an alignment-feature processing algorithm to generate image one and image two of the simulated image pair;
f. changing the position of the feature shape in the background image set in step c and repeating steps d-e to generate a new simulated image pair, until all positions of the feature shape in the background image have been covered;
g. replacing the alignment feature shape selected in step a and repeating steps b-f until all feature shapes have been used, thereby obtaining the simulated image pairs generated by the different feature shapes at all positions in the background image.
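The enumeration in steps a-g amounts to three nested loops over feature shapes, positions, and offsets. A minimal Python sketch follows; the shape names, positions, and offsets are illustrative placeholders, and `ImagePairRecord` is a hypothetical ground-truth record — the patent does not prescribe any particular implementation:

```python
from dataclasses import dataclass

# Hypothetical ground-truth record stored alongside each simulated image pair.
@dataclass
class ImagePairRecord:
    shape: str    # feature shape identifier
    x: int        # position of the feature in image one
    y: int
    dx: int       # ground-truth translation of image two vs. image one
    dy: int
    dT: float     # ground-truth rotation angle (degrees)

def build_library(shapes, positions, offsets):
    """Enumerate steps a-g: every feature shape at every position,
    each paired with a second image displaced by (dx, dy, dT)."""
    library = []
    for shape in shapes:                      # step g: iterate feature shapes
        for (x, y) in positions:              # steps c/f: iterate positions
            for (dx, dy, dT) in offsets:      # step d: displacement + rotation
                # step e would render image one and image two here
                library.append(ImagePairRecord(shape, x, y, dx, dy, dT))
    return library

pairs = build_library(["circle", "cross"], [(0, 0), (10, 20)],
                      [(5, 0, 0.0), (0, 5, 1.5)])
```

Each record keeps the preset alignment data that later serves as ground truth when screening an algorithm against the library.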
Preferably, the background images include: a background image extracted from a picture actually captured by the visual alignment system during production, and a background image generated according to a specified color distribution.
Preferably, the method further includes replacing the background image of step b and repeating steps c-g until all background images have been used, thereby obtaining the simulated image pairs generated by the different feature shapes at all positions in the different background images.
Preferably, image distortion processing is performed before step e: different distortion coefficients are set for different positions of the background image according to the background image and a checkerboard. For image one and image two, if the feature-shape position is to be distorted, the distortion-coefficient sub-matrix for that position is invoked to distort the feature shape; if the position is not distorted, no distortion processing is performed.
Preferably, processing the feature shape with the alignment-feature processing algorithm in step e includes contour processing and background filling; the contour processing includes determining the contour width and the edges, and the background filling may use either an actual background image or a generated background image.
Preferably, the displacement of the feature shape is a translation distance in either the horizontal or the vertical direction.
Preferably, the displacement of the feature shape includes both a horizontal translation distance and a vertical translation distance.
Preferably, the characteristic shape includes one or more of a polygon, a corner, or a line.
The beneficial effects of the invention are as follows. The method constructs a simulated image library containing a large number of image pairs covering the placement positions at which a feature shape may appear and image pairs exhibiting possible distortion, together with the alignment data set during pair generation: the feature shape, its translation distance, and its rotation angle. The library can be used to improve a feature recognition algorithm — its flaws and deficiencies can be found and corrected, its matching precision raised, its robustness improved, and its debugging time in field applications shortened. If the algorithm's matching of the feature shapes in the library's image pairs meets the required practical precision, then when the algorithm is deployed in a visual alignment system and encounters, during real alignment, the same feature-shape placements or distortion conditions as in the simulated pairs, accurate alignment can likewise be achieved; the algorithm therefore has stronger adaptability.
Description of the drawings:
fig. 1 is a schematic flow chart of a method for generating a simulated image pair according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for constructing a simulated image library for screening a visual alignment algorithm according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for generating a simulated image pair including distortion processing according to an embodiment of the present invention;
FIG. 4 is a schematic view of one feature shape according to an embodiment of the present invention;
FIG. 5 is a schematic view of another feature shape according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-5, the present invention provides the following embodiments:
the method for constructing a simulated image library for screening visual alignment algorithms in this embodiment comprises the following steps:
a. selecting a feature shape for alignment;
b. selecting a background image, the background image being one from which the feature shape has been removed;
c. setting the position of the feature shape in the background image, i.e., the position of the feature shape in image one of the simulated image pair;
d. setting the displacement and rotation angle of the feature shape in image two of the simulated image pair relative to its position in image one;
e. processing the feature shape with an alignment-feature processing algorithm to generate image one and image two of the simulated image pair;
f. changing the position of the feature shape in the background image set in step c and repeating steps d-e to generate a new simulated image pair, until all positions of the feature shape in the background image have been covered;
g. replacing the alignment feature shape selected in step a and repeating steps b-f until all feature shapes have been used, thereby obtaining the simulated image pairs generated by the different feature shapes at all positions in the background image.
When developing visual alignment software, relying entirely on joint debugging with a hardware machine for every algorithm is costly and inefficient, so the various feature recognition algorithms and their parameters are first screened and optimized on simulated images, and the optimized algorithms and parameters are then applied to the visual alignment system. For the feature recognition algorithm to accurately recognize the feature shapes to be aligned and matched under lighting changes, color changes, and complex background textures, it must align accurately against as many background images as may occur. In this embodiment, therefore, simulated image pairs are generated for all positions of the different feature shapes in the background image; concretely, these positions represent the arbitrary placements at which the feature to be aligned may appear when a workpiece is loaded. If the algorithm can accurately recognize as many of these placements as possible, it adapts better to accurate alignment, i.e., it matches the feature shapes in the image with higher precision. The library is constructed as follows. Taking the same base point as the coordinate origin, a feature shape for alignment is first selected, and then a background image from which the feature shape has been removed; the position of the feature in the background image is then set, i.e.
the position (x1, y1, T1) of the feature shape in image one of the simulated image pair, where x, y, and T denote the horizontal position, the vertical position, and the spatial angle of the feature shape, respectively — physically, the pose of a workpiece to be aligned on the alignment platform. The position (x1+dx, y1+dy, T1+dT) of the feature shape in image two is then set via the horizontal translation distance dx, the vertical translation distance dy, and the rotation angle dT of the feature in image two relative to image one, i.e., a position at which the feature may appear. The feature shape is processed with the alignment-feature processing algorithm to generate image one and image two of the simulated pair. The position of the feature shape in the background image is changed repeatedly and a new simulated pair generated each time, until all positions in the background image have been covered; new alignment feature shapes are then selected in turn until all feature shapes have been used, yielding the simulated image pairs generated by the different feature shapes at all positions in the background image. A simulated image library built from a large number of such pairs thus covers the arbitrary placements at which the feature shapes may appear, and records the alignment data set during pair generation: the feature shape, its translation distance, and its rotation angle.
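The pose bookkeeping above can be written in a few lines of Python. This is only a sketch of the coordinate arithmetic — the pose addition and a rotation of a contour point about the feature centre — not the patent's alignment-feature processing algorithm:

```python
import math

def second_image_pose(x1, y1, T1, dx, dy, dT):
    """Ground-truth pose of the feature in image two:
    the image-one pose plus the preset offsets (dx, dy, dT)."""
    return (x1 + dx, y1 + dy, T1 + dT)

def rotate_about(px, py, cx, cy, angle_deg):
    """Rotate a contour point (px, py) about the feature centre (cx, cy)
    by angle_deg degrees (the rotation component dT of the displacement)."""
    a = math.radians(angle_deg)
    return (cx + (px - cx) * math.cos(a) - (py - cy) * math.sin(a),
            cy + (px - cx) * math.sin(a) + (py - cy) * math.cos(a))
```

For example, a feature at (10, 20) with angle 0 and offsets dx=3, dy=4, dT=5 appears at (13, 24) with angle 5 in image two.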
When the feature recognition algorithm is used to match an image pair from the simulated library, the matching result is compared with the pair's alignment data (for example, the horizontal translation distance computed by the algorithm is compared with the horizontal translation distance preset when the pair was generated). If the result does not fall within the required practical precision range, the algorithm's parameters can be adjusted until it does; in this way the algorithm's flaws and deficiencies are found and corrected, its matching precision raised, its robustness improved, and its debugging time in field applications shortened. If the algorithm's matching of the feature shapes in the library's image pairs meets the required practical precision, then when the algorithm is deployed in a visual alignment system and encounters, during real alignment, the same feature-shape placements as in the simulated pairs, accurate alignment can likewise be achieved, and the algorithm has stronger adaptability.
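The screening comparison described above — the algorithm's estimate versus the preset alignment data — can be sketched as follows; the tolerance values are illustrative assumptions, not figures from the patent:

```python
def within_tolerance(measured, truth, tol):
    """Compare an algorithm's (dx, dy, dT) estimate against the library's
    preset alignment data; all three residuals must fall inside `tol`."""
    return all(abs(m - t) <= e for m, t, e in zip(measured, truth, tol))

def screen_algorithm(results, ground_truth, tol=(0.5, 0.5, 0.1)):
    """Fraction of simulated image pairs the algorithm matches within
    tolerance — a simple pass rate over the library."""
    hits = sum(within_tolerance(m, t, tol)
               for m, t in zip(results, ground_truth))
    return hits / len(results)
```

An algorithm whose pass rate falls short would have its parameters adjusted and be re-screened against the same library.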
In an embodiment, the background images include background images extracted from pictures actually captured by the visual alignment system during production, and background images generated according to a specified color distribution. Environmental changes can affect the captured image — for example, if the light source used for alignment breaks or its illumination becomes uneven, the distribution of color patches in the captured image may change and hinder accurate alignment. To address this, the background images selected when generating simulated pairs comprise both images extracted from actual captures and images generated according to a specified color distribution; generating varied backgrounds (for example, backgrounds with shadows or uneven color patches) ensures that a feature recognition algorithm deployed in the visual alignment system still achieves alignment when it encounters such scenes.
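A background "generated according to a specified color distribution" could, for instance, be a grayscale gradient that simulates uneven illumination. A minimal sketch, where the gray levels and image size are arbitrary assumptions:

```python
def gradient_background(w, h, left_gray=80, right_gray=180):
    """Generate a w x h grayscale background whose brightness ramps
    horizontally, simulating a broken or unevenly distributed light source."""
    return [[left_gray + (right_gray - left_gray) * c // (w - 1)
             for c in range(w)] for r in range(h)]

bg = gradient_background(5, 2)   # a tiny 5x2 example grid
```

Other specified distributions (shadow patches, uneven color blocks) would follow the same pattern: compute each pixel from the chosen distribution rather than from a captured image.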
In a preferred embodiment, the method further includes replacing the background image of step b and repeating steps c-g until all background images have been used. With this scheme, the simulated image pairs generated by the different feature shapes at all positions in the different background images are obtained.
In a preferred embodiment, image distortion processing is performed before step e: different distortion coefficients are set for different positions of the background image according to the background image and a checkerboard. For image one and image two, if the feature-shape position is to be distorted, the distortion-coefficient sub-matrix for that position is invoked to distort the feature shape; otherwise no distortion processing is performed. In actual machine alignment, the mounting environment available to the camera is limited: the camera cannot be guaranteed an ideal mounting angle directly facing the photographed object, and an oblique angle usually distorts the captured image. Likewise, when the machine cannot provide enough installation space and the camera is mounted outside the machine, photographing the workpiece to be aligned indirectly through mirrors or a multi-stage optical path also distorts the image. If the workpiece is a flexible part, curling at its peripheral edge further distorts the captured image. Since a distorted image impairs accurate alignment of the workpiece, applying distortion processing to the alignment feature is important; this embodiment therefore adds an image distortion step before step e.
Different distortion coefficients (the proportionality coefficients between actual physical points in physical space and image pixels) are set at different positions of the background image according to the background image and the checkerboard. For image one and image two, the feature shape can be distorted so that its position exhibits distortion, i.e., the distortion-coefficient sub-matrix for that position is invoked to distort the feature shape. If the feature recognition algorithm also meets the required practical precision when matching the distorted image pairs in the library, then when it is deployed in a visual alignment system and encounters, during real alignment, the same distortion conditions as in the simulated pairs, accurate alignment can likewise be achieved, giving the algorithm stronger adaptability.
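One way to realize the per-position distortion-coefficient sub-matrix is to key coefficients by checkerboard cell and look up the cell containing the feature. The grid values, cell size, and the simple scale-only distortion below are all hypothetical simplifications of what a checkerboard calibration would produce:

```python
def region_index(x, y, cell_w, cell_h):
    """Map a feature position to the checkerboard cell whose distortion
    coefficients apply there."""
    return (x // cell_w, y // cell_h)

def distort_points(points, coeff):
    """Apply a per-region scale coefficient to contour points; `coeff`
    stands in for the cell's calibrated proportionality factor."""
    return [(px * coeff, py * coeff) for (px, py) in points]

# Hypothetical coefficient grid: one factor per checkerboard cell;
# 1.0 means the cell exhibits no distortion.
coeff_grid = {(0, 0): 1.00, (1, 0): 1.02, (0, 1): 0.98}

def distort_feature(points, x, y, cell_w=100, cell_h=100):
    """Distort the feature only if its position falls in a distorted cell."""
    coeff = coeff_grid.get(region_index(x, y, cell_w, cell_h), 1.0)
    return points if coeff == 1.0 else distort_points(points, coeff)
```

A real system would store a full sub-matrix per cell (e.g. radial and tangential terms from checkerboard calibration) rather than a single scalar.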
In the preferred embodiment, processing the feature shape with the alignment-feature processing algorithm in step e includes contour processing and background filling; the contour processing includes determining the contour width and the edges, and the background filling may use an actual background image or a generated one. When the feature shape is placed into the background image, contour processing is required — for example, determining the contour width, the edges, and the transition between the edges and the surrounding background. After contour processing, the background must be filled, using either an actual background image or a generated background image.
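The background-filling step — compositing the processed feature onto the chosen background — can be sketched with a boolean mask; the pixel values, mask, and grid representation are illustrative only:

```python
def paste_feature(background, mask, top, left, value=255):
    """Write the feature's masked pixels into a copy of the background at
    offset (top, left); unmasked pixels keep the background fill, whether
    the background came from an actual capture or was generated."""
    out = [row[:] for row in background]   # leave the original untouched
    for r, mrow in enumerate(mask):
        for c, m in enumerate(mrow):
            if m:
                out[top + r][left + c] = value
    return out
```

In a full implementation, the mask would come from the contour-processing step (contour width and edge determination), possibly with soft edges to model the transition into the surrounding background.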
In a preferred embodiment, the displacement of the feature shape is a translation distance in either the horizontal or the vertical direction; that is, the feature shape in image two is translated in one of the two directions relative to image one, simulating more of the placements at which the feature may appear.
In a preferred embodiment, the displacement of the feature shape includes both a horizontal and a vertical translation distance; that is, the feature shape in image two is translated in both directions relative to image one, again simulating more possible placements.
In a preferred embodiment, the feature shape includes one or more of a polygon, a corner, or a line. Specifically, the feature shape may be a circle, a square, a cross, or the like; a line segment, a polyline, or the like; or a right angle, a rounded corner, or the like, selected according to the actual alignment conditions when the alignment feature is determined.
In the description of the embodiments of the present invention, it should be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "center", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only to describe the invention and simplify the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus are not to be construed as limiting the invention. "Inside" refers to an interior or enclosed area or space; "periphery" refers to the area around a particular component or region.
In the description of the embodiments of the present invention, the terms "first", "second", "third", and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", "third", "fourth" may explicitly or implicitly include one or more of the features. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "assembled" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In the description of the embodiments of the invention, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the embodiments of the present invention, it should be understood that "A-B" and "A to B" both indicate the range between the two numerical values, endpoints included, i.e., the range greater than or equal to A and less than or equal to B.
In the description of the embodiments of the present invention, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (6)
1. A method for constructing a simulated image library for screening visual alignment algorithms, characterized by comprising the following steps:
a. selecting a feature shape for alignment;
b. selecting a background image, the background image being one from which the feature shape has been removed;
c. setting the position of the feature shape in the background image, i.e., the position of the feature shape in image one of the simulated image pair;
d. setting the displacement and rotation angle of the feature shape in image two of the simulated image pair relative to its position in image one;
e. processing the feature shape with an alignment-feature processing algorithm to generate image one and image two of the simulated image pair;
f. changing the position of the feature shape in the background image set in step c and repeating steps d-e to generate a new simulated image pair, until all positions of the feature shape in the background image have been covered;
g. replacing the alignment feature shape selected in step a and repeating steps b-f until all feature shapes have been used, to obtain the simulated image pairs generated by the different feature shapes at all positions in the background image;
wherein, before step e, image distortion processing is performed: different distortion coefficients are set for different positions of the background image according to the background image and a checkerboard; for image one and image two, if the feature-shape position is to be distorted, the distortion-coefficient sub-matrix for that position is invoked to distort the feature shape, and if it is not distorted, no distortion processing is performed;
and wherein processing the feature shape with the alignment-feature processing algorithm in step e includes contour processing and background filling, the contour processing including determination of the contour width and the edges, and the background filling using either an actual background image or a generated background image.
2. The method of claim 1, wherein the background image comprises: the background image is extracted from the actual shot picture of the visual alignment system in the actual production process, and the background image is generated according to the specified color distribution.
3. The method of claim 2, further comprising replacing the background image in step b and repeating steps c-g until all background images have been used, obtaining the simulated image pairs generated by the different feature shapes at all positions in the different background images.
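Claims 1-3 together describe a triple enumeration: over background images (claim 3), over feature shapes (step g), and over positions in each background (step f). A hedged Python sketch of that enumeration, with the actual rendering of steps d-e elided; the function name, the string placeholders, and the parameter records are assumptions, not the patent's data model:

```python
from itertools import product

def enumerate_pairs(backgrounds, shapes, positions, displacement, rotation_deg):
    """Enumerate the parameter set of every simulated image pair:
    outer loop over backgrounds (claim 3), then feature shapes
    (step g), then positions in the background image (step f).
    Displacement and rotation (step d) are held fixed per pair."""
    library = []
    for bg, shape, pos in product(backgrounds, shapes, positions):
        library.append({
            "background": bg,
            "shape": shape,
            "position": pos,              # placement in image I (step c)
            "displacement": displacement, # image II offset (step d)
            "rotation_deg": rotation_deg,
        })
    return library

pairs = enumerate_pairs(
    backgrounds=["captured_bg", "generated_bg"],  # claim 2 sources
    shapes=["polygon", "corner", "line"],         # claim 6 shapes
    positions=[(100, 100), (400, 300)],
    displacement=(5, -3),
    rotation_deg=1.5,
)
```

With 2 backgrounds, 3 shapes, and 2 positions this yields 12 parameter records, one per simulated image pair to be rendered.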
4. The method of claim 1, wherein the feature shape displacement is a translation distance in either the horizontal or the vertical direction.
5. The method of claim 1, wherein the feature shape displacement comprises both a horizontal translation distance and a vertical translation distance.
6. The method of claim 1, wherein the feature shapes comprise one or more of polygons, corners, or lines.
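The step-d displacement and rotation of claims 4-5 amount to a planar rigid transform of the feature's coordinates between image I and image II. A small sketch; the function name and the NumPy point-array representation are assumptions:

```python
import numpy as np

def transform_feature(points, dx, dy, angle_deg):
    """Map a feature's points from image I into image II: rotate by
    angle_deg about the origin, then translate by (dx, dy) -- the
    horizontal and/or vertical displacements of claims 4 and 5."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T + np.array([dx, dy])

# Rotate the point (1, 0) by 90 degrees, then shift by (2, 3).
moved = transform_feature(np.array([[1.0, 0.0]]), dx=2.0, dy=3.0, angle_deg=90.0)
```

Setting `dy=0` and `angle_deg=0` gives the purely horizontal displacement of claim 4; nonzero `dx` and `dy` together give the combined displacement of claim 5.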
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810958055.1A CN109215133B (en) | 2018-08-22 | 2018-08-22 | Simulation image library construction method for visual alignment algorithm screening |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810958055.1A CN109215133B (en) | 2018-08-22 | 2018-08-22 | Simulation image library construction method for visual alignment algorithm screening |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109215133A CN109215133A (en) | 2019-01-15 |
CN109215133B true CN109215133B (en) | 2020-07-07 |
Family
ID=64988854
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810958055.1A Active CN109215133B (en) | 2018-08-22 | 2018-08-22 | Simulation image library construction method for visual alignment algorithm screening |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109215133B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112508997B (en) * | 2020-11-06 | 2022-05-24 | 霸州嘉明扬科技有限公司 | System and method for screening visual alignment algorithm and optimizing parameters of aerial images |
CN112785648B (en) * | 2021-04-12 | 2021-07-06 | 成都新西旺自动化科技有限公司 | Visual alignment method, device and equipment based on to-be-imaged area and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8269836B2 (en) * | 2008-07-24 | 2012-09-18 | Seiko Epson Corporation | Image capture, alignment, and registration |
CN103198035B (en) * | 2013-02-28 | 2016-07-20 | 北京优纳科技有限公司 | alignment method and alignment system |
JP6344890B2 (en) * | 2013-05-22 | 2018-06-20 | 川崎重工業株式会社 | Component assembly work support system and component assembly method |
CN104281156A (en) * | 2013-07-05 | 2015-01-14 | 鸿富锦精密工业(深圳)有限公司 | Visual alignment device, system and method |
TWI495886B (en) * | 2014-01-06 | 2015-08-11 | Wistron Corp | Automatic alignment system and method |
CN105702169B (en) * | 2016-02-17 | 2019-03-15 | 京东方科技集团股份有限公司 | Alignment system and alignment method |
CN106054543B (en) * | 2016-08-17 | 2018-09-04 | 京东方科技集团股份有限公司 | alignment method and alignment system |
CN107657649A (en) * | 2017-09-13 | 2018-02-02 | 成都尤维克科技有限公司 | A kind of construction method of Machine Vision Detection image library |
2018
- 2018-08-22: CN application CN201810958055.1A filed; granted as CN109215133B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109215133A (en) | 2019-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7291244B2 (en) | Projector Keystone Correction Method, Apparatus, System and Readable Storage Medium | |
CA3019797C (en) | Camera calibration system | |
WO2017092631A1 (en) | Image distortion correction method for fisheye image, and calibration method for fisheye camera | |
CN110099267B (en) | Trapezoidal correction system, method and projector | |
US20190166339A1 (en) | Camera-assisted arbitrary surface characterization and slope-based correction | |
CN107238996B (en) | Projection system and correction method of projection picture | |
CN108846443B (en) | Visual alignment algorithm screening and parameter optimization method based on massive images | |
US20070273795A1 (en) | Alignment optimization in image display systems employing multi-camera image acquisition | |
CN108629810B (en) | Calibration method and device of binocular camera and terminal | |
CN113920205B (en) | Calibration method of non-coaxial camera | |
TWI647443B (en) | Break analysis apparatus and method | |
CN115830103A (en) | Monocular color-based transparent object positioning method and device and storage medium | |
CN109215133B (en) | Simulation image library construction method for visual alignment algorithm screening | |
CN113920206A (en) | Calibration method of perspective tilt-shift camera | |
CN109146865B (en) | Visual alignment detection graph source generation system | |
CN117576219A (en) | Camera calibration equipment and calibration method for single shot image of large wide-angle fish-eye lens | |
CN113793266A (en) | Multi-view machine vision image splicing method, system and storage medium | |
CN111383264A (en) | Positioning method, positioning device, terminal and computer storage medium | |
EP3606061B1 (en) | Projection device, projection system and an image calibration method | |
CN111131801A (en) | Projector correction system and method and projector | |
CN115239816A (en) | Camera calibration method, system, electronic device and storage medium | |
KR20190110311A (en) | Apparatus for calibrating for around view camera and method thereof | |
CN109612581B (en) | Diffuse reflection imaging strong laser parameter measuring device with camera protection function | |
CN114299164A (en) | Camera calibration method, storage medium and electronic device | |
CN106644393B (en) | The scaling method of remote burnt structured light measurement system based on plane mirror and scaling board |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||