CN113918745A - Splicing and lapping toy automatic guidance method and system based on machine vision


Info

Publication number
CN113918745A
Authority
CN
China
Prior art keywords
image
dimensional
splicing
data
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111144515.5A
Other languages
Chinese (zh)
Inventor
Wang Maolin (王茂林)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kim Dai Intelligence Innovation Technology Co., Ltd.
Original Assignee
Shenzhen Kim Dai Intelligence Innovation Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kim Dai Intelligence Innovation Technology Co., Ltd.
Priority to CN202111144515.5A
Publication of CN113918745A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 Other toys
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 Other toys
    • A63H33/04 Building blocks, strips, or similar building parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

A machine-vision-based automatic guidance method and system for splicing and lapping toys, comprising the following steps: S1, establishing a database: the database records feature data of each part and feature data of each intermediate state and the completed state of the toy; S2, image acquisition and judgment: intermediate image data of the toy are acquired and identified to determine the current assembly state; S3, part reminding: the part to be used in the next step is determined from the identified intermediate-state data and the data stored in the database, and the user is reminded by audio and/or video; S4, part confirmation: image data of the selected part are acquired and compared with the data stored in the database to judge whether it is the part required by the next step; S5, splicing and lapping confirmation: image data of the toy after splicing or lapping are acquired and identified to judge whether this step was performed correctly; steps S2-S5 are repeated until the toy is completely spliced or lapped.

Description

Splicing and lapping toy automatic guidance method and system based on machine vision
Technical Field
The invention relates to a machine vision application method, in particular to an automatic guidance method and system for splicing and lapping toys based on machine vision, and belongs to the technical field of machine automation.
Background
Toys have long been indispensable companions for infants and even adults: they provide enjoyment and also exercise the mind and develop intelligence. Assembly toys have been popular since their appearance, from the ancient Chinese Luban lock, Kongming lock, and nine-linked rings to modern building blocks, Lego bricks, and even Rubik's cubes.
Assembly toys generally involve a certain difficulty and test the user's intelligence, reasoning, and patience. Most such toys leave the factory with only a manual, whether simple or complicated, and the user must assemble the toy by following it. For younger children this is difficult and usually requires adult accompaniment, which takes up much of the adult's time. That assembly toys cannot be assembled and played with by young children independently has therefore become a problem the industry urgently needs to solve.
Disclosure of Invention
Aiming at the problem that splicing and lapping toys in the prior art cannot be assembled and played with by young children independently, the invention provides a machine-vision-based automatic guidance method and system for splicing and lapping toys.
The technical scheme adopted by the invention for solving the technical problems is as follows: a splicing and lapping type toy automatic guiding method based on machine vision comprises the following steps:
step S1, establishing a database: the database records characteristic data of each part and characteristic data of each intermediate state and completion state of the toy;
step S2, image acquisition and judgment: acquiring intermediate image data of the toy and identifying them to determine the current assembly state;
step S3, part reminding: determining the part to be used in the next step from the identified intermediate-state data and the data stored in the database, and reminding the user by audio and/or video;
step S4, part confirmation: collecting image data of the selected part and comparing them with the data stored in the database to judge whether it is the part required by the next step; if it is the required part, confirmation information is sent through a display screen or a loudspeaker; if not, a denial message is sent through the display screen or loudspeaker to remind the user that the wrong part was taken;
step S5, splicing and lapping confirmation: collecting image data of the toy after splicing or lapping and identifying them to judge whether this step was performed correctly; if the collected image data match the database data, confirmation information is sent through the display screen or loudspeaker to indicate that the step was performed correctly; if they differ, a denial message is sent through the display screen or loudspeaker to remind the user of the operation error;
and repeating the steps S2-S5 until the toy is spliced or lapped.
The system comprises an operation platform, a three-dimensional imaging device and an audio and/or video reminding device, wherein the three-dimensional imaging device is arranged corresponding to the operation platform, and the audio and/or video reminding device is arranged on the operation platform.
The technical scheme adopted by the invention for solving the technical problem further comprises the following steps:
when the database is established in step S1, modeling each part and the toy intermediate state by using three-dimensional software, and collecting modeling data thereof, including: the shape, size, length, width and height ratio, color characteristics and other characteristics with identification function of the part.
The step S2 includes the following sub-steps:
step S21, projection and image acquisition: a stripe pattern is generated and projected onto the measured object by the light source; the pattern is modulated and deformed by the height of the measured object, producing a modulated stripe pattern, which is synchronously acquired by the left and right cameras to obtain a left image and a right image, while the three-dimensional module synchronously acquires a depth map of the measured object;
step S22, stripe matching: the stripes of the left and right images are matched under the guidance of the depth map acquired in step S21, which is back-projected once into the left and right images during matching, so that the line segments or stripes of the two images are accurately matched;
step S23, three-dimensional reconstruction: for the matched corresponding stripes of the left and right images, single-point correspondences are searched within the corresponding stripe-center line segments using the epipolar geometric constraint of the left and right cameras, and the corresponding points are then reconstructed into three-dimensional point cloud data according to the calibration parameters;
step S24, identification and judgment: the three-dimensional point cloud data generated in step S23 are matched against the modeling data of step S1 to determine which modeled step has the highest similarity, thereby identifying which splicing or lapping step the toy is currently at.
When the three-dimensional module acquires the depth map of the measured object, if the three-dimensional scanning module emits light of the same wavelength as the light source, projection and image acquisition comprise the following steps: (1) the light source projects a stripe pattern onto the measured object, and the left and right cameras acquire the left and right images respectively; (2) the light source is turned off, the three-dimensional module emits light onto the measured object, and the three-dimensional depth image is then acquired. If the wavelength emitted by the three-dimensional scanning module differs from that of the light source, the light source and the three-dimensional module project onto the measured object simultaneously, and the left camera, the right camera, and the three-dimensional module simultaneously acquire the left and right images and the three-dimensional depth image.
The stripe matching in step S22 includes the following sub-steps:
a. extracting the centerline of each stripe in the left and right camera images, then segmenting each centerline into connected domains to form a number of independent line segments;
b. converting the depth map acquired by the three-dimensional module into three-dimensional point cloud coordinates (pi) in the module's own coordinate system according to its calibrated intrinsic parameters;
c. converting (pi) into three-dimensional point cloud coordinates (qi) in the left-camera coordinate system according to the calibrated rotation-translation matrix Ms between the three-dimensional module and the left camera;
d. back-projecting the three-dimensional point cloud coordinates (qi) into the left and right images in turn according to the respective intrinsic parameters of the left and right cameras, each corresponding point carrying a serial number, so as to form a lookup table linking left-image and right-image coordinates;
e. traversing the serial number of each point of each stripe line segment in the left image and looking up the matching stripe line segment of the right image directly from the table, thereby achieving accurate matching of the left and right line segments or stripes.
The three-dimensional reconstruction in step S23 comprises: for the matched corresponding stripe-center line segments of the left and right images, searching for single-point correspondences within each segment using the epipolar geometric constraint of the left and right cameras, and then reconstructing the corresponding point pairs into three-dimensional point cloud data according to the system calibration parameters.
The three-dimensional imaging device comprises a light source, two cameras and a three-dimensional module; the two cameras are arranged corresponding to the operation platform, the light source is a digital projector arranged corresponding to the operation platform, and the three-dimensional module is a low-resolution three-dimensional scanning module arranged corresponding to the operation platform.
The audio reminding device is a loudspeaker and the video reminding device is a display screen; the two may be installed together, or only one of them may be installed.
The left camera, the right camera, and the three-dimensional module are calibrated as a system to obtain the calibration parameters: the left and right cameras are calibrated to obtain their intrinsic and extrinsic parameters and the rotation-translation matrix Mc describing their relative position, and at the same time the rotation-translation matrix Ms describing the relative position between the three-dimensional module and the left camera is calibrated.
The invention has the following beneficial effects: the current scene is captured by machine vision, the assembly toy is segmented from the current image, the current and next operations are computed from the stored database content, and the next operation is presented to the child user on a display screen, by video playback, or by voice broadcast. The prompts are simple and intuitive, the interface is friendly, and children can use the toy independently.
The invention will be further described with reference to the accompanying drawings and specific embodiments.
Drawings
FIG. 1 is a flow chart of the system of the present invention.
FIG. 2 is a flowchart of the image acquisition and determination steps of the present invention.
FIG. 3 is a schematic diagram of the system architecture of the present invention.
Detailed Description
This embodiment is a preferred embodiment of the invention; other solutions having the same or similar principles and basic structure as this embodiment fall within the protection scope of the invention.
The invention mainly protects a machine-vision-based automatic guidance method and system for splicing and lapping toys, which can be widely applied to guiding the use of Lego-type splicing toys, building-block lapping toys, and the like. The invention mainly comprises the following steps:
step S1, establishing a database: the database records characteristic data of each part and characteristic data of each intermediate state and completion state of the toy. In this embodiment, when the database is built, 3D modeling data is used, and current three-dimensional software (such as Pro-E, 3Dmax, CAD, etc.) can be used to model each part, and the modeling data is collected, including: the shape, size, length-width-height ratio, color characteristics and other shape characteristics of the part (such as a circular convex ring for splicing the lego toy, a groove for splicing or other characteristics with identification functions). The data of each splicing or lapping step of the toy can also be generated by three-dimensional software, and the method also comprises the following steps: the step is characterized by shape, size, length, width and height ratio, color characteristic and other shape characteristic, etc.
Step S2, image acquisition and judgment: acquiring intermediate image data of the toy and identifying them to determine the current assembly state.
in this embodiment, image acquisition and determination are achieved by using a three-dimensional imaging device, the imaging device includes a light source, two cameras (respectively defined as a left camera and a right camera), and a three-dimensional module, the positional relationships of the light source, the cameras, and the three-dimensional module are relatively fixed, after the imaging device is assembled, the left camera, the right camera, and the three-dimensional module need to be calibrated to obtain calibration parameters, when a system is calibrated, the left camera and the right camera are calibrated to obtain internal and external parameters of the cameras and a rotation and translation matrix Mc corresponding to the relative position between the cameras, and at the same time, a rotation and translation matrix corresponding to the relative positional relationship between the three-dimensional module and the left camera is calibrated, and Ms parameter calibration is usually set when a product is shipped.
In this embodiment, the image acquisition and determination includes the following substeps:
step S21, projection and image acquisition: generating a stripe pattern, projecting the stripe pattern to a measured object by using a light source, modulating the stripe pattern by the height of the measured object to deform to generate a modulated stripe pattern, synchronously acquiring the modulated stripe pattern by using a left camera and a right camera to respectively obtain a left image (a left camera acquired image) and a right image (a right camera acquired image), and synchronously acquiring a depth map of the measured object by using a three-dimensional module.
In this embodiment, the three-dimensional module is a three-dimensional scanning module. When the scanning module emits light of the same wavelength as the light source, projection and image acquisition comprise the following steps: (1) the light source projects a stripe pattern onto the measured object, and the left and right cameras acquire the left and right images respectively; (2) the light source is turned off, the three-dimensional module emits light onto the measured object, and the three-dimensional depth image is then acquired. When the wavelength emitted by the scanning module differs from that of the light source, the light source and the three-dimensional module project onto the measured object simultaneously, and the left camera, the right camera, and the three-dimensional module simultaneously acquire the left and right images and the three-dimensional depth image.
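A minimal sketch of these two capture sequences; projector, left_cam, right_cam, and depth_module are hypothetical device handles, since the patent specifies no hardware API.

```python
# Sketch of the wavelength-dependent capture sequencing (hypothetical device API).
def acquire(projector, left_cam, right_cam, depth_module, same_wavelength: bool):
    if same_wavelength:
        # Sequential capture: stripes and the depth module's light would interfere.
        projector.project_stripes()
        left, right = left_cam.grab(), right_cam.grab()
        projector.off()
        depth = depth_module.grab_depth()
    else:
        # Simultaneous capture: different wavelengths do not interfere.
        projector.project_stripes()
        left, right = left_cam.grab(), right_cam.grab()
        depth = depth_module.grab_depth()
    return left, right, depth
```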
Step S22, stripe matching: the stripes of the left and right images are matched under the guidance of the depth map acquired in step S21, which is back-projected once into the left and right images during matching, so that the line segments or stripes of the two images are accurately matched. Specifically, this comprises the following substeps (a code sketch follows the list):
a. extracting the centerline of each stripe in the left and right camera images, then segmenting each centerline into connected domains to form a number of independent line segments;
b. converting the depth map acquired by the three-dimensional module into three-dimensional point cloud coordinates (pi) in the module's own coordinate system according to its calibrated intrinsic parameters;
c. converting (pi) into three-dimensional point cloud coordinates (qi) in the left-camera coordinate system according to the calibrated rotation-translation matrix Ms between the three-dimensional module and the left camera;
d. back-projecting the three-dimensional point cloud coordinates (qi) into the left and right images in turn according to the respective intrinsic parameters of the left and right cameras, each corresponding point carrying a serial number, so as to form a lookup table linking left-image and right-image coordinates;
e. traversing the serial number of each point of each stripe line segment in the left image and looking up the matching stripe line segment of the right image directly from the table, thereby achieving accurate matching of the left and right line segments or stripes.
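The following sketch covers substeps b-d under stated assumptions: lens distortion is ignored, and the depth-module intrinsics Ks, the module-to-left-camera pose Ms = (Rs, Ts), and the stereo pose Mc = (R, T) are taken as already calibrated.

```python
# Sketch: lift the depth map to a point cloud, move it into the left-camera
# frame, and back-project into both images to build the serial-number table.
import numpy as np

def build_lookup(depth, Ks, Rs, Ts, K1, K2, R, T):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # (pi): point cloud in the depth module's own coordinate system.
    x = (u.ravel() - Ks[0, 2]) * z / Ks[0, 0]
    y = (v.ravel() - Ks[1, 2]) * z / Ks[1, 1]
    pi = np.stack([x, y, z], axis=1)[valid]
    # (qi): the same points expressed in the left-camera frame via Ms.
    qi = pi @ Rs.T + Ts
    serial = np.arange(len(qi))            # one serial number per point
    def project(P, K_cam):
        uv = P @ K_cam.T                   # pinhole projection, no distortion
        return (uv[:, :2] / uv[:, 2:]).round().astype(int)
    uv_left = project(qi, K1)
    uv_right = project(qi @ R.T + T, K2)   # right frame via stereo pose Mc
    return serial, uv_left, uv_right      # lookup table linking left/right pixels
```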
Step S23, three-dimensional reconstruction: for the matched corresponding stripes of the left and right images, single-point correspondences are searched within the corresponding stripe-center line segments using the epipolar geometric constraint of the left and right cameras, and the corresponding point pairs are then reconstructed into three-dimensional point cloud data according to the system calibration parameters.
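Once the left and right stripe-center points are matched, this step reduces to standard two-view triangulation. A sketch using OpenCV's triangulation routine, one plausible realization of the reconstruction described here:

```python
# Sketch of step S23: triangulate matched stripe-center points into 3D.
import cv2
import numpy as np

def reconstruct(pts_left, pts_right, K1, K2, R, T):
    """pts_left/right: (N,2) matched stripe-center points."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left camera at origin
    P2 = K2 @ np.hstack([R, T.reshape(3, 1)])            # right camera via Mc
    X = cv2.triangulatePoints(P1, P2, pts_left.T.astype(float),
                              pts_right.T.astype(float))
    return (X[:3] / X[3]).T                              # (N,3) point cloud
```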
Step S24, identification and judgment: the three-dimensional point cloud data generated in step S23 are matched against the modeling data of step S1 to determine which modeled step has the highest similarity, thereby identifying which splicing or lapping step the toy is currently at.
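The patent does not name a similarity measure for this matching. As one simple, hedged choice, the sketch below scores the scanned cloud against each step's model cloud by mean nearest-neighbour distance; a real system would likely align the clouds first, e.g. with ICP.

```python
# Sketch of step S24: pick the modeled assembly step most similar to the scan.
import numpy as np
from scipy.spatial import cKDTree

def identify_step(scan_cloud, step_clouds):
    """scan_cloud: (N,3); step_clouds: dict step_index -> (M,3) model cloud."""
    best_step, best_score = None, np.inf
    for idx, model in step_clouds.items():
        d, _ = cKDTree(model).query(scan_cloud)   # distance to nearest model point
        score = d.mean()                          # lower = more similar
        if score < best_score:
            best_step, best_score = idx, score
    return best_step, best_score
```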
In order to more clearly illustrate the implementation process of the present invention, a specific camera parameter calibration example will be described below.
The internal parameters of the calibrated left camera are as follows:
K1=[2271.084, 0, 645.632,
0, 2265.112, 511.553,
0, 0, 1]
the internal parameters of the right camera are:
K2=[2275.181, 0, 644.405,
0, 2270.322, 510.053,
0, 0, 1]
the system structure parameters between the left camera and the right camera are as follows:
R=[8.749981e-001, 6.547051e-003, 4.840819e-001,
-2.904034e-003, 9.999615e-001, -8.274993e-003,
-4.841175e-001, 5.834813e-003, 8.749835e-001]
T=[-1.778995e+002, -4.162821e-001, 5.074737e+001]
internal parameters of the low-resolution three-dimensional scanning module:
Ks=[476.927, 0, 312.208,
0, 475.927, 245.949,
0, 0, 1]
the system structure parameters between the low-resolution three-dimensional scanning module and the left camera are as follows:
Rs=[9.98946971e-001, 4.44611477e-002, -1.13205701e-002,
-4.54442748e-002, 9.92786812e-001, -1.10946668e-001,
6.30609650e-003, 1.11344293e-001, 9.93761884e-001]
Ts=[9.13387457e+001, 2.81182536e+001, 1.79046857e+000]
Following the steps described above, a digitally simulated laser stripe pattern is projected onto the toy and synchronously acquired by the left camera, the right camera, and the low-resolution three-dimensional scanning module. From the acquired stripe images and the low-resolution depth map, the depth map is converted into three-dimensional coordinates using the intrinsic parameters of the low-resolution three-dimensional scanning module; the three-dimensional coordinates are then back-projected in turn into the left and right camera images according to the calibration parameters, and the corresponding left and right points are given serial numbers to form a serial-number lookup table. The stripe centers are extracted from the left and right camera images and segmented into connected domains, the line segments corresponding to the stripes are matched according to the lookup table, corresponding points are searched along the matched segments according to the epipolar geometric constraint of the two cameras, and three-dimensional reconstruction is then performed according to the calibration parameters to generate the point cloud data.
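To make the pipeline concrete, the sketch below uses the calibration values listed above to lift one depth pixel into 3D with Ks, move it into the left-camera frame with (Rs, Ts), and project it into the left image with K1; the sample pixel and depth value are illustrative only.

```python
# Worked sketch with the calibration values given above.
import numpy as np

K1 = np.array([[2271.084, 0, 645.632], [0, 2265.112, 511.553], [0, 0, 1]])
Ks = np.array([[476.927, 0, 312.208], [0, 475.927, 245.949], [0, 0, 1]])
Rs = np.array([[9.98946971e-01, 4.44611477e-02, -1.13205701e-02],
               [-4.54442748e-02, 9.92786812e-01, -1.10946668e-01],
               [6.30609650e-03, 1.11344293e-01, 9.93761884e-01]])
Ts = np.array([9.13387457e+01, 2.81182536e+01, 1.79046857e+00])

u, v, z = 320.0, 240.0, 500.0                      # sample depth pixel (mm)
p = np.array([(u - Ks[0, 2]) * z / Ks[0, 0],
              (v - Ks[1, 2]) * z / Ks[1, 1], z])   # point in module frame
q = Rs @ p + Ts                                    # point in left-camera frame
uv = K1 @ q
print(uv[:2] / uv[2])                              # its pixel in the left image
```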
Step S3, part reminding: determining the part to be used in the next step from the identified intermediate-state data and the data stored in the database, and reminding the user by audio and/or video.
in this embodiment, a display screen and/or a speaker is provided in the system, and the parts to be used next and the assembly positions of the parts can be displayed in the display screen in an animation form or a picture form, and can be prompted by voice assistance or by voice alone.
Step S4, part confirmation: collecting image data of the selected part and comparing them with the data stored in the database to judge whether it is the part required by the next step.
in the embodiment, after the user selects the part, the part can be taken to the three-dimensional imaging device, the part selected by the user is confirmed through the three-dimensional imaging device, and if the part is the required part, confirmation information is sent out through the display screen or the loudspeaker; if the selected part is not the required part, a negative confirmation message is sent through the display screen or the loudspeaker to remind the user of taking the part by mistake, meanwhile, the correct part is displayed through the display screen or the loudspeaker to remind the user to replace, and after replacement, the correct part can be confirmed again until the user selects the correct part.
In this embodiment, when the three-dimensional imaging device is used to perform part scanning confirmation, the operation steps are the same as those in step S2, and are not described herein again.
Step S5, splicing and lapping confirmation: collecting image data of the toy after splicing or lapping and identifying them to judge whether this step was performed correctly.
in the embodiment, after selected parts are spliced or lapped, the splicing or lapping in the step can be confirmed through the three-dimensional imaging device, if the acquired image data is the same as the data of the database, confirmation information can be sent out through a display screen or a loudspeaker to indicate that the step is operated correctly; if the collected image data is different from the database data, a denial message can be sent through the display screen or the loudspeaker to remind a user of operation errors, meanwhile, the correct splicing or lapping position can be displayed through the display screen or the loudspeaker to remind the user of replacement, and after the replacement, the user can confirm again until the step is operated correctly by the user.
In this embodiment, when the three-dimensional imaging device is used to perform part splicing or lap scanning confirmation, the operation steps are the same as those in the sub-step of step S2, and are not described herein again.
And repeating the steps S2-S5 until the toy is spliced or lapped.
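Putting steps S2-S5 together, the guidance loop can be summarized as below; the five callables are an assumed decomposition of the scanning and reminder operations described above, not an API defined by the patent.

```python
# Sketch of the overall S2-S5 guidance loop.
def guide(db, identify_state, prompt_part, confirm_part, confirm_assembly, notify):
    """db: the ToyDatabase; the callables wrap the operations described above."""
    step = identify_state(db)                  # S2: scan, classify current state
    while step < len(db.steps):
        prompt_part(db, step)                  # S3: audio/video part reminder
        while not confirm_part(db, step):      # S4: rescan until the right part
            notify("wrong part, please swap it")
        while not confirm_assembly(db, step):  # S5: rescan until placed correctly
            notify("wrong placement, please redo this step")
        step = identify_state(db)              # repeat S2-S5 until complete
```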
The invention also provides a splicing and lapping toy automatic guidance system based on machine vision, which comprises an operation platform, a three-dimensional imaging device and an audio and/or video reminding device, wherein the three-dimensional imaging device is arranged corresponding to the operation platform, and the audio and/or video reminding device is arranged on the operation platform.
In this embodiment, the three-dimensional imaging device comprises a light source, two cameras (defined as the left camera and the right camera), and a three-dimensional module. The two cameras are arranged corresponding to the operation platform and each acquire image information on the platform; the light source is a digital projector arranged corresponding to the operation platform; and the three-dimensional module is a low-resolution three-dimensional scanning module used to synchronously acquire the depth map of the measured object. In a concrete implementation, other components capable of acquiring a depth map may also be used.
In this embodiment, the audio reminding device may be a loudspeaker and the video reminding device may be a display screen; the two may be installed together, or only one of them may be installed and used.

Claims (10)

1. A splicing and lapping toy automatic guidance method based on machine vision is characterized in that: the method comprises the following steps:
step S1, establishing a database: the database records characteristic data of each part and characteristic data of each intermediate state and completion state of the toy;
step S2, image acquisition and judgment: acquiring intermediate image data of the toy and identifying them to determine the current assembly state;
step S3, part reminding: determining the part to be used in the next step from the identified intermediate-state data and the data stored in the database, and reminding the user by audio and/or video;
step S4, part confirmation: collecting image data of the selected part and comparing them with the data stored in the database to judge whether it is the part required by the next step; if it is the required part, confirmation information is sent through a display screen or a loudspeaker; if not, a denial message is sent through the display screen or loudspeaker to remind the user that the wrong part was taken;
step S5, splicing and lapping confirmation: collecting image data of the toy after splicing or lapping and identifying them to judge whether this step was performed correctly; if the collected image data match the database data, confirmation information is sent through the display screen or loudspeaker to indicate that the step was performed correctly; if they differ, a denial message is sent through the display screen or loudspeaker to remind the user of the operation error;
and repeating the steps S2-S5 until the toy is spliced or lapped.
2. The machine-vision-based automatic guidance method for splicing and lapping toys as claimed in claim 1, characterized in that: when the database is established in step S1, each part and each intermediate state of the toy are modeled with three-dimensional software, and the modeling data are collected, including the shape, size, length-width-height ratio, color features, and other features of the part that serve an identifying function.
3. The machine-vision-based automatic guidance method for splicing and lapping toys as claimed in claim 1, characterized in that step S2 comprises the following substeps:
step S21, projection and image acquisition: a stripe pattern is generated and projected onto the measured object by the light source; the pattern is modulated and deformed by the height of the measured object, producing a modulated stripe pattern, which is synchronously acquired by the left and right cameras to obtain a left image and a right image, while the three-dimensional module synchronously acquires a depth map of the measured object;
step S22, stripe matching: the stripes of the left and right images are matched under the guidance of the depth map acquired in step S21, which is back-projected once into the left and right images during matching, so that the line segments or stripes of the two images are accurately matched;
step S23, three-dimensional reconstruction: for the matched corresponding stripes of the left and right images, single-point correspondences are searched within the corresponding stripe-center line segments using the epipolar geometric constraint of the left and right cameras, and the corresponding points are then reconstructed into three-dimensional point cloud data according to the calibration parameters;
step S24, identification and judgment: the three-dimensional point cloud data generated in step S23 are matched against the modeling data of step S1 to determine which modeled step has the highest similarity, thereby identifying which splicing or lapping step the toy is currently at.
4. The machine-vision-based automatic guidance method for splicing and lapping toys as claimed in claim 3, characterized in that: when the three-dimensional module acquires the depth map of the measured object, if the three-dimensional scanning module emits light of the same wavelength as the light source, projection and image acquisition comprise the following steps: (1) the light source projects a stripe pattern onto the measured object, and the left and right cameras acquire the left and right images respectively; (2) the light source is turned off, the three-dimensional module emits light onto the measured object, and the three-dimensional depth image is then acquired; if the wavelength emitted by the three-dimensional scanning module differs from that of the light source, the light source and the three-dimensional module project onto the measured object simultaneously, and the left camera, the right camera, and the three-dimensional module simultaneously acquire the left and right images and the three-dimensional depth image.
5. The machine-vision-based automatic guidance method for splicing and lapping toys as claimed in claim 3, characterized in that the stripe matching in step S22 comprises the following substeps:
a. extracting the centerline of each stripe in the left and right camera images, then segmenting each centerline into connected domains to form a number of independent line segments;
b. converting the depth map acquired by the three-dimensional module into three-dimensional point cloud coordinates (pi) in the module's own coordinate system according to its calibrated intrinsic parameters;
c. converting (pi) into three-dimensional point cloud coordinates (qi) in the left-camera coordinate system according to the calibrated rotation-translation matrix Ms between the three-dimensional module and the left camera;
d. back-projecting the three-dimensional point cloud coordinates (qi) into the left and right images in turn according to the respective intrinsic parameters of the left and right cameras, each corresponding point carrying a serial number, so as to form a lookup table linking left-image and right-image coordinates;
e. traversing the serial number of each point of each stripe line segment in the left image and looking up the matching stripe line segment of the right image directly from the table, thereby achieving accurate matching of the left and right line segments or stripes.
6. The machine-vision-based automatic guidance method for splicing and lapping toys as claimed in claim 3, characterized in that the three-dimensional reconstruction in step S23 comprises: for the matched corresponding stripe-center line segments of the left and right images, searching for single-point correspondences within each segment using the epipolar geometric constraint of the left and right cameras, and then reconstructing the corresponding point pairs into three-dimensional point cloud data according to the system calibration parameters.
7. A machine-vision-based automatic guidance system for splicing and lapping toys, characterized in that: the system comprises an operation platform, a three-dimensional imaging device and an audio and/or video reminding device, wherein the three-dimensional imaging device is arranged corresponding to the operation platform, and the audio and/or video reminding device is arranged on the operation platform.
8. The machine-vision-based automatic guidance system for splicing and lapping toys as claimed in claim 7, characterized in that: the three-dimensional imaging device comprises a light source, two cameras and a three-dimensional module; the two cameras are arranged corresponding to the operation platform, the light source is a digital projector arranged corresponding to the operation platform, and the three-dimensional module is a low-resolution three-dimensional scanning module arranged corresponding to the operation platform.
9. The machine-vision-based automatic guidance system for splicing and lapping toys as claimed in claim 7, characterized in that: the audio reminding device is a loudspeaker and the video reminding device is a display screen; the two may be installed together, or only one of them may be installed.
10. The machine-vision-based automatic guidance system for splicing and lapping toys as claimed in claim 7, characterized in that: the left camera, the right camera, and the three-dimensional module are calibrated as a system to obtain the calibration parameters: the left and right cameras are calibrated to obtain their intrinsic and extrinsic parameters and the rotation-translation matrix Mc describing their relative position, and at the same time the rotation-translation matrix Ms describing the relative position between the three-dimensional module and the left camera is calibrated.
CN202111144515.5A 2021-09-28 2021-09-28 Splicing and lapping toy automatic guidance method and system based on machine vision Pending CN113918745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111144515.5A CN113918745A (en) 2021-09-28 2021-09-28 Splicing and lapping toy automatic guidance method and system based on machine vision


Publications (1)

Publication Number Publication Date
CN113918745A true CN113918745A (en) 2022-01-11

Family

ID=79236833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111144515.5A Pending CN113918745A (en) 2021-09-28 2021-09-28 Splicing and lapping toy automatic guidance method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN113918745A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115364494A (en) * 2022-07-26 2022-11-22 福州市鹭羽智能科技有限公司 Automatic stacking device and method for building blocks based on patterns
CN115364494B (en) * 2022-07-26 2024-02-23 福州市鹭羽智能科技有限公司 Automatic stacking device and method for building blocks based on patterns
CN116362973A (en) * 2023-05-24 2023-06-30 武汉智筑完美家居科技有限公司 Pattern splicing method, device and storage medium
CN116362973B (en) * 2023-05-24 2023-09-19 武汉智筑完美家居科技有限公司 Pattern splicing method, device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination