CN109544686B - Method and system for modeling three-dimensional image and displaying real image - Google Patents
- Publication number: CN109544686B (application CN201811250521.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- pattern
- coding
- mask
- bright area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Abstract
The invention provides a method and a system for modeling a three-dimensional image and displaying a real image. A pair of complementary coding fringe patterns is projected, and the resulting first and second images are used both to establish a three-dimensional image model of the object to be scanned and to establish a real image of the object for browsing and monitoring during modeling. Because no images that are useless for three-dimensional modeling need to be inserted and processed, the method and the system save system resources and processing time, which helps the modeling process to be monitored and executed faster and better.
Description
Technical Field
The invention relates to the field of three-dimensional image processing, in particular to a method and a system for modeling three-dimensional images and displaying real images.
Background
Non-contact three-dimensional measurement of object surfaces has been widely used in many fields such as industry, medicine, and art. In the medical field, handheld devices that perform three-dimensional measurement of the human body by optical scanning have largely replaced traditional measurement and modeling methods. For example, to make a dental prosthesis in the past, the patient had to bite into a chemical material for a period of time to obtain a cured dental model, which caused considerable inconvenience and discomfort. At present, an optical intraoral scanner can acquire a dental model by structured-light optical imaging: it projects the sets of patterns required for three-dimensional modeling onto the teeth and reconstructs a three-dimensional model of the teeth from the images captured by the camera. During three-dimensional modeling, the operator needs to browse the position actually being scanned so as to observe it and operate accordingly.
In the prior art, in order to browse the position actually being scanned, a group of pattern-free images is usually projected onto the object to be scanned (e.g. teeth) so that the camera can capture the current view. For example, US2014/248576A1 divides the projected images into two groups: one group contains the patterns required for three-dimensional modeling, and the other group contains pattern-free (or pure white) images for real-time browsing. Consequently, when browsing while performing three-dimensional modeling, a number of extra images must be generated, projected, and acquired, which consumes excessive system resources and time and makes browsing less immediate and less smooth.
Disclosure of Invention
The present invention is directed to a new method and system for modeling three-dimensional images and displaying real images, so as to solve the above-mentioned problems.
In order to achieve the above object, the present invention provides a method for modeling three-dimensional images and displaying real images, comprising the following steps:
S1, projecting a first coding fringe pattern onto an object to be scanned, and capturing at least one first image of the object to be scanned; wherein the first coding fringe pattern comprises a first bright area and a first dark area;
S2, projecting a second coding fringe pattern onto the object to be scanned, and capturing at least one second image of the object to be scanned; wherein the second coding fringe pattern comprises a second bright area and a second dark area, the second bright area corresponds to the first dark area and is complementary to it in color or brightness, and the second dark area corresponds to the first bright area and is complementary to it in color or brightness; and
S3, establishing a three-dimensional image model of the object to be scanned according to the at least one first image and the at least one second image; and splicing the first image and the second image to establish and display the real image of the object to be scanned.
Preferably, in step S3, the stitching the first image and the second image further includes:
and splicing a first effective part corresponding to the first bright area in the first image and a second effective part corresponding to the second bright area in the second image to establish and display a real image of the object to be scanned.
Preferably, in step S3, the stitching the first effective portion of the first image corresponding to the first bright area with the second effective portion of the second image corresponding to the second bright area further includes:
S31, acquiring the first effective part by using the first bright area as a mask of the first image; acquiring the second effective part by using the second bright area as a mask of the second image; and
S32, splicing the first effective part and the second effective part to obtain the real image.
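As a minimal sketch of steps S31 and S32, the masking and splicing can be expressed with NumPy arrays. This is an illustration, not the patent's implementation: the function name `stitch_real_image` and the toy pixel values are hypothetical, and the masks are assumed to be complementary binary arrays (1 = bright area).

```python
import numpy as np

def stitch_real_image(first_image, second_image, first_mask, second_mask):
    # Each mask is 1 where the corresponding coding fringe pattern is
    # bright; because the two masks are complementary, every output
    # pixel comes from exactly one of the two captures.
    first_valid = first_image * first_mask      # first effective portion (S31)
    second_valid = second_image * second_mask   # second effective portion (S31)
    return first_valid + second_valid           # spliced real image (S32)

# Toy 1x4 example: a two-pixel stripe pattern and its complement.
pattern = np.array([1, 1, 0, 0])
img_a = np.array([10, 20, 99, 99])   # reliable only where pattern is bright
img_b = np.array([99, 99, 30, 40])   # reliable only where complement is bright
real = stitch_real_image(img_a, img_b, pattern, 1 - pattern)
print(real.tolist())                 # [10, 20, 30, 40]
```

Because the bright areas tile the whole field of view between them, the spliced result covers every pixel of the scanned object without projecting any extra pattern-free image.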
Preferably, in step S31, the capturing the first effective portion using the first bright area as a mask of the first image further includes:
processing the first coding fringe pattern to obtain a corresponding binarized pattern, which serves as a first mask pattern;
acquiring the first effective part by using the first mask image as a mask of the first image;
the step S31 of using the second bright area as a mask of the second image to obtain the second effective portion further includes:
processing the second coding fringe pattern to obtain a corresponding binarized pattern as a second mask pattern; or performing inverse-color processing on the first mask pattern to obtain the second mask pattern;
the second effective portion is acquired using the second mask pattern as a mask of the second image.
Preferably, the processing to obtain a corresponding binarized pattern includes:
if the first coding fringe pattern or the second coding fringe pattern is a color pattern, performing gray-scale processing and binarization processing on it in sequence to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is a gray-scale pattern, performing binarization processing on it to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is already a binarized pattern, using it directly as the corresponding binarized pattern.
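The three cases above can be collapsed into one dispatch function. This is a hedged sketch assuming 8-bit NumPy arrays, standard luma weights for the gray-scale step, and a midpoint threshold; the name `to_mask` and the threshold value are illustrative, not from the patent.

```python
import numpy as np

def to_mask(pattern, threshold=128):
    """Reduce a coding fringe pattern to a binary mask, covering the
    three cases above: color, gray-scale, and already-binarized input."""
    if pattern.ndim == 3:                           # color: gray-scale first
        pattern = pattern @ np.array([0.299, 0.587, 0.114])
    if set(np.unique(pattern).tolist()) <= {0, 1}:  # already a binarized map
        return pattern.astype(np.uint8)
    return (pattern >= threshold).astype(np.uint8)  # binarization step

gray = np.array([[0, 200], [255, 10]])
print(to_mask(gray).tolist())        # [[0, 1], [1, 0]]
```

A fixed threshold is enough here because the projected pattern is known in advance and strictly two-valued; captured camera images, by contrast, would need adaptive thresholding.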
To achieve the above object, the present invention provides a system for modeling three-dimensional image and displaying real image, comprising:
the projection unit is used for projecting the first coding fringe pattern to the object to be scanned and then projecting the second coding fringe pattern to the object to be scanned; the first coding fringe pattern comprises a first bright area and a first dark area, the second coding fringe pattern comprises a second bright area and a second dark area, the second bright area corresponds to the first dark area, the second bright area is complementary with the first dark area in color or brightness, and the second dark area corresponds to the first bright area, and the second dark area is complementary with the first bright area in color or brightness;
the image capturing unit is used for capturing at least one first image of the object to be scanned when the first coding fringe pattern is projected; the image capturing unit is used for capturing at least one second image of the object to be scanned when the second coding fringe pattern is projected; and
the processing unit is coupled with the projection unit and the image capturing unit and is used for establishing a three-dimensional image model of the object to be scanned according to the at least one first image and the at least one second image and splicing the first image and the second image so as to establish a real image of the object to be scanned; and
the display unit is used for receiving the real image established by the processing unit and displaying the real image.
Preferably, the processing unit is configured to stitch the first image and the second image to establish a real image of the object to be scanned, and specifically includes: and splicing a first effective part corresponding to the first bright area in the first image and a second effective part corresponding to the second bright area in the second image to establish a real image of the object to be scanned.
Preferably, the processing unit is configured to stitch a first effective portion of the first image corresponding to the first bright area and a second effective portion of the second image corresponding to the second bright area, and specifically includes:
the processing unit obtains the first effective part by using the first bright area as a mask of the first image; the processing unit obtains the second effective part by using the second bright area as a mask of the second image; and the processing unit splices the first effective part and the second effective part to obtain the real image.
Preferably, the acquiring the first effective portion using the first bright area as a mask of the first image specifically includes:
the processing unit sequentially carries out gray level processing and binarization processing on the first coding fringe pattern to obtain a first mask pattern; the processing unit acquires the first effective portion using the first mask pattern as a mask for the first image.
The obtaining the second effective portion by using the second bright area as a mask of the second image specifically includes:
the processing unit sequentially carries out gray level processing and binarization processing on the second coding fringe pattern to obtain a second mask pattern; or the processing unit performs inverse color processing on the first mask image to obtain a second mask image; and
the processing unit acquires the second effective portion using the second mask pattern as a mask for the second image.
Preferably, the processing to obtain a corresponding binarized pattern includes:
if the first coding fringe pattern or the second coding fringe pattern is a color pattern, performing gray-scale processing and binarization processing on it in sequence to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is a gray-scale pattern, performing binarization processing on it to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is already a binarized pattern, using it directly as the corresponding binarized pattern.
Compared with the prior art, the method and system for modeling a three-dimensional image and displaying a real image provided by the invention project a pair of complementary coding fringe patterns and use the resulting first and second images both to establish a three-dimensional image model of the object to be scanned and to establish a real image of the object for browsing and monitoring during modeling. No images that are useless for three-dimensional modeling need to be inserted and processed, which saves system resources and processing time and helps the modeling process to be monitored and executed faster and better.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional image modeling and real image display system according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for modeling a three-dimensional image and displaying a real image according to a first embodiment of the present invention.
Detailed Description
For a further understanding of the objects, construction, features, and functions of the invention, reference should be made to the following detailed description of the preferred embodiments.
Certain terms are used throughout the description and claims to refer to particular components. Those of ordinary skill in the art will appreciate that manufacturers may refer to the same component by different names. The description and claims distinguish components by function rather than by name. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and should therefore be interpreted as "including, but not limited to".
Referring to fig. 1, a schematic structure diagram of a system for modeling three-dimensional images and displaying real images according to an embodiment of the invention is disclosed, and the system at least includes a projection unit 110, an image capturing unit 120, a processing unit 130 and a display unit 140.
The projection unit 110 is configured to project a specific coding fringe pattern onto an object to be scanned and to provide a light source for image capture. The projection unit 110 may be any type of light-emitting device capable of projecting a pattern. The light source may be monochromatic, such as red light, infrared light, or blue light, or polychromatic, such as multiple colors mixed into white light. The projection unit may form the projected pattern by reflection from a digital micromirror device (DMD), control the projected pattern with a liquid crystal on silicon (LCOS) panel, switch light on and off with a liquid crystal display (LCD) panel to form the projected pattern, or shape the light with another spatial light modulator (SLM), grating projection, or the like; the invention is not limited in this respect.
The image capturing unit 120 is configured to capture an image of the object 20 to be scanned, and may include a color light sensor array, a monochromatic light sensor array, an infrared light sensor array, and the like; the invention is not limited in this respect.
The processing unit 130 is configured to build a three-dimensional image model of the object 20 to be scanned from the acquired images, and to build a real image of the object 20 to be scanned from the same images. The three-dimensional image model can be used to make physical models, such as dental models and prostheses, while the real image is used for real-time browsing and for monitoring the progress of the scanning and modeling, making it easy to adjust scanning parameters or terminate the scanning operation in time. The processing unit 130 may also control the projection operation of the projection unit 110, the image capturing operation of the image capturing unit 120, and the synchronization between them, and may provide the preset coding fringe patterns to the projection unit 110.
The processing unit 130 is communicatively connected to the projection unit 110, the image capturing unit 120, and the display unit 140; each connection may be wired or wireless. For example, the system may comprise a handheld scanning head in which at least the projection unit 110, the image capturing unit 120, and the processing unit 130 are integrated; in that case wired connections among the three are more convenient and better suited to fast, real-time data transfer. The display unit 140 may be disposed at the end of the scanning head facing the operator, for example the end opposite the scanning end that contains the projection unit 110 and the image capturing unit 120, so that the operator can adjust and observe at any time. The display unit 140 may also be separate from the handheld scanning head, in which case a larger display panel can present the image better; the handheld scanning head and the display unit 140 may then be connected by cable, or wirelessly via a wireless communication protocol such as WiFi, Bluetooth, or Zigbee. In other words, the display unit 140 may be connected to the processing unit 130 in either a wired or a wireless manner. As another example, the processing unit 130 may be integrated in a fixed device, connected by wire to the display unit 140 in the same device and, by wire or wirelessly, to the projection unit 110 and the image capturing unit 120 in the scanning head. In addition, some functions of the processing unit 130 may be implemented in a sub-unit integrated in the scanning head and others in a sub-unit integrated in the fixed device. The invention is not limited thereto.
In one embodiment, the projection unit 110 is configured to project a first coding fringe pattern onto the object 20 to be scanned and then a second coding fringe pattern onto the object 20 to be scanned, providing the corresponding light source for two sequential image acquisitions. The first and second coding fringe patterns are two complementary patterns; in other words, they are exact inverses of each other. The first coding fringe pattern comprises a first bright area and a first dark area, and the second coding fringe pattern comprises a second bright area and a second dark area. The first bright area corresponds to (i.e. matches in position and size) the second dark area, and their colors or brightnesses are complementary; likewise, the first dark area corresponds to the second bright area, and their colors or brightnesses are complementary. Consequently, if the first and second coding fringe patterns were superimposed, the brightness of the resulting image would be uniform everywhere. The first and second coding fringe patterns may be horizontal, vertical, or oblique alternating black-and-white stripes, an alternating black-and-white checkerboard, or alternating black-and-white dot patterns, where the sizes or spacings of the stripes or dots may be equal or unequal. They may also be any other pattern alternating between two colors or gray levels; the invention is not limited thereto.
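The complementarity property described above (superimposing the two patterns yields uniform brightness everywhere) can be illustrated with a small sketch. This assumes 8-bit NumPy arrays and vertical stripes; the function name `complementary_stripes` and the stripe width are illustrative, not from the patent.

```python
import numpy as np

def complementary_stripes(height, width, stripe_px):
    # Vertical black/white stripes stripe_px pixels wide, plus the complement.
    cols = (np.arange(width) // stripe_px) % 2           # 0,1,0,1,... per stripe
    first = (np.tile(cols, (height, 1)) * 255).astype(np.uint8)
    second = 255 - first                                 # complementary pattern
    return first, second

first, second = complementary_stripes(4, 8, stripe_px=2)
# Superimposing the two patterns gives uniform brightness at every pixel.
superimposed = first.astype(int) + second.astype(int)
print(bool((superimposed == 255).all()))   # True
```

The same construction works for checkerboards or dot grids: any two-valued pattern and its inverse sum to a constant, which is what lets the two captures jointly cover the whole object.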
Preferably, since the motion of the projection unit 110 is slow relative to the internal operating frequencies of the projection unit 110 and the image capturing unit 120, the first and second coding fringe patterns can be projected in two adjacent frames; alternatively, considering the processing time inside each unit, the two projections may be separated by one or more frames. In either case, the image capturing operation of the image capturing unit 120 is synchronized with the projection operation of the projection unit 110.
When the projection unit 110 projects the first coding fringe pattern, the image capturing unit 120 captures at least one first image A of the object to be scanned; when the projection unit 110 projects the second coding fringe pattern, the image capturing unit 120 captures at least one second image B of the object to be scanned.
The processing unit 130 is configured to establish a three-dimensional image model C of the object to be scanned from the acquired at least one first image A and at least one second image B. Any method in the art for building a three-dimensional image model from structured-light pattern acquisitions may be used; the invention is not limited thereto.
In addition, the processing unit 130 is further configured to stitch the first image A and the second image B to create a real image D of the object 20 to be scanned. Preferably, the processing unit 130 splices the first effective portion A1, corresponding to the first bright area in the first image A, with the second effective portion B1, corresponding to the second bright area in the second image B, to create the real image D of the object 20 to be scanned. The first effective portion A1 and the second effective portion B1 are the portions used to construct the real image D; relative to the real image to be generated, they constitute its source material. The display unit 140 receives the real image D established by the processing unit 130 and displays it. In one embodiment, the first ineffective portion A2, corresponding to the first dark area in the first image A, and the second ineffective portion B2, corresponding to the second dark area in the second image B, are not entirely dark, owing to effects of the first and second bright areas during image capture (such as reflection or multiple scattering); the first and second dark areas may therefore serve as references during image stitching, facilitating the rapid alignment and splicing of multiple images of a regularly repeating object such as teeth in the oral cavity.
In one embodiment, the processing unit 130 uses the first bright area as a mask (Mask) for the first image A to obtain the first effective portion A1, and uses the second bright area as a mask for the second image B to obtain the second effective portion B1. Using the first bright area as a mask of the first image A means treating the first bright area as a mask layer: the part of the first image A that overlaps the first bright area is retained as the effective portion and participates in subsequent image processing, while the part of the first image A that does not overlap the first bright area is removed or ignored and takes no further part in subsequent processing. This process can also be viewed as an AND operation between the first bright area of the first coding fringe pattern and the first image A. The later uses of masks are analogous and are not repeated.
In another embodiment, the processing unit 130 processes the first coding fringe pattern to obtain a first mask pattern and uses the first mask pattern as a mask of the first image A to obtain the first effective portion A1. The processing may be gray-scale conversion followed by binarization; if the first coding fringe pattern is a gray-scale pattern, only binarization is needed; and if the first coding fringe pattern is itself a binarized pattern or an alternating black-and-white fringe pattern, the processing can be omitted and the first coding fringe pattern used directly as the binarized pattern in subsequent processing.
Likewise, the processing unit 130 processes the second coding fringe pattern to obtain a second mask pattern and uses the second mask pattern as a mask of the second image B to obtain the second effective portion B1. The processing may be gray-scale conversion followed by binarization; if the second coding fringe pattern is a gray-scale pattern, only binarization is needed; and if the second coding fringe pattern is itself a binarized pattern or an alternating black-and-white fringe pattern, the processing may be omitted.
In another embodiment, the processing unit 130 inverts (performs inverse-color processing on) the first mask pattern to obtain the second mask pattern, and uses the second mask pattern as a mask of the second image B to obtain the second effective portion B1. For a binarized pattern, inversion (inverse-color processing) changes every pixel of 1 to 0 and every pixel of 0 to 1. For a gray-scale pattern, the original and modified gray values of each pixel sum to 255, i.e. the modified gray value is 255 minus the original gray value.
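The inverse-color operation just described can be sketched in a few lines, assuming NumPy arrays; `invert_mask` is a hypothetical helper name, not from the patent.

```python
import numpy as np

def invert_mask(mask):
    """Inverse-color processing: flip 0<->1 for a binarized map; for an
    8-bit gray map, map each value v to 255 - v (so the original and
    inverted values of every pixel sum to 255)."""
    if set(np.unique(mask).tolist()) <= {0, 1}:
        return 1 - mask
    return 255 - mask

binary = np.array([1, 0, 1, 1])
gray = np.array([0, 100, 255])
print(invert_mask(binary).tolist())   # [0, 1, 0, 0]
print(invert_mask(gray).tolist())     # [255, 155, 0]
```

Deriving the second mask this way avoids binarizing the second coding fringe pattern separately, since the two patterns are exact inverses of each other.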
Referring to fig. 2, a flowchart of a method for modeling a three-dimensional image and displaying a real image according to an embodiment of the invention is disclosed, which includes the following steps:
S1, projecting a first coding fringe pattern to an object to be scanned, and capturing at least one first image of the object to be scanned.
S2, projecting a second coding fringe pattern to the object to be scanned, and capturing at least one second image of the object to be scanned.
The first coding fringe pattern comprises a first bright area and a first dark area, the second coding fringe pattern comprises a second bright area and a second dark area, the second bright area corresponds to the first dark area, the second bright area is complementary with the first dark area in color or brightness, and the second dark area corresponds to the first bright area, and the second dark area is complementary with the first bright area in color or brightness.
S3, establishing a three-dimensional image model of the object to be scanned according to the at least one first image and the at least one second image; and splicing the first image and the second image to establish and display the real image of the object to be scanned.
In one embodiment, in step S3, a first effective portion of the first image corresponding to the first bright area and a second effective portion of the second image corresponding to the second bright area are spliced together to create and display a real image of the object to be scanned.
In one embodiment, splicing the first effective portion of the first image corresponding to the first bright area with the second effective portion of the second image corresponding to the second bright area in step S3 includes:
S31, acquiring the first effective part by using the first bright area as a mask of the first image; acquiring the second effective part by using the second bright area as a mask of the second image; and
S32, splicing the first effective part and the second effective part to obtain the real image.
In one embodiment, acquiring the first effective portion in step S31 using the first bright area as a mask of the first image further includes: processing the first coding fringe pattern to obtain a corresponding binarized pattern as a first mask pattern; and acquiring the first effective portion using the first mask pattern as a mask of the first image.
In one embodiment, acquiring the second effective portion in step S31 using the second bright area as a mask of the second image further includes: processing the second coding fringe pattern to obtain a corresponding binarized pattern as a second mask pattern, or performing inverse-color processing on the first mask pattern to obtain the second mask pattern; and acquiring the second effective portion using the second mask pattern as a mask of the second image.
The processing to obtain a corresponding binarized pattern specifically includes: if the first or second coding fringe pattern is a color pattern, performing gray-scale processing and binarization processing on it in sequence to obtain the corresponding binarized pattern; if it is a gray-scale pattern, performing binarization processing on it to obtain the corresponding binarized pattern; or, if it is already a binarized pattern, using it directly as the corresponding binarized pattern.
In summary, the method and system for modeling a three-dimensional image and displaying a real image provided by the invention project a pair of mutually inverted coding fringe patterns and use the resulting first and second images both to establish a three-dimensional image model of the object to be scanned and to establish a real image of the object for browsing and monitoring during modeling. No images useless for modeling need to be additionally inserted, projected, and captured, which saves system resources and scanning and modeling time and helps the modeling process to be monitored and executed faster and better.
The invention has been described with reference to the above embodiments; however, these embodiments are merely examples of practicing the invention. It should be noted that the disclosed embodiments do not limit the scope of the invention. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
Claims (6)
1. A method for modeling a three-dimensional image and displaying a real image, characterized by comprising the following steps:
S1, projecting a first coding fringe pattern to an object to be scanned, and capturing at least one first image of the object to be scanned; wherein the first coding fringe pattern comprises a first bright area and a first dark area;
s2, projecting a second coding fringe pattern to the object to be scanned, and capturing at least one second image of the object to be scanned; the second coding fringe pattern comprises a second bright area and a second dark area, the second bright area corresponds to the first dark area, and the second bright area is complementary with the first dark area in color or brightness; the second dark region corresponds to the first bright region, and the second dark region is complementary in color or brightness to the first bright region; and
s3, establishing a three-dimensional image model of the object to be scanned according to the at least one first image and the at least one second image; and splicing the first image and the second image to establish and display the real image of the object to be scanned, which comprises the following steps:
splicing a first effective part corresponding to the first bright area in the first image and a second effective part corresponding to the second bright area in the second image; the method specifically comprises the following steps:
s31, acquiring the first effective part by using the first bright area as a mask of the first image; acquiring the second effective part by using the second bright area as a mask of the second image; and
s32, splicing the first effective part and the second effective part to obtain the real image.
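Steps S31 and S32 above amount to mask-based compositing of the two captures. The following is a minimal NumPy sketch, assuming the bright areas are already available as boolean masks; the function and variable names are illustrative and not taken from the patent:

```python
import numpy as np

def splice_real_image(first_image, second_image, first_mask, second_mask):
    """Composite the two captures into one real image.

    Pixels lit by the first pattern's bright areas are taken from the
    first image; pixels lit by the complementary second pattern's bright
    areas are taken from the second image. Masks are boolean arrays
    marking the bright areas of each coding fringe pattern.
    """
    real = np.zeros_like(first_image)
    real[first_mask] = first_image[first_mask]     # first effective portion
    real[second_mask] = second_image[second_mask]  # second effective portion
    return real

# Toy 1-D example: a 4-pixel stripe with alternating bright areas
first = np.array([10, 0, 30, 0])
second = np.array([0, 20, 0, 40])
m1 = np.array([True, False, True, False])  # first bright area
m2 = ~m1                                   # complementary second bright area
print(splice_real_image(first, second, m1, m2))  # [10 20 30 40]
```

Because every pixel falls in exactly one pattern's bright area, the composite covers the whole object without requiring any extra uniformly lit capture.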
2. The method for three-dimensional image modeling and real image display according to claim 1, wherein in step S31, acquiring the first effective portion by using the first bright area as a mask for the first image further comprises:
processing the first coding fringe pattern to obtain a corresponding binarized pattern as a first mask pattern; and
acquiring the first effective portion by using the first mask pattern as a mask for the first image;
and wherein in step S31, acquiring the second effective portion by using the second bright area as a mask for the second image further comprises:
processing the second coding fringe pattern to obtain a corresponding binarized pattern as a second mask pattern, or performing inverse-color processing on the first mask pattern to obtain the second mask pattern; and
acquiring the second effective portion by using the second mask pattern as a mask for the second image.
3. The method for three-dimensional image modeling and real image display according to claim 2, wherein the processing to obtain a corresponding binarized pattern comprises:
if the first coding fringe pattern or the second coding fringe pattern is a color pattern, sequentially performing grayscale processing and binarization processing on it to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is a grayscale pattern, performing binarization processing on it to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is already a binarized pattern, using it directly as the corresponding binarized pattern.
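The three cases of claim 3 can be sketched as a single helper that reduces any coding fringe pattern to a binary mask pattern. A minimal NumPy sketch, assuming 8-bit images, channel-averaged grayscale conversion, and a fixed threshold (a real implementation might use a luminance-weighted grayscale and Otsu or adaptive thresholding instead):

```python
import numpy as np

def to_binary_pattern(pattern, threshold=128):
    """Reduce a coding fringe pattern to a binary (0/255) mask pattern."""
    pattern = np.asarray(pattern)
    if pattern.ndim == 3:                    # case 1: color pattern, grayscale first
        pattern = pattern.mean(axis=2)       # simple channel average (assumption)
    if np.isin(pattern, (0, 255)).all():     # case 3: already a binarized pattern
        return pattern.astype(np.uint8)
    return np.where(pattern >= threshold, 255, 0).astype(np.uint8)  # case 2

# Color demo pattern: top row bright (200, 200, 200), bottom row dark
colour = np.zeros((2, 2, 3), dtype=np.uint8)
colour[0] = 200
mask = to_binary_pattern(colour)  # top row -> 255, bottom row -> 0
```

The same helper then serves both projected patterns, since claim 3 applies identically to the first and second coding fringe patterns.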
4. A system for three-dimensional image modeling and real image display, comprising:
a projection unit, configured to project a first coding fringe pattern onto an object to be scanned and then project a second coding fringe pattern onto the object to be scanned, wherein the first coding fringe pattern comprises a first bright area and a first dark area, the second coding fringe pattern comprises a second bright area and a second dark area, the second bright area corresponds in position to the first dark area and is complementary to it in color or brightness, and the second dark area corresponds in position to the first bright area and is complementary to it in color or brightness;
an image capturing unit, configured to capture at least one first image of the object to be scanned while the first coding fringe pattern is projected, and to capture at least one second image of the object to be scanned while the second coding fringe pattern is projected; and
a processing unit, coupled to the projection unit and the image capturing unit, configured to establish a three-dimensional image model of the object to be scanned according to the at least one first image and the at least one second image, and to splice the first image and the second image to establish a real image of the object to be scanned, specifically by splicing a first effective portion of the first image corresponding to the first bright area with a second effective portion of the second image corresponding to the second bright area; and
a display unit, configured to receive the real image established by the processing unit and display it;
wherein the processing unit is configured to splice the first effective portion and the second effective portion by: acquiring the first effective portion using the first bright area as a mask for the first image; acquiring the second effective portion using the second bright area as a mask for the second image; and splicing the first effective portion and the second effective portion to obtain the real image.
5. The system for three-dimensional image modeling and real image display according to claim 4, wherein acquiring the first effective portion using the first bright area as a mask for the first image specifically comprises:
the processing unit sequentially performs grayscale processing and binarization processing on the first coding fringe pattern to obtain a first mask pattern, and uses the first mask pattern as a mask for the first image to acquire the first effective portion;
and acquiring the second effective portion using the second bright area as a mask for the second image specifically comprises:
the processing unit sequentially performs grayscale processing and binarization processing on the second coding fringe pattern to obtain a second mask pattern, or performs inverse-color processing on the first mask pattern to obtain the second mask pattern; and
the processing unit acquires the second effective portion using the second mask pattern as a mask for the second image.
6. The system for three-dimensional image modeling and real image display according to claim 5, wherein the processing to obtain a corresponding binarized pattern comprises:
if the first coding fringe pattern or the second coding fringe pattern is a color pattern, sequentially performing grayscale processing and binarization processing on it to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is a grayscale pattern, performing binarization processing on it to obtain the corresponding binarized pattern; or
if the first coding fringe pattern or the second coding fringe pattern is already a binarized pattern, using it directly as the corresponding binarized pattern.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811250521.7A CN109544686B (en) | 2018-10-25 | 2018-10-25 | Method and system for modeling three-dimensional image and displaying real image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109544686A CN109544686A (en) | 2019-03-29 |
CN109544686B true CN109544686B (en) | 2023-05-23 |
Family
ID=65844904
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111571826A (en) * | 2020-05-20 | 2020-08-25 | 南京欧赛尔齿业有限公司 | Method and equipment for digitally cutting denture material |
WO2021258276A1 (en) * | 2020-06-23 | 2021-12-30 | 广东省航空航天装备技术研究所 | Three-dimensional imaging method, three-dimensional imaging apparatus, electronic device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101794461A (en) * | 2010-03-09 | 2010-08-04 | 深圳大学 | Three-dimensional modeling method and system |
CN104994371A (en) * | 2015-06-25 | 2015-10-21 | 苏州佳世达光电有限公司 | Image acquiring apparatus and image adjusting method |
CN106131454A (en) * | 2016-07-27 | 2016-11-16 | 苏州佳世达电通有限公司 | A kind of image acquisition system and image acquisition method |
CN107888898A (en) * | 2017-12-28 | 2018-04-06 | 盎锐(上海)信息科技有限公司 | Image capture method and camera device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||