WO2019037582A1 - An image processing method and apparatus - Google Patents

An image processing method and apparatus

Info

Publication number
WO2019037582A1
WO2019037582A1 (application PCT/CN2018/098388; CN2018098388W)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional image
image data
image
user
block
Prior art date
Application number
PCT/CN2018/098388
Other languages
English (en)
French (fr)
Inventor
欧阳聪星
Original Assignee
欧阳聪星
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 欧阳聪星 filed Critical 欧阳聪星
Priority to US16/641,066 priority Critical patent/US10937238B2/en
Priority to EP18849208.6A priority patent/EP3675037A4/en
Publication of WO2019037582A1 publication Critical patent/WO2019037582A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
  • The traditional oral endoscope is a kind of equipment used for optical dental impression taking (optical mold taking).
  • When taking an impression, the optical scanning head must be moved in a fixed order over the user's upper dentition and lower dentition; the scanning head is not supported to roam freely in the oral cavity. The operation is highly specialized, user interaction is poor, and professional personnel are required.
  • The embodiments of the invention provide an image processing method and apparatus, which solve the problems in the prior art that the user-interaction effect of oral image presentation is poor and that user self-service three-dimensional true-color impression taking is difficult to support.
  • An image processing method comprising:
  • Step A: receive image data of a user sent by the endoscope, where the image data include at least image data captured by the camera unit in the endoscope, and the type of the image data is a depth image;
  • Step B: save the received image data and determine whether the saved image data can be stitched to one another; when it is determined that the image data can be stitched, stitch the saved image data to obtain the stitched image data;
  • Step C: determine, according to the saved three-dimensional image frame database, the block corresponding to the stitched image data; determine the position of that block in the saved three-dimensional image contour of the user; and reconstruct the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data, where the three-dimensional image frame database stores the image data of each block into which the three-dimensional image frame is divided and the position information of the image of each block;
  • Step D: update the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, where the initial value of the user's three-dimensional image model is the user's three-dimensional image contour;
  • Step E: display the updated three-dimensional image model of the user.
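The five steps above can be sketched as a minimal, self-contained processing loop. This is an illustrative sketch only: the frame-database lookup is reduced to a dictionary, and all function and field names (`receive_frame`, `try_stitch`, `overlap`, `feature`) are invented here, not taken from the patent.

```python
# Minimal sketch of steps A-E; all names are illustrative placeholders.
saved_frames = []                    # Step B: accumulating received image data
contour_model = {"blocks": {}}       # initial model = the user's 3-D contour

def receive_frame(frame):
    """Step A: accept one depth-image frame sent by the endoscope."""
    saved_frames.append(frame)

def try_stitch(frames):
    """Step B: naively 'stitch' frames that share an overlap tag."""
    groups = {}
    for f in frames:
        groups.setdefault(f["overlap"], []).append(f)
    return list(groups.values())

def reconstruct(stitched, frame_db):
    """Steps C-D: map each stitched group to a block and update the model."""
    for group in stitched:
        block_id = frame_db.get(group[0]["feature"])    # Step C: block lookup
        if block_id is not None:
            contour_model["blocks"][block_id] = group   # Step D: update model

frame_db = {"molar-texture": "block-17"}                # toy frame database
receive_frame({"overlap": 1, "feature": "molar-texture"})
receive_frame({"overlap": 1, "feature": "molar-texture"})
reconstruct(try_stitch(saved_frames), frame_db)
print(sorted(contour_model["blocks"]))                  # Step E stand-in
```

In this sketch the "display" of Step E is just a print; a real implementation would render the updated model.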
  • the image data of the block includes: number information, image feature information;
  • the location information of the image of the block includes: a spatial positional relationship between each block;
  • An image of each block in the three-dimensional image contour is a three-dimensional curved surface based on the image of the corresponding block in the three-dimensional image frame database or in the user's three-dimensional image model, rendered with a preset single color and a single texture.
  • The block corresponding to the stitched image data is determined according to the saved three-dimensional image frame database, and the position of the block in the saved three-dimensional image contour of the user is determined, which specifically includes:
  • When the endoscope includes at least two camera units with preset fixed relative positions, the relative spatial positional relationships of the stitched image data are respectively identified according to the relative spatial position relationship of each camera unit in the preset endoscope and the identifier of the camera unit carried in the image data;
  • Based on the image feature information of the blocks in the three-dimensional image frame database and the relative spatial positional relationships of the stitched image data, the stitched image data are matched against the images of the blocks in the three-dimensional image frame database, and first mapping relationships between the stitched image data and the blocks in the three-dimensional image frame database are obtained;
  • the method further includes:
  • The first mapping relationships between the stitched image data and the blocks in the three-dimensional image frame database are obtained according to the preset spatial positional relationships between the blocks in the three-dimensional image frame database.
  • the method further comprises:
  • When at least two sets of first mapping relationships are obtained, a first preset number of first mapping relationships are selected from them according to the confidence of each set, and the selected first mapping relationships are used as the basis for calculating the first mapping relationships the next time image data of the user sent by the endoscope are received; each subsequent round of mapping is obtained on the basis of the previously selected first preset number of first mapping relationships, until no more than a second preset number of first mapping relationships remain. The superimposed confidence of each of the second preset number of sets is then determined, and if the superimposed confidence of any one of these sets is not less than a preset threshold, that set of first mapping relationships is used as the second mapping relationship between the stitched image data and the blocks in the three-dimensional image framework database.
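The confidence-driven selection described above (keep the best candidate mappings, then accept one whose superimposed confidence clears a threshold) can be illustrated with a toy sketch. The per-candidate confidences and the rule for superimposing them (a product) are assumptions made for illustration; `select_mapping` and its parameter names are invented, not the patent's terminology.

```python
# Hedged sketch of confidence-pruned selection of candidate mappings.
# The product combination rule and all numeric values are assumptions.
def select_mapping(candidates, keep_n, max_n, threshold):
    """candidates: list of (mapping_name, per-step confidence list)."""
    scored = []
    for name, confs in candidates:
        superimposed = 1.0
        for c in confs:
            superimposed *= c          # superimpose per-round confidences
        scored.append((superimposed, name))
    scored.sort(reverse=True)
    kept = scored[:keep_n]             # "first preset number" of candidates
    finalists = kept[:max_n]           # "second preset number" of candidates
    for conf, name in finalists:
        if conf >= threshold:          # not less than the preset threshold
            return name                # becomes the second mapping relationship
    return None

best = select_mapping(
    [("map-A", [0.9, 0.95]), ("map-B", [0.6, 0.5]), ("map-C", [0.99, 0.2])],
    keep_n=2, max_n=2, threshold=0.8)
print(best)
```

A candidate that starts strong but degrades (`map-C`) is pruned, which is the point of superimposing confidences across rounds rather than trusting a single match.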
  • the spliced image data is reconstructed in a corresponding determined position in the contour of the three-dimensional image of the user, and the reconstructed three-dimensional image data is obtained, which specifically includes:
  • The extracted three-dimensional curved-surface image replaces the image at the corresponding determined position in the user's three-dimensional image contour, and the reconstructed three-dimensional image data are obtained.
  • the currently saved three-dimensional image model of the user is updated according to the reconstructed three-dimensional image data, and specifically includes:
  • the method further comprises:
  • the step B is performed.
  • the method further includes:
  • An image processing apparatus comprising:
  • a receiving unit configured to receive image data of a user sent by the endoscope; wherein the image data includes at least image data captured by an imaging unit in the endoscope, and the type of the image data is a depth image;
  • a processing unit configured to: save the received image data and determine whether the saved image data can be stitched to one another; when it is determined that the image data can be stitched, stitch the saved image data to obtain the stitched image data; determine, according to the saved three-dimensional image frame database, the block corresponding to the stitched image data; determine the position of that block in the saved three-dimensional image contour of the user; reconstruct the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain reconstructed three-dimensional image data, where the three-dimensional image frame database stores the image data of each block into which the three-dimensional image frame is divided and the position information of the image of each block; and update the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, where the initial value of the user's three-dimensional image model is the user's three-dimensional image contour;
  • a display unit for displaying the updated three-dimensional image model of the user.
  • the image data of the block includes: number information, image feature information;
  • the location information of the image of the block includes: a spatial positional relationship between each block;
  • An image of each block in the three-dimensional image contour is a three-dimensional curved surface based on the image of the corresponding block in the three-dimensional image frame database or in the user's three-dimensional image model, rendered with a preset single color and a single texture.
  • To determine, according to the saved three-dimensional image frame database, the block corresponding to the stitched image data and to determine the position of the block in the saved three-dimensional image contour of the user, the processing unit is specifically configured to:
  • When the endoscope includes at least two camera units with preset fixed relative positions, respectively identify the relative spatial positional relationships of the stitched image data according to the relative spatial position relationship of each camera unit in the preset endoscope and the identifier of the camera unit carried in the image data;
  • Based on the image feature information of the blocks in the three-dimensional image frame database and the relative spatial positional relationships of the stitched image data, match the stitched image data against the images of the blocks in the three-dimensional image frame database, and obtain first mapping relationships between the stitched image data and the blocks in the three-dimensional image frame database;
  • the processing unit is further configured to:
  • The first mapping relationships between the stitched image data and the blocks in the three-dimensional image frame database are obtained according to the preset spatial positional relationships between the blocks in the three-dimensional image frame database.
  • the processing unit is further configured to:
  • When at least two sets of first mapping relationships are obtained, a first preset number of first mapping relationships are selected from them according to the confidence of each set, and the selected first mapping relationships are used as the basis for calculating the first mapping relationships the next time image data of the user sent by the endoscope are received; each subsequent round of mapping is obtained on the basis of the previously selected first preset number of first mapping relationships, until no more than a second preset number of first mapping relationships remain. The superimposed confidence of each of the second preset number of sets is then determined, and if the superimposed confidence of any one of these sets is not less than a preset threshold, that set of first mapping relationships is used as the second mapping relationship between the stitched image data and the blocks in the three-dimensional image framework database.
  • the spliced image data is reconstructed in a corresponding determined position in the contour of the user's three-dimensional image to obtain reconstructed three-dimensional image data
  • the processing unit is specifically configured to:
  • the extracted three-dimensional curved surface image is replaced with an image at a corresponding determined position in the three-dimensional image contour of the user, and the reconstructed three-dimensional image data is obtained.
  • the currently saved three-dimensional image model of the user is updated according to the reconstructed three-dimensional image data
  • the processing unit is specifically configured to:
  • the processing unit is further used to:
  • The receiving unit is further configured to receive the image data of the user sent by the endoscope, and the processing unit is further configured to return to performing the saving of the received image data and determining whether the saved image data can be stitched to one another; when it is determined that the image data can be stitched, the saved image data are stitched to obtain the stitched image data; according to the saved three-dimensional image frame database and the saved three-dimensional image contour of the user, the position in the user's three-dimensional image contour corresponding to the stitched image data is determined; the stitched image data are reconstructed at the corresponding determined position in the user's three-dimensional image contour to obtain reconstructed three-dimensional image data; and the currently saved three-dimensional image model of the user is updated according to the reconstructed three-dimensional image data.
  • the method further includes:
  • an operation unit configured to receive an operation instruction of the user, and perform a corresponding operation on the updated three-dimensional image model of the user that is displayed according to the operation instruction.
  • the image data of the user sent by the endoscope is received; wherein the image data includes at least image data captured by the camera unit in the endoscope, and the type of the image data is a depth image;
  • the received image data is saved, and whether the saved image data can be spliced with each other is determined.
  • The saved image data are stitched to obtain the stitched image data; according to the saved three-dimensional image frame database and the saved three-dimensional image contour of the user, the position in the user's three-dimensional image contour corresponding to the stitched image data is determined, and the stitched image data are reconstructed at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data; the currently saved three-dimensional image model of the user is updated according to the reconstructed three-dimensional image data, where the initial value of the user's three-dimensional image model is the user's three-dimensional image contour; and the updated three-dimensional image model of the user is displayed. In this way, according to the established three-dimensional image frame database and the user's three-dimensional image contour, image data are saved and stitched as they are received, and the stitched image data are processed and reconstructed to obtain the reconstructed three-dimensional image data.
  • The user can scan the oral cavity with the endoscope at will: as long as image data of the inner surface of the oral cavity are obtained, in any order, the three-dimensional image of the oral cavity can be reconstructed. This improves the efficiency of 3D image reconstruction and requires no professional operation, so user self-service oral endoscopy is well supported. Not only can the three-dimensional image of the user's mouth be presented, it can also be displayed dynamically; the display effect is better, the user experience and interaction are improved, and user self-service three-dimensional true-color impression taking is well supported.
  • In the embodiment of the present invention, the three-dimensional image framework database includes at least each block of the pre-divided three-dimensional image frame, and a complete block labeling system is established. Each block includes: number information, name information, file attribute description information, a three-dimensional surface pattern, image feature information, and the spatial positional relationship between blocks.
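A possible in-memory layout for one such block record, following the fields listed above, might look like the sketch below; the field names, the types, and the dictionary-keyed database are assumptions for illustration, not the patent's actual data format.

```python
# Illustrative record layout for one block in the 3-D image frame database.
# Field names mirror the listed block attributes; types are assumptions.
from dataclasses import dataclass, field

@dataclass
class Block:
    number: str                      # number information
    name: str                        # name information
    attributes: str                  # file attribute description information
    surface: list                    # three-dimensional surface pattern (mesh)
    features: dict                   # image feature info: shape/color/texture
    neighbors: dict = field(default_factory=dict)  # spatial relation to blocks

db = {}                              # the frame database, keyed by block number
b = Block("21", "upper-lip mucosa", "mucosa", [], {"color": "pink"})
db[b.number] = b
db["21"].neighbors["22"] = "left-adjacent"   # spatial positional relationship
print(db["21"].name, db["21"].neighbors["22"])
```

Keying the database by block number makes the first-mapping lookup (stitched data to block) a straightforward dictionary access once a feature match has produced a candidate number.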
  • FIG. 1 is a flowchart of a method for processing a three-dimensional image according to Embodiment 1 of the present invention;
  • FIG. 3 is a three-dimensional image displayed during a scanning process according to Embodiment 4 of the present invention;
  • FIG. 4 is a three-dimensional image displayed after the scanning is completed according to Embodiment 4 of the present invention;
  • FIG. 5 is a schematic diagram of an environment architecture according to Embodiment 5 of the present invention.
  • FIG. 6 is a schematic structural diagram of a three-dimensional image processing apparatus according to Embodiment 6 of the present invention.
  • In the embodiment of the present invention, a three-dimensional image frame database and a three-dimensional image contour are established; the received image data are processed and reconstructed on the user's three-dimensional image contour to build a three-dimensional oral image, and the reconstructed three-dimensional image can be displayed dynamically.
  • The embodiments of the present invention mainly address the reconstruction of a three-dimensional image of the oral cavity, where the endoscope can be an intraoral speculum (oral endoscope).
  • The invention is not limited to three-dimensional images of the oral cavity; three-dimensional image reconstruction in other fields can also be applied. The oral cavity is used below only as an example.
  • Embodiment 1:
  • the specific process of the image processing method is as follows:
  • Step 100: Receive image data of a user sent by an endoscope, where the image data include at least image data captured by an imaging unit in the endoscope, and the type of the image data is a depth image.
  • the user often needs to view the image of the oral cavity.
  • the oral cavity speculum scans the oral cavity to obtain an image in the oral cavity.
  • In the prior art, after the oral cavity is scanned only a partial image can be obtained and the overall three-dimensional image cannot be presented; the user cannot see the overall three-dimensional image of the oral cavity and cannot determine where a broken tooth or other problem area is located in the mouth.
  • In the prior art, a unique initial region is set and then used as the only anchor point; the image sequence acquired at the front end is collected, and image stitching is performed continuously in order, so that the region is continuously expanded as the main body area while scanning continues in the oral cavity until scanning is completed.
  • Image data that cannot be stitched to this region are discarded; that is, the user cannot arbitrarily scan the part he or she wants to see, and can only view a three-dimensional image formed by successively stitching onto the unique initial region.
  • In the embodiment of the present invention, the image data are received and reconstructed directly into the user's three-dimensional image contour. This not only presents a three-dimensional image of the oral cavity but also supports scanning and stitching from multiple initial regions: the user can scan any part of the oral cavity at will, can see the specific location of each image in the mouth, and can clearly see the contour and three-dimensional image of his or her own mouth.
  • the endoscope is, for example, an oral endoscope, and the endoscope is provided with an imaging unit for capturing an image.
  • The endoscope may be provided with one imaging unit or with a plurality of imaging units.
  • the type of image data captured is a depth image, that is, an RGBD image, which is a three-dimensional true color image, so that three-dimensional information of the image can be acquired, which facilitates subsequent three-dimensional image reconstruction.
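As a toy illustration of what a depth (RGBD) frame carries, each pixel can be modeled as an (R, G, B, D) tuple; the list-of-lists layout and the `make_rgbd` helper are invented for this sketch and are not part of the patent.

```python
# An RGBD frame couples a true-color pixel with a depth sample, so each
# pixel carries (R, G, B, D); this tiny 2x2 frame is purely illustrative.
def make_rgbd(width, height, fill=(0, 0, 0, 0.0)):
    """Build a height x width grid of (R, G, B, depth) pixels."""
    return [[list(fill) for _ in range(width)] for _ in range(height)]

frame = make_rgbd(2, 2)
frame[0][0] = [255, 200, 180, 12.5]   # colour plus depth (e.g. millimetres)
r, g, b, depth = frame[0][0]
print(depth)   # the depth channel is what enables 3-D surface reconstruction
```

It is the fourth (depth) channel that lets each captured patch be treated as a three-dimensional curved surface rather than a flat photograph.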
  • Step 110: Save the received image data and determine whether the saved image data can be stitched to one another; when it is determined that the image data can be stitched, stitch the saved image data to obtain the stitched image data.
  • The obtained stitched image data denote all image data after stitching: not only image data that could be stitched, were stitched successfully, and were joined with other image data, but also image data that still exist in isolation after stitching was judged unsuccessful.
  • Each time image data are received they are saved, and it is determined whether all currently saved image data can be stitched to one another; the determination is made not only for the image data received this time but for all currently saved image data.
  • each of the imaging units generally adopts a micro-focus imaging unit.
  • Each image captured by the micro-focus camera unit is a small surface with a small area, for example 2 mm × 2 mm or 3 mm × 3 mm. In most cases it cannot cover a block completely, only a partial surface of a block; therefore, stitching the image data first and then matching improves the efficiency and accuracy of the matching.
  • The saved image data include not only the image data received this time but also all previously received image data. If some of these images can be stitched, stitching first and matching afterwards reduces the number of images and of image-to-block comparisons, reduces time, and improves execution efficiency.
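The stitch-everything-saved behavior described above can be illustrated in one dimension: each saved patch is an extent, and a newly received patch may bridge previously isolated ones. Real stitching would compare image features in 3-D; the interval model and `stitch_all` are purely illustrative assumptions.

```python
# Sketch: whenever a frame arrives, re-examine ALL saved frames and merge
# any that overlap, so earlier isolated frames can join later ones.
def stitch_all(frames):
    """frames: list of (start, end) extents; merge any that overlap."""
    merged = []
    for start, end in sorted(frames):
        if merged and start <= merged[-1][1]:        # overlaps previous group
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))              # still isolated
    return merged

saved = [(0, 2), (5, 7)]        # two isolated patches from earlier scans
saved.append((2, 5))            # a newly received frame bridges them
print(stitch_all(saved))
```

This shows why the method revisits all saved data rather than only the newest frame: a single new patch can connect regions scanned in any order.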
  • Step 120: Determine, according to the saved three-dimensional image frame database, the block corresponding to the stitched image data; determine the position of that block in the saved three-dimensional image contour of the user; and reconstruct the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data, where the three-dimensional image frame database stores the image data of each block into which the three-dimensional image frame is divided and the position information of the image of each block.
  • a three-dimensional image frame database and a three-dimensional image outline of the oral cavity are established in advance.
  • the image data of the block includes: number information and image feature information.
  • the location information of the image of the block includes: a spatial positional relationship between each block.
  • the image feature information includes at least parameter information related to the shape, color, and texture of the image.
  • The user's three-dimensional image contour is obtained based on the three-dimensional image frame database or the user's three-dimensional image model, where the image of each block in the three-dimensional image contour is a three-dimensional curved surface based on the image of the corresponding block in the three-dimensional image frame database or in the user's three-dimensional image model, rendered with a preset single color and a single texture.
  • In the embodiment of the present invention, the three-dimensional image frame database of the oral cavity and the corresponding three-dimensional image contour are established in advance, the oral cavity is divided into blocks, and various related information is recorded; this provides the technical basis and support for the reconstruction of the three-dimensional image of the oral cavity.
  • the three-dimensional image frame database and the three-dimensional image profile will be described in detail below.
  • In step 120, the method specifically includes:
  • the block corresponding to the stitched image data is determined, and the position of the block in the saved three-dimensional image contour of the user is determined.
  • the spliced image data is reconstructed into corresponding determined positions in the three-dimensional image contour of the user, and the reconstructed three-dimensional image data is obtained.
  • Based on the boundary features of each block, extraction can be performed. For example, some stitched image data P(a) correspond to block 1, but P(a) may cover more than the area of block 1; in that case, according to the boundary feature information of block 1, the corresponding three-dimensional surface image can be extracted from P(a) along the boundary.
  • the upper boundary of the block is the mucosal reflex line of the upper oral vestibular groove, which is in contact with the upper lip mucosa block.
  • the lower boundary is connected to the side of the upper lip of the upper dentition.
  • the left border is connected to the left buccal mucosal block of the maxillary alveolar ridge.
  • the right border is connected to the right buccal mucosa of the maxillary alveolar ridge.
  • The extraction is performed according to the boundary feature information of the block. When the stitched image data contain only a partial image of the block that includes the block's upper-boundary feature information, the image data above the upper boundary can be removed according to that boundary feature information, retaining the image data at and below the upper boundary that belong to the block.
  • In this case, the three-dimensional curved-surface image of the block extracted from the stitched image data is only a partial image of the block, and the block image displayed is only the partial 3D surface image extracted so far; as scanning continues, the complete 3D surface image of the block can be displayed step by step.
  • 2) The extracted three-dimensional curved-surface image replaces the image at the corresponding determined position in the user's three-dimensional image contour, and the reconstructed three-dimensional image data are obtained.
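The boundary-guided extraction and replacement just described can be sketched with 1-D intervals standing in for 3-D boundary curves; `clip_to_block` and the `contour` dictionary are invented names, and the logic is a deliberate simplification of the patent's surface extraction.

```python
# Sketch of boundary-guided extraction: clip the stitched patch to the
# block's boundary, then replace that block's image in the contour.
def clip_to_block(patch, block_bounds):
    """Keep only the part of the patch inside the block's boundary."""
    lo = max(patch[0], block_bounds[0])
    hi = min(patch[1], block_bounds[1])
    return (lo, hi) if lo < hi else None

contour = {"block-1": "single-colour placeholder"}   # pre-built contour image
patch = (0, 10)                    # stitched data covers more than block 1
clipped = clip_to_block(patch, (3, 8))
if clipped is not None:
    contour["block-1"] = clipped   # replace placeholder with scanned surface
print(contour["block-1"])
```

The placeholder-then-replace pattern mirrors how the single-color contour image is progressively overwritten by real scanned surfaces.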
  • the extracted three-dimensional curved surface image is replaced with the image at the corresponding determined position in the contour of the three-dimensional image of the user, and can be further divided into the following cases:
  • the first case specifically includes:
  • It is first determined whether the block in the three-dimensional image frame database corresponding to the stitched image data exists in the user's three-dimensional image contour; if the block does not exist, it is added, and the curved-surface image replaces the image of the added corresponding block.
  • That is, the following three operations may be performed on the three-dimensional image contour: directly replacing the image of a block, adding a block and then replacing its image, or deleting a block.
  • image 1 corresponds to block a in the three-dimensional image frame database
  • image 2 corresponds to block b in the three-dimensional image frame database
  • image 3 corresponds to block c in the three-dimensional image frame database
• block b lies between block a and block c and is adjacent to both; if block a and block c are adjacent in the user's three-dimensional image contour and block b is not included,
• then image 1 and image 3 directly replace the images of block a and block c in the three-dimensional image contour, block b is added between block a and block c, and image 2 replaces the image of the newly added block b in the contour.
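The replace-and-add operations just described can be sketched as follows. This is a minimal illustration only: the dict-based contour representation, the block names a/b/c, and the image placeholders are all assumptions, not the patent's data structures.

```python
# Hypothetical sketch of two of the three contour-update operations described
# above: direct replacement of an existing block's image, and insertion of a
# missing block followed by replacement of its image.

def update_contour(contour, stitched):
    """contour: block id -> current image; stitched: block id -> extracted 3D surface image."""
    for block_id, surface in stitched.items():
        if block_id in contour:
            contour[block_id] = surface   # operation 1: directly replace the image
        else:
            contour[block_id] = surface   # operation 2: add the block, then replace its image
    return contour

contour = {"a": "default_a", "c": "default_c"}       # block b missing between a and c
stitched = {"a": "img1", "b": "img2", "c": "img3"}   # images 1-3 map to blocks a, b, c
updated = update_contour(contour, stitched)
print(updated["b"])  # block b has been added and its image replaced
```

After the update, images 1 and 3 have replaced the defaults for blocks a and c, and block b now carries image 2, mirroring the example above.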
  • the endoscopic image data appears as:
• the gingival papilla connected through the gingival sulcus to the distal surface of the user's lower left 3 tooth has a relatively large area.
• this gingival papilla mass extends into the mucosal block covering the crest of the lower alveolar ridge.
• the left side of that mucosal block is connected through the gingival sulcus to the mesial surface of the lower left 5 tooth.
• accordingly, the blocks related to the lower left 4 tooth will be removed, including: the lower left 4 buccal block, the lower left 4 mesial adjacent block, the lower left 4 distal adjacent block, the lower left 4 occlusal block, the lower left 4 lingual block, the lower left 4 gingival papilla block, and so on.
  • the above is the block deletion operation in the image three-dimensional reconstruction process.
• the lower boundary of the user's lower left 2 lingual block at least partially borders the upper boundary of a calculus block, rather than the lower left 2 lingual gingival block.
  • the contour of the three-dimensional image of the user is updated and the contour is extracted from the updated three-dimensional image model
• for the oral mucosa block, the contour of the replaced oral mucosa block is extracted directly from the stitched image data.
  • the ulcer block is covered and deleted.
  • the second case specifically includes:
• In the second case, the stitched image data corresponds to a block in the latest three-dimensional image frame database.
• For example, the endoscopic image data shows that the lower boundary of the user's lower left 2 lingual block is connected to the upper boundary of the lower left 2 lingual gingival block.
• In that case, the calculus block (e.g., block number 2412203) between the lower left 2 lingual block and the lower left 2 lingual gingival block will be deleted.
  • the above is the block deletion operation in the three-dimensional reconstruction process of the endoscopic image.
• the three-dimensional image obtained by adding or deleting blocks can more truly reflect the real state of the user's oral cavity.
• suppose the user's three-dimensional image contour has four connected blocks, which are, in order, block a, block b, block c, and block d.
• from the spliced image data it is determined that block b is to be deleted, that block a and block d are connected through block c, and that block a, block c, and block d are spliced together.
• after the update, the user will see block a and block d connected only through block c; the position originally occupied by block b becomes vacant, displayed as transparent and containing no image.
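The deletion case can be sketched in the same dict-based style; the block names a–d, the `None` transparency placeholder, and the contour representation are illustrative assumptions only.

```python
# Illustrative sketch of the deletion case: blocks a-b-c-d are connected in the
# contour, but the stitched data shows a joined to d through c only, so block b
# is deleted and its position rendered transparent (no image).

def delete_block(contour, block_id):
    """Mark a block as deleted; its position displays as transparent."""
    if block_id in contour:
        contour[block_id] = None  # transparent placeholder; the layout slot remains
    return contour

contour = {"a": "img_a", "b": "img_b", "c": "img_c", "d": "img_d"}
delete_block(contour, "b")
visible = [k for k, v in contour.items() if v is not None]
print(visible)
```

Only blocks a, c, and d remain visible; b's slot is kept but shows nothing, matching the "displayed as transparent" behavior described above.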
• likewise, an image of only part of a block may be replaced; this is not limited in the embodiment of the present invention and may be handled based on the method in the embodiment of the present invention.
  • the extracted three-dimensional curved surface image replaces the image at the corresponding determined position in the contour of the three-dimensional image of the user, thereby realizing the effect of updating the contour of the three-dimensional image of the user.
• in summary, the image data sent by the endoscope is received, saved, and spliced; the spliced image data is identified, matched, and mapped to blocks; and the position of the spliced image data on the user's three-dimensional image contour is determined.
• the three-dimensional surface images belonging to the corresponding blocks in the stitched image data can then replace the images at the determined positions in the contour, so that regardless of whether the actual blocks in the user's mouth exactly match the blocks in the contour, the contour can be updated to the user's actual oral image.
• for example, a block of the user's tooth may be relatively large, while the corresponding tooth block in the three-dimensional image contour is relatively small.
• after the three-dimensional surface image of the block is extracted and directly replaces the image of the tooth block in the contour, the block in the resulting contour is the same as the block actually on the user's tooth.
  • Step 130 Update the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data; wherein an initial value of the three-dimensional image model of the user is a three-dimensional image contour of the user.
• Step 130 specifically includes:
  • the reconstructed three-dimensional image data is replaced with an image of a corresponding determined position in the currently saved three-dimensional image model of the user.
  • the image at the corresponding position in the user's three-dimensional image model can be continuously replaced, thereby realizing the effect of dynamically updating the user's three-dimensional image model.
  • the user's three-dimensional image contour can also be updated, specifically:
• an oral endoscopic image archive for the user can be constructed from the updated three-dimensional image model, so that the oral conditions of different users are recorded separately, which is convenient for follow-up tracking.
• in this way, the user's oral health status and oral treatment progress can be tracked.
  • Step 140 Display the updated three-dimensional image model of the user.
• initially, the three-dimensional image contour is displayed to the user.
• each time the currently saved three-dimensional image model of the user is updated, the updated image is displayed.
• because the three-dimensional image contour contains only a preset single color and single texture (for example, a gray image), while the acquired image data is a three-dimensional true-color image containing the actual colors
• and textures, as updates proceed the user can see the three-dimensional image model being gradually replaced by the three-dimensional true-color image, as if the model were gradually lighting up.
• when the scan is completed, the user sees a 3D true-color image that includes information such as color and texture.
  • the method further includes:
• the user can zoom in or out on the user's three-dimensional image model, and can also rotate it, so as to view the model more clearly.
• in the embodiment of the present invention, the three-dimensional image frame database and the three-dimensional image contour are preset; after the user image data sent by the endoscope is received and spliced, the spliced image data is reconstructed to the corresponding position in the user's three-dimensional image contour.
• once it is determined which block in the oral cavity the data corresponds to, the default surface image at that block's position in the original contour is replaced with the captured three-dimensional true-color curved surface image of the block and displayed on the user's terminal.
• this can significantly improve the efficiency of three-dimensional reconstruction: not only can a three-dimensional image of the user's mouth be obtained and any specific part of the oral cavity viewed, but the user can also scan any position in the mouth at will, without having to scan continuously from a single fixed initial area; the scanned three-dimensional oral image is displayed dynamically, which is more convenient and flexible and enhances the user experience.
  • the three-dimensional image frame database and the three-dimensional image contour are established.
• the three-dimensional image model displayed at the beginning is the three-dimensional image contour; as the user performs more scanning operations in the oral cavity,
• the acquired image data can be reconstructed and updated based on the method in the embodiment of the present invention, so that each block in the three-dimensional image is gradually replaced by the three-dimensional true-color image, while blocks whose reconstruction and update have not yet completed
• still display the default image of the 3D image contour.
• as scanning continues, the camera units on the endoscope can collect three-dimensional true-color images for more of the blocks that still show the default image.
• the three-dimensional true-color images captured by the endoscope's camera units will gradually cover the entire inner surface of the oral cavity, yielding a full-mouth digital endoscopic image; this better supports self-service endoscopic scanning of the user's own oral cavity, without requiring professional operation.
• Embodiment 2:
• in Embodiment 1, the block corresponding to the stitched image data is determined according to the saved three-dimensional image frame database, and the position of the block in the saved three-dimensional image contour of the user is determined; the specific implementation is introduced below:
  • the image data corresponds to a block in the user's three-dimensional image frame database
  • the stitched image data is determined to correspond to a position in the user's three-dimensional image outline.
  • a three-dimensional image frame database is established for all users' oral cavity, wherein each block image is a three-dimensional true color curved surface, and image feature information thereof can be acquired for subsequent image matching.
• according to the image feature information of each block in the three-dimensional image frame database, the spliced image data may be matched against each block in the database.
• the blocks in the three-dimensional image frame database may be grouped by region.
• the region of the stitched image data may then be determined first, and matching performed directly against the image feature information of the blocks in the corresponding region, without matching every block in the 3D image frame database.
  • the block corresponding to the image data in the three-dimensional image frame database can be determined, and then the position in the three-dimensional image frame database can be determined, and then the position in the user's three-dimensional image contour can be determined.
  • the position of the stitched image data in the contour of the user's three-dimensional image is determined.
• thus, the images collected by the endoscope can be identified and matched, and the specific position in the oral cavity to which each image corresponds can be determined, that is, which block or blocks in the mouth the three-dimensional true-color images collected by the endoscope correspond to.
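A minimal sketch of this region-first matching follows. The feature vectors, the cosine-similarity scoring, and the database layout are all assumptions for illustration; the patent does not specify its image pattern recognition algorithm.

```python
# Sketch of region-first matching: instead of comparing the stitched image's
# features against every block in the 3D image frame database, the region is
# determined first and only that region's blocks are scored.

import math

def similarity(f1, f2):
    # cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def match_block(stitched_features, region_id, database):
    """database: region id -> {block id -> feature vector}."""
    candidates = database[region_id]   # restrict the search to one region
    return max(candidates, key=lambda b: similarity(stitched_features, candidates[b]))

# Hypothetical region 2 with two blocks and made-up feature vectors.
db = {2: {"2.1": [1.0, 0.0, 0.2], "2.2": [0.1, 1.0, 0.0]}}
best = match_block([0.9, 0.1, 0.3], 2, db)
print(best)
```

Restricting candidates to one region is what lets matching avoid a scan over the whole database, as the text above describes.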
• In the second manner, the endoscope includes at least two camera units with preset fixed relative positions; the relative spatial positional relationship of the stitched image data is determined according to the preset relative spatial positions of the camera units within the endoscope and the identifiers of the camera units carried in the image data.
  • a plurality of imaging units may be disposed in the endoscope, and the relative positions of the plurality of imaging units in the endoscope are preset.
• for example, there are six imaging units: imaging unit A, imaging unit B, imaging unit C, imaging unit D, imaging unit E, and imaging unit F.
• the fixed relative spatial positions of the camera units are as follows: imaging unit A and imaging unit B face opposite sides, with imaging unit A on the same side as the stretching portion of the oral endoscope.
  • the imaging unit C and the imaging unit D are opposite to each other.
  • the imaging unit E and the imaging unit F are opposite to each other.
  • the connection between the imaging units A and B and the imaging units C and D are perpendicular to each other and are in an orthogonal relationship.
  • the connecting lines of the imaging units A and B and the imaging units E and F are perpendicular to each other and have an orthogonal relationship.
  • the connecting lines of the imaging units C and D and the imaging units E and F are perpendicular to each other and have an orthogonal relationship.
• viewed along the line connecting imaging units A and B, from the side of imaging unit B toward the side of imaging unit A (that is, the stretched-portion side): imaging unit C is on the left side, imaging unit D is on the right side, imaging unit E is on the upper side, and imaging unit F is on the lower side.
• the endoscope adds the identifier of the corresponding camera unit to the image data it supplies, so that the relative spatial positional relationship between the captured image data can be determined from the relative spatial positions of the camera units.
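The six-unit layout can be encoded as a fixed lookup table, since the relative relationship of the images follows from each frame's camera identifier rather than from image content. The table below and the `relative_direction` helper are hypothetical encodings of the layout described above, not the patent's data format.

```python
# Hypothetical encoding of the six-camera layout: A/B, C/D, E/F are opposite
# pairs, and C/D/E/F map to left/right/up/down in the B-toward-A viewing frame.

OPPOSITE = {"A": "B", "B": "A", "C": "D", "D": "C", "E": "F", "F": "E"}

# Viewed from unit B toward unit A (the stretched-portion side):
DIRECTION = {"C": "left", "D": "right", "E": "up", "F": "down"}

def relative_direction(unit_id):
    """Direction of a side-facing unit in the A-B viewing frame."""
    return DIRECTION.get(unit_id, "axial")  # A and B lie on the viewing axis itself

print(OPPOSITE["A"], relative_direction("E"))
```

With such a table, any frame tagged "E" is immediately known to lie above a simultaneous frame tagged "A", which is the inference used in the P(A)–P(F) example below.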
• the second manner builds on the first: with a layout in which multiple camera units in the endoscope form a spherical field of view, image data acquired by multiple camera units at the same moment can be received simultaneously, which improves the accuracy and efficiency of image pattern recognition and also reduces the recognition workload, that is, the time required to determine the mapping relationship.
• suppose imaging unit A actually acquires image information P(A) covering both the lower left 5 second premolar lingual block and the lower left 6 first molar lingual block; according to the preset image pattern recognition algorithm and the image feature information of each tooth surface block, it can be determined that image P(A) covers an adjacent premolar lingual block and molar lingual block, giving the mapping relationship between image information P(A) and the blocks.
• similarly, the images acquired by the other imaging units B, C, D, E, and F are processed, yielding image information P(B), P(C), P(D), P(E), and P(F).
• based on the relative spatial positional relationships among imaging units A, B, C, D, E, and F: if the upper side of image information P(B) is soft mucosa and its lower side is a tongue surface,
• image information P(E) is hard mucosa,
• and image information P(F) is a tongue surface, this further confirms that image information P(A) is a curved surface image of a lingual block and that the oral endoscope is currently located in the user's oral cavity proper.
• if image information P(C) shows oral mucosa at the occlusal gap
• and image information P(D) shows lingual curved surface images of the other lower teeth,
• then the oral endoscope is currently located on the left side of the user's oral cavity proper, and image information P(A) covers curved surface images of the lingual side of the user's lower left teeth.
• it can thus be determined that image information P(A) corresponds to the user's lower left 5 second premolar lingual block and lower left 6 first molar lingual block.
• in this way, each camera unit in the endoscope contributes to a more complete oral 3D image, which can greatly improve the efficiency of 3D reconstruction.
• In the third manner, when the spliced image data is matched with the images of blocks in the three-dimensional image frame database, on the basis of the first and second manners, the method further includes:
• obtaining the first mapping relationship between the spliced image data and the blocks in the three-dimensional image frame database according to the preset spatial positional relationships between blocks in the database.
  • the third way may be based on the first mode and the second mode, and further improve the accuracy and efficiency of the identification according to the spatial positional relationship between the blocks.
• the spatial positional relationships between the blocks are recorded, for example the adjacency relationships between blocks, where adjacencies include the front, rear, left, right, upper, and lower boundaries; even when image information sufficient to cover the entire surface of one block has been acquired, image pattern recognition alone may not determine the corresponding block,
• but the surface images of multiple blocks can then be used as a whole for pattern recognition, which removes much of the uncertainty, improves the accuracy of recognition and matching, and shortens the time required for the recognition operation.
• for example, when image information P(a) covering the lingual block of the lower left 5 second premolar is obtained, because the lower left 4 first premolar lingual block and the lower left 5 second premolar lingual block are highly similar, the image features identified by pattern recognition may match both, and it is difficult to determine the final mapping relationship.
  • the endoscope collects more image information and returns it, and can continue the splicing operation and gradually enlarge the P(a) image area.
• when the P(a) image area is enlarged to cover the image information P(b) of the lower left 5 second premolar lingual block and the lower left 6 first molar lingual block, according to the preset image pattern recognition algorithm and the image feature information of each tooth surface block, it can be determined that image P(b) covers the adjacent left lower premolar lingual block and left lower molar lingual block.
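The adjacency-based disambiguation in this example can be sketched as a simple candidate filter. The block identifiers (e.g. `LL5_lingual`) and the adjacency table are invented for illustration; any real system would use the frame database's own block numbering and spatial relations.

```python
# Sketch of the third manner: when a single block's features are ambiguous
# (lower-left 4 and lower-left 5 lingual blocks look alike), the adjacency
# relations preset in the frame database disambiguate the match.

ADJACENT = {
    "LL5_lingual": {"LL4_lingual", "LL6_lingual"},
    "LL4_lingual": {"LL3_lingual", "LL5_lingual"},
}

def disambiguate(candidates, neighbor_block):
    """Keep only candidates whose preset adjacency contains the identified neighbor."""
    return [c for c in candidates if neighbor_block in ADJACENT.get(c, set())]

# P(a) alone matches both premolar lingual blocks; once the enlarged image P(b)
# also shows the first-molar lingual block (LL6), only LL5 remains.
result = disambiguate(["LL4_lingual", "LL5_lingual"], "LL6_lingual")
print(result)
```

Treating the two adjoining surfaces as one whole removes the ambiguity that either surface alone could not resolve, which is the effect described above.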
• the first manner applies when the endoscope has one or more camera units;
• the second manner applies when the endoscope has multiple camera units;
• the third manner likewise applies when the endoscope has one or more camera units.
• the first manner mainly relies on image pattern recognition to determine the mapping relationship between the stitched image data and the blocks in the three-dimensional image frame database; the second manner further refers to the relative spatial positions of the camera units, improving the accuracy and efficiency of determining that mapping relationship, so that the specific location in the oral cavity corresponding to the captured data can be determined more accurately; the third manner, building on the first and second manners, further refers to the spatial positional relationships between blocks, which can also improve the accuracy of determining the mapping relationship between the stitched image data and the blocks.
  • the method further includes:
• if at least two sets of first mapping relationships are obtained, a first preset number of them is selected according to the confidence of each set, and the selected sets serve as the basis for calculating the first mapping relationships when the next image data of the user sent by the endoscope is received; thus, for each newly received image data, the mapping relationships are derived from the previously selected first preset number of sets, until the largest number of first mapping relationships not greater than a second preset number is obtained.
• the superimposed confidence of each of these sets is then determined; if the superimposed confidence of any one set among the second preset number of first mapping relationships is not less than a preset threshold,
• that set of first mapping relationships is used as the second mapping relationship between the stitched image data and the blocks in the three-dimensional image frame database.
  • the whole process described above can be regarded as a process of constructing a search tree. For example, if eight camera units are provided in the endoscope, image data captured by eight camera units, that is, eight RGBD images, can be received each time.
  • the first preset number is 3, and the second preset number is 1000.
• the three sets with the highest confidence are selected from the n sets of first mapping relationships,
• for example n(11), n(12), and n(13), in which case three sets are retained.
• after the next image data is received, if each retained set yields m candidate mappings, a 3×m set of mapping relationships is obtained; if 3×m is the largest value not greater than 1000, a decision is made: the set with the highest superimposed confidence among the 3×m sets is selected as the second mapping relationship between the spliced image data and the blocks, that is, the final mapping relationship. The spliced image data can then be reconstructed based on the second mapping relationship, the reconstructed three-dimensional image data is placed at the determined position on the three-dimensional image contour, and the user's three-dimensional image model is updated and displayed.
• Step 1: image information P(1) including part of the surface of the lingual block of the lower left 5 second premolar is obtained; according to the preset image pattern recognition algorithm and the image feature information of each block, it is assumed that the mapping relationship of P(1) relates to a tooth surface, and multiple sets of first mapping relationships can be obtained, rather than surfaces of the gingiva, the tongue, or the various oral mucosae.
• Step 2: after that, as the user roams the oral endoscope inside his own mouth, more image information is collected and returned, the splicing operation continues, and the P(1) image area is gradually enlarged. After the P(1) area is expanded to cover the image information P(2) of the entire lingual block of the lower left 5 second premolar, according to the preset image pattern recognition algorithm, the image feature information of each tooth surface block, and the mapping relationships corresponding to P(1), the mapping relationship of P(2) is obtained as the lingual block of a premolar, for example the lower left 4 first premolar, the lower left 5 second premolar, the lower right 4 first premolar, the lower right 5 second premolar, the upper left 4 first premolar, the upper left 5 second premolar, the upper right 4 first premolar, the upper right 5 second premolar, or another premolar.
• Step 3: as the user roams the oral endoscope in his or her mouth, more image data is acquired; when the P(2) area is expanded to cover the image information P(3) of the lower left 5 second premolar lingual block and the lower left 6 first molar lingual block, according to the preset image pattern recognition algorithm and the image feature information of each tooth surface block, it can be determined that P(3) covers the adjacent left lower premolar lingual block and left lower molar lingual block.
• that is, the image information P(3) covers the lower left 5 second premolar lingual block and the lower left 6 first molar lingual block.
• the shooting interval of the image capturing units in the endoscope can be set, for example to 20 frames per second, so that captured image data is acquired within a short time; the above processing is likewise
• short and does not affect the subsequent display process: the user perceives no pause, and the user experience is not affected.
• in other words, the final mapping relationship need not be obtained every time image data is received; rather, as more image data is acquired, the decisions can be made in sequence until the mapping with the maximum confidence is obtained.
• Embodiment 3:
  • the three-dimensional image frame database and the three-dimensional image profile are described in detail below.
• the three-dimensional image framework database is constructed to cover the various conditions of the human oral cavity; it stores general framework data for the three-dimensional image model of the human oral cavity, and this framework data covers
• the image feature information of all surface areas of the human oral cavity under those conditions, such as shape features, color features, and texture features. The conditions include, for adults: normal oral health scenes, oral staining scenes, oral pathology scenes, oral malformation scenes, oral trauma scenes, and scenes of deciduous teeth being replaced by permanent teeth; and, for children: normal oral scenes, oral staining scenes, oral pathology scenes, oral malformation scenes, oral trauma scenes, and deciduous tooth eruption scenes.
• furthermore, the human full-oral inner surface three-dimensional image frame database of the present invention can be continuously updated and expanded; for example, image feature information for a new oral pathology scene or a new oral trauma scene can be added. In this way, the accuracy of matching can be further improved.
  • the three-dimensional image framework database includes at least each block of the pre-divided three-dimensional image frame.
• when dividing blocks, the three-dimensional image frame may be divided directly into individual blocks;
• alternatively, regions may be divided first, and each region then divided into blocks, which makes the division more efficient.
• the embodiment of the present invention is not limited thereto.
• once each region and each block is determined, the inner surface of the whole oral cavity is divided into a series of interconnected regions, and each region is divided into a series of interconnected blocks.
• the regions may be divided according to the functions of the various parts of the oral cavity, and each region has at least one piece of number information.
• for example, the inner surface of the oral cavity can be divided into 14 regions, namely: the anterior wall surface area of the oral vestibule, the posterior wall surface area of the oral vestibule, the upper oral vestibular groove area, the lower oral vestibular groove area, the left occlusal gap area, the right occlusal gap area, and so on.
  • Each of the regions corresponds to one number information, for example, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14, respectively.
• the division of regions in the oral cavity is not limited; the purpose is to divide the oral cavity into distinguishable regions that together form a complete oral structure.
• when dividing blocks, the interior of a block should have as uniform a texture and as uniform a color as possible, and be as close to planar as possible.
  • the size and specific shape of the block are not limited in the embodiment of the present invention, and may be determined by comprehensively considering the accuracy requirement and the calculation amount requirement of the three-dimensional image.
  • the image data of the block includes: number information and image feature information.
  • the location information of the image of the block includes: a spatial positional relationship between each block.
• each block has its own unique label (for example, a name and a number) and related image feature information; a complete block labeling system is established so that each block can be found quickly and the image feature information of each block is known.
• the labeling system of a block includes at least: number information, image feature information, and the spatial positional relationships between blocks, and may also include name information, file attribute description information, three-dimensional surface patterns, and so on.
• for example, the posterior wall surface area of the oral vestibule, numbered 2, can be divided into six blocks, namely: block (2.1), the labial mucosal block of the maxillary alveolar ridge;
• block (2.2), the left buccal mucosal block of the maxillary alveolar ridge;
• block (2.3), the right buccal mucosal block of the maxillary alveolar ridge;
• block (2.4), the labial mucosal block of the mandibular alveolar ridge;
• block (2.5), the left buccal mucosal block of the mandibular alveolar ridge;
• block (2.6), the right buccal mucosal block of the mandibular alveolar ridge.
  • the division of the blocks in the area is not limited, and may be divided according to actual conditions to ensure that each block can form a complete corresponding area.
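A block record in such a labeling system can be sketched as a small data structure. The field names, the sample entry for block (2.1), and its feature and neighbor values are assumptions made for illustration, not contents of the actual database.

```python
# Hypothetical record for the block labeling system: each block carries number
# information, image feature information, and spatial relations to other blocks.

from dataclasses import dataclass, field

@dataclass
class Block:
    number: str                                   # e.g. "2.1" = region 2, block 1
    name: str
    features: dict = field(default_factory=dict)  # shape / color / texture features
    neighbors: dict = field(default_factory=dict) # direction -> adjacent block number

labels = {
    "2.1": Block("2.1", "maxillary alveolar ridge labial mucosa",
                 features={"color": "pink", "texture": "smooth"},
                 neighbors={"left": "2.2", "right": "2.3"}),
}
print(labels["2.1"].neighbors["left"])
```

Keying the dictionary by number information is what allows each block to "be found quickly", and the `neighbors` field holds the spatial positional relationships used in the third matching manner.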
• the constructed three-dimensional image framework database not only divides the oral cavity into regions and blocks but also establishes a block labeling system, which can accurately identify the various positions in the oral cavity and facilitates 3D image matching and reconstruction. Moreover, this enables the image processing apparatus of the present invention to acquire semantic information about the image data received from the endoscope while processing it, creating conditions for oral endoscopic image inspection using artificial intelligence technology.
  • the three-dimensional image contour stores shape contour data of a three-dimensional image of each region (including each block) of the inner surface of the human full oral cavity.
  • the three-dimensional image contour of the user stores at least shape contour data of the three-dimensional image of each block in the oral cavity of the user.
  • the three-dimensional image contour of the user is obtained based on the three-dimensional image frame database or the three-dimensional image model of the user; wherein an image of each of the three-dimensional image contours is based on the three-dimensional image frame database or The three-dimensional surface shape of the image of the block in the user's three-dimensional image model, including a preset single color and a single texture image.
• the three-dimensional image contour thus contains the default image of each block of the oral endoscopic panoramic view.
• the default image of each block in the three-dimensional image contour includes only the three-dimensional surface shape of the block, with a single color and a single texture, that is, without the block's true color and actual texture.
• for example, the outer surface of each block's default three-dimensional surface in the contour may be set to a smooth-textured dark gray.
  • The user's three-dimensional image contour may be updated according to the actual situation of each user.
  • When a user uses the device for the first time, a standard three-dimensional image contour preset according to the user's age information is displayed.
  • As the user scans the oral cavity with the oral endoscope and images are continuously collected, the image processing method of the above embodiments of the present invention performs stitching, recognition, reconstruction, and related operations. As soon as the processing of any block is completed, the default surface image of that block in the original contour is replaced with the acquired three-dimensional true-color surface image of the block and displayed on the user terminal.
  • The user's 3D image model is thus continuously updated: the oral endoscopic 3D true-color surface image shown on the user terminal becomes increasingly complete, while less and less of the default surface image from the original 3D image contour remains.
  • When scanning is finished, the panoramic oral endoscopic images displayed on the user terminal consist entirely of three-dimensional true-color curved-surface images.
  • A three-dimensional image contour is then extracted from the updated model, replacing the previously preset standard contour; the next time the user uses the oral endoscope, this user-specific contour is the first one displayed. In other words, all users see the standard contour preset by age information on first use; thereafter the contour is updated per user and represents that user's actual three-dimensional image contour.
  • As the three-dimensional image contour is continuously updated, the contours obtained at different times differ accordingly.
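The progressive "lighting up" described above can be sketched as a simple update loop (Python; the data layout and function names are illustrative, not taken from the patent): the display starts from the default contour, and each completed block replaces its default surface with the scanned true-color surface.

```python
# Contour: every block starts with a default gray surface.
contour = {bid: {"surface": "default-gray", "scanned": False} for bid in range(1, 6)}

def reconstruct_block(contour, block_id, true_color_surface):
    """Replace the default surface of one block as soon as it is reconstructed."""
    contour[block_id]["surface"] = true_color_surface
    contour[block_id]["scanned"] = True

def completion(contour):
    """Fraction of blocks already displayed as true-color."""
    done = sum(1 for b in contour.values() if b["scanned"])
    return done / len(contour)

reconstruct_block(contour, 2, "rgbd-patch-2")
reconstruct_block(contour, 4, "rgbd-patch-4")
print(completion(contour))  # 0.4
```

The completion fraction is also what lets the user see at a glance which blocks still show the default image and steer the endoscope toward them.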
  • Embodiment 4:
  • FIG. 2 is a schematic diagram of an implementation of an image processing method in an embodiment of the present invention.
  • The endoscope is an oral endoscope.
  • The oral endoscope uploads the collected oral images to the intelligent terminal.
  • The intelligent terminal performs the image processing method of the embodiments of the present invention and dynamically displays the three-dimensional image of the oral cavity.
  • The intelligent terminal may be, for example, a mobile phone or a computer; this is not limited in the embodiments of the present invention. Operations such as matching and reconstruction may also be performed in the cloud, with the three-dimensional image of the oral cavity displayed on the mobile phone.
  • Initially, the three-dimensional image model of the user displayed in the smart terminal is the preset three-dimensional image contour, that is, a gray single-color, single-texture image.
  • The camera units in the oral endoscope capture three-dimensional true-color images of the oral cavity and upload them to the smart terminal, which processes the received data using the image processing method of the above embodiments of the present invention.
  • During scanning, whenever a partial 3D image is reconstructed, the updated 3D image model of the user is displayed. As can be seen from Fig. 3: the upper palate retains the original frame and is not lit; the back of the tongue retains the original frame and is not lit; in the upper dentition, the first and second teeth on the right are lit, indicating that their scan has been completed; the other teeth of the upper dentition retain the original frame and are not lit; in the lower dentition, the lingual sides of the third and fourth teeth on the left are lit, indicating that their scan has been completed; the remaining parts retain the original frame and are not lit.
  • The lit portions are actually three-dimensional true-color images; a darker gray is used in the figure only to distinguish them from the unlit portions.
  • The reconstructed portions show the user's real three-dimensional true-color image of the oral cavity, including actual color and texture information, while the unreconstructed portions still show the preset gray single-color, single-texture image of the 3D image contour.
  • In Fig. 4, the display shows the 3D image model after all blocks have been reconstructed: a three-dimensional true-color image including color, texture, and other information (in Fig. 4, likewise, the darker gray is used only to distinguish the image from the unlit portions).
  • This three-dimensional true-color image is consistent with the user's mouth and can truly reflect the appearance and condition of the user's mouth.
  • Embodiment 5:
  • FIG. 5 is a schematic diagram of the environment architecture of an application scenario in Embodiment 5 of the present invention.
  • Application software may be developed to implement the image processing method of the embodiments of the present invention; the software may be installed in a user terminal, which is connected to the endoscope and to the network subsystem respectively to implement communication.
  • The user terminal may be any smart device, such as a mobile phone, a computer, or an iPad.
  • In the following description, only the mobile phone is taken as an example.
  • The user scans the oral cavity with the endoscope, which collects images and sends them to the user terminal. The user terminal acquires the three-dimensional image framework database and the three-dimensional image contour from the server through the network subsystem; processes, saves, and stitches the received image data; determines the position in the user's three-dimensional image contour corresponding to the stitched image data; reconstructs the stitched image data at the corresponding determined position in the contour to obtain reconstructed three-dimensional image data; and updates and displays the currently saved 3D image model of the user. When the oral endoscopic scan is completed, a three-dimensional image of the user's mouth is obtained.
  • As shown in FIG. 6, the image processing apparatus specifically includes:
  • a receiving unit 60, configured to receive image data of a user sent by the endoscope, wherein the image data includes at least image data captured by a camera unit in the endoscope, and the type of the image data is a depth image;
  • a processing unit 61, configured to save the received image data and determine whether the saved image data can be stitched together; when stitching is determined to be possible, stitch the saved image data to obtain stitched image data; determine, according to a saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user; reconstruct the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain reconstructed three-dimensional image data, wherein the three-dimensional image framework database stores image data of the blocks into which the three-dimensional image framework image is divided together with position information of the image of each block; and update the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, wherein the initial value of the user's three-dimensional image model is the user's three-dimensional image contour;
  • a display unit 62, configured to display the updated three-dimensional image model of the user.
  • The image data of a block includes: number information and image feature information;
  • the position information of the image of a block includes: the spatial positional relationships between blocks;
  • the image of each block in the three-dimensional image contour is the three-dimensional curved-surface shape of that block's image in the three-dimensional image framework database or in the user's three-dimensional image model, with a preset single color and single texture.
  • When determining, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user, the processing unit 61 is specifically configured to:
  • if the endoscope includes at least two camera units with preset fixed relative positions, determine the relative spatial positional relationships of the stitched image data according to the preset relative spatial position of each camera unit within the endoscope and the camera-unit identifiers carried in the image data;
  • match the stitched image data against the block images in the three-dimensional image framework database according to a preset image pattern recognition algorithm, based on the image feature information of the blocks in the database and the relative spatial positional relationships of the stitched image data, to obtain a first mapping relationship between the stitched image data and the blocks in the database.
  • The processing unit 61 is further configured to:
  • when the stitched image data is determined to correspond to at least two blocks, obtain the first mapping relationship between the stitched image data and the blocks in the three-dimensional image framework database according to the preset spatial positional relationships between blocks in the database.
  • The processing unit 61 is further configured to:
  • if at least two sets of first mapping relationships are obtained, select a first preset number of first mapping relationships from the at least two sets according to the confidence of each set, and use the selected first mapping relationships when computing the first mapping relationship the next time image data of the user sent by the endoscope is received, so that for each subsequently received batch of image data the mapping relationships are derived from the previously selected first preset number of first mapping relationships, until at most a second preset number of first mapping relationships are obtained; then evaluate the superimposed confidence of each of the second preset number of first mapping relationships, and if the superimposed confidence of any one set is determined to be not less than a preset threshold, take that set of first mapping relationships as the second mapping relationship between the stitched image data and the blocks in the three-dimensional image framework database.
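The confidence-based hypothesis pruning described above resembles a beam search: at each round the top-ranked candidate mappings are kept, their confidences are superimposed across successive batches (here by summation, which is only one plausible reading of "superimposed confidence"), and a hypothesis is accepted once its superimposed confidence reaches the threshold. A minimal sketch (Python; all names and the combination rule are assumptions):

```python
def select_mapping(batches, first_preset, second_preset, threshold):
    """Prune mapping hypotheses batch by batch; accept one when its superimposed
    confidence reaches the threshold. Each batch maps a candidate block
    assignment to a confidence in [0, 1]."""
    survivors = {}
    for batch in batches:
        if not survivors:
            survivors = dict(batch)
        else:
            # Superimpose: combine the confidence of hypotheses seen again.
            survivors = {cand: survivors[cand] + conf
                         for cand, conf in batch.items() if cand in survivors}
        # Keep only the first_preset best hypotheses for the next batch.
        survivors = dict(sorted(survivors.items(),
                                key=lambda kv: kv[1], reverse=True)[:first_preset])
        if len(survivors) <= second_preset:
            for cand, conf in survivors.items():
                if conf >= threshold:
                    return cand   # the "second mapping relationship"
    return None

batches = [
    {"LL5+LL6": 0.9, "LR5+LR6": 0.8, "UL5+UL6": 0.4},
    {"LL5+LL6": 0.95, "LR5+LR6": 0.5},
]
print(select_mapping(batches, first_preset=2, second_preset=2, threshold=1.5))
```

With these illustrative numbers, the lower-left hypothesis accumulates 0.9 + 0.95 = 1.85 and is accepted after the second batch.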
  • When reconstructing the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data,
  • the processing unit 61 is specifically configured to:
  • extract, from the stitched image data, the three-dimensional curved-surface image belonging to the corresponding block according to the boundary feature information of the blocks in the three-dimensional image framework database, and replace the image at the corresponding determined position in the user's three-dimensional image contour with the extracted three-dimensional curved-surface image to obtain the reconstructed three-dimensional image data.
  • When updating the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, the processing unit 61 is specifically configured to: replace the image at the corresponding determined position in the currently saved three-dimensional image model of the user with the reconstructed three-dimensional image data.
  • The processing unit 61 is further configured to: obtain, from the updated three-dimensional image model of the user, the three-dimensional image contour corresponding to the updated model, and update the saved three-dimensional image contour of the user accordingly.
  • The receiving unit 60 is further configured to receive image data of the user sent again by the endoscope, and the processing unit 61 is further configured to return to: saving the received image data and determining whether the saved image data can be stitched together; when stitching is determined to be possible, stitching the saved image data to obtain stitched image data; determining, according to the saved three-dimensional image framework database and the saved three-dimensional image contour of the user, the position in the contour corresponding to the stitched image data; reconstructing the stitched image data at the corresponding determined position in the contour to obtain reconstructed three-dimensional image data; and updating the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data.
  • Preferably, the apparatus further includes:
  • an operation unit 63, configured to receive an operation instruction from the user and, according to the instruction, perform the corresponding operation on the displayed updated three-dimensional image model of the user.
  • The receiving unit 60, the processing unit 61, the display unit 62, and the operation unit 63 may be integrated into one user terminal, for example a mobile phone, or may of course be distributed. For example, the receiving unit 60 and the processing unit 61 may be integrated into the handle of the endoscope while the display unit 62 and the operation unit 63 are integrated in the handset; or the receiving unit 60 and part of the functions of the processing unit 61 may be integrated into the handle of the endoscope while the remaining functions of the processing unit 61, the display unit 62, and the operation unit 63 are integrated in the mobile phone. No restriction is imposed on the actual implementation.
  • The embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

The present invention relates to the field of image processing technology, and in particular to an image processing method and apparatus. The method comprises: receiving image data of a user sent by an endoscope, saving and stitching it to obtain stitched image data; determining, according to a saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user; reconstructing the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain reconstructed three-dimensional image data, and updating the currently saved three-dimensional image model of the user; and displaying the updated three-dimensional image model of the user. In this way, based on the three-dimensional image framework database and the three-dimensional image contour, a three-dimensional image of the oral cavity can be reconstructed and dynamically displayed without requiring a continuous, ordered scan of the oral cavity, improving user interaction and providing good support for user self-service three-dimensional true-color impression taking.

Description

An image processing method and apparatus
This application claims priority to Chinese patent application No. 201710744863.3, filed with the Chinese Patent Office on August 25, 2017 and entitled "An image processing method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of image processing technology, and in particular to an image processing method and apparatus.
Background
A traditional intraoral scanner is a device for dental optical impression taking. In use, its optical tip must be moved in an ordered fashion along the user's upper and lower dentition; it does not support free roaming of the scanning tip within the oral cavity, is highly specialized, offers poor user interaction, and requires a professional operator.
Recently, user self-service oral endoscopes have also appeared: the user holds the device, inserts the camera portion into the mouth, and rotates it to inspect the oral cavity. However, each endoscopic image shows only a very small local area; although a tooth may be visible, it is difficult to confirm which tooth it is, or where in the oral cavity the currently viewed detail is located. Moreover, lacking three-dimensional image information, such devices cannot perform full-dentition impression taking or generate a current digital model of the full dentition in real time, and therefore cannot support dental applications such as 3D printing of tray-type tooth cleaners.
Summary
Embodiments of the present invention provide an image processing method and apparatus to solve the prior-art problems of poor user interaction in oral image presentation and the difficulty of supporting user self-service three-dimensional true-color impression taking.
The specific technical solutions provided by the embodiments of the present invention are as follows:
An image processing method, comprising:
Step A: receiving image data of a user sent by an endoscope, wherein the image data includes at least image data captured by a camera unit in the endoscope, and the type of the image data is a depth image;
Step B: saving the received image data, and determining whether the saved image data can be stitched together; when stitching is determined to be possible, stitching the saved image data to obtain stitched image data;
Step C: determining, according to a saved three-dimensional image framework database, the block corresponding to the stitched image data, determining the position of that block in the saved three-dimensional image contour of the user, and reconstructing the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain reconstructed three-dimensional image data, wherein the three-dimensional image framework database stores image data of the blocks into which the three-dimensional image framework image is divided, together with position information of the image of each block;
Step D: updating the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, wherein the initial value of the user's three-dimensional image model is the user's three-dimensional image contour;
Step E: displaying the updated three-dimensional image model of the user.
Preferably, the image data of a block includes: number information and image feature information;
the position information of the image of a block includes: the spatial positional relationships between blocks;
and the image of each block in the three-dimensional image contour is the three-dimensional curved-surface shape of that block's image in the three-dimensional image framework database or in the user's three-dimensional image model, with a preset single color and single texture.
Preferably, determining, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user specifically includes:
matching the stitched image data against the block images in the three-dimensional image framework database according to a preset image pattern recognition algorithm, based on the image feature information of the blocks in the database, to obtain a first mapping relationship between the stitched image data and the blocks in the database;
and determining, according to the spatial positional relationships between blocks and/or their number information, the position in the user's three-dimensional image contour of the database block corresponding to the stitched image data.
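As a concrete illustration of the matching clause above, the sketch below (Python; the feature representation and similarity measure are assumptions, since the patent only requires "a preset image pattern recognition algorithm") scores a stitched patch against each block's stored feature vector and returns the best candidates as a first mapping relationship:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def first_mapping(patch_features, block_db, top_k=2):
    """Return the top_k (block_id, similarity) candidates for one stitched patch."""
    scored = [(bid, cosine(patch_features, feats)) for bid, feats in block_db.items()]
    scored.sort(key=lambda kv: kv[1], reverse=True)
    return scored[:top_k]

# Hypothetical shape/color/texture feature vectors for three blocks.
block_db = {
    "LL5 lingual": [0.90, 0.10, 0.30],
    "LL6 lingual": [0.20, 0.80, 0.40],
    "UL5 lingual": [0.85, 0.15, 0.35],
}
patch = [0.88, 0.12, 0.31]
print(first_mapping(patch, block_db))
```

Note that similar blocks (here the lower-left and upper-left premolars) score close to each other, which is exactly why the method falls back on inter-block spatial relationships to disambiguate.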
Preferably, determining, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user specifically includes:
if the endoscope includes at least two camera units with preset fixed relative positions, determining the relative spatial positional relationships of the stitched image data according to the preset relative spatial position of each camera unit within the endoscope and the camera-unit identifiers carried in the image data;
matching the stitched image data against the block images in the three-dimensional image framework database according to a preset image pattern recognition algorithm, based on the image feature information of the blocks in the database and the relative spatial positional relationships of the stitched image data, to obtain a first mapping relationship between the stitched image data and the blocks in the database;
and determining, according to the spatial positional relationships between blocks and/or their number information, the position in the user's three-dimensional image contour of the database block corresponding to the stitched image data.
Preferably, when matching the stitched image data against the block images in the three-dimensional image framework database, the method further includes:
when the stitched image data is determined to correspond to at least two blocks, obtaining the first mapping relationship between the stitched image data and the blocks in the database according to the preset spatial positional relationships between blocks in the database.
Preferably, the method further includes:
if at least two sets of first mapping relationships are obtained, selecting a first preset number of first mapping relationships from the at least two sets according to the confidence of each set, and using the selected first mapping relationships when computing the first mapping relationship the next time image data of the user sent by the endoscope is received, so that for each subsequently received batch of image data the mapping relationships are derived from the previously selected first preset number of first mapping relationships, until at most a second preset number of first mapping relationships are obtained; then evaluating the superimposed confidence of each of the second preset number of first mapping relationships, and if the superimposed confidence of any one set is determined to be not less than a preset threshold, taking that set of first mapping relationships as the second mapping relationship between the stitched image data and the blocks in the three-dimensional image framework database.
Preferably, reconstructing the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data specifically includes:
extracting, from the stitched image data, the three-dimensional curved-surface image belonging to the corresponding block according to the boundary feature information of the blocks in the database, wherein the image feature information includes at least the boundary feature information of the blocks;
and replacing the image at the corresponding determined position in the user's three-dimensional image contour with the extracted three-dimensional curved-surface image to obtain the reconstructed three-dimensional image data.
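The extract-and-replace step above can be sketched with a boundary mask (Python with NumPy; the mask representation is an assumption, as the patent speaks only of "boundary feature information"): pixels of the stitched patch inside the block's boundary are copied into the contour at the block's position, leaving everything else untouched.

```python
import numpy as np

def reconstruct(contour, stitched, mask):
    """Replace, inside the block boundary given by `mask`, the default
    contour surface with the stitched true-color surface."""
    out = contour.copy()
    out[mask] = stitched[mask]
    return out

contour = np.zeros((4, 4))          # default gray surface (0 = gray)
stitched = np.full((4, 4), 7.0)     # stitched true-color data (7 stands in for color)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True               # block interior per the boundary features

result = reconstruct(contour, stitched, mask)
print(int(result.sum()))  # 4 masked pixels x 7 = 28
```

If the stitched data covers only part of the block (e.g., only its upper boundary region), the mask simply covers that partial region, matching the gradual build-up described in Embodiment 1.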
Preferably, updating the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data specifically includes:
replacing the image at the corresponding determined position in the currently saved three-dimensional image model of the user with the reconstructed three-dimensional image data;
and further includes:
obtaining, from the updated three-dimensional image model of the user, the three-dimensional image contour corresponding to the updated model, and updating the saved three-dimensional image contour of the user accordingly.
Preferably, the method further includes:
returning to Step B when image data of the user sent by the endoscope is received again.
Preferably, after displaying the updated three-dimensional image model of the user, the method further includes:
receiving an operation instruction from the user and, according to the instruction, performing the corresponding operation on the displayed updated three-dimensional image model of the user.
An image processing apparatus, comprising:
a receiving unit, configured to receive image data of a user sent by an endoscope, wherein the image data includes at least image data captured by a camera unit in the endoscope, and the type of the image data is a depth image;
a processing unit, configured to save the received image data and determine whether the saved image data can be stitched together; when stitching is determined to be possible, stitch the saved image data to obtain stitched image data; determine, according to a saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user; reconstruct the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain reconstructed three-dimensional image data, wherein the three-dimensional image framework database stores image data of the blocks into which the three-dimensional image framework image is divided together with position information of the image of each block; and update the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, wherein the initial value of the user's three-dimensional image model is the user's three-dimensional image contour;
and a display unit, configured to display the updated three-dimensional image model of the user.
Preferably, the image data of a block includes: number information and image feature information;
the position information of the image of a block includes: the spatial positional relationships between blocks;
and the image of each block in the three-dimensional image contour is the three-dimensional curved-surface shape of that block's image in the three-dimensional image framework database or in the user's three-dimensional image model, with a preset single color and single texture.
Preferably, when determining, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user, the processing unit is specifically configured to:
match the stitched image data against the block images in the three-dimensional image framework database according to a preset image pattern recognition algorithm, based on the image feature information of the blocks in the database, to obtain a first mapping relationship between the stitched image data and the blocks in the database;
and determine, according to the spatial positional relationships between blocks and/or their number information, the position in the user's three-dimensional image contour of the database block corresponding to the stitched image data.
Preferably, when determining, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user, the processing unit is specifically configured to:
if the endoscope includes at least two camera units with preset fixed relative positions, determine the relative spatial positional relationships of the stitched image data according to the preset relative spatial position of each camera unit within the endoscope and the camera-unit identifiers carried in the image data;
match the stitched image data against the block images in the database according to a preset image pattern recognition algorithm, based on the image feature information of the blocks and the relative spatial positional relationships of the stitched image data, to obtain a first mapping relationship between the stitched image data and the blocks in the database;
and determine, according to the spatial positional relationships between blocks and/or their number information, the position in the user's three-dimensional image contour of the database block corresponding to the stitched image data.
Preferably, when matching the stitched image data against the block images in the three-dimensional image framework database, the processing unit is further configured to:
when the stitched image data is determined to correspond to at least two blocks, obtain the first mapping relationship between the stitched image data and the blocks in the database according to the preset spatial positional relationships between blocks in the database.
Preferably, the processing unit is further configured to:
if at least two sets of first mapping relationships are obtained, select a first preset number of first mapping relationships from the at least two sets according to the confidence of each set, and use the selected first mapping relationships when computing the first mapping relationship the next time image data of the user sent by the endoscope is received, so that for each subsequently received batch of image data the mapping relationships are derived from the previously selected first preset number of first mapping relationships, until at most a second preset number of first mapping relationships are obtained; then evaluate the superimposed confidence of each of the second preset number of first mapping relationships, and if the superimposed confidence of any one set is determined to be not less than a preset threshold, take that set of first mapping relationships as the second mapping relationship between the stitched image data and the blocks in the three-dimensional image framework database.
Preferably, when reconstructing the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data, the processing unit is specifically configured to:
extract, from the stitched image data, the three-dimensional curved-surface image belonging to the corresponding block according to the boundary feature information of the blocks in the database, wherein the image feature information includes at least the boundary feature information of the blocks;
and replace the image at the corresponding determined position in the user's three-dimensional image contour with the extracted three-dimensional curved-surface image to obtain the reconstructed three-dimensional image data.
Preferably, when updating the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, the processing unit is specifically configured to:
replace the image at the corresponding determined position in the currently saved three-dimensional image model of the user with the reconstructed three-dimensional image data.
The processing unit is further configured to:
obtain, from the updated three-dimensional image model of the user, the three-dimensional image contour corresponding to the updated model, and update the saved three-dimensional image contour of the user accordingly.
Preferably, the receiving unit is further configured to receive image data of the user sent again by the endoscope, and the processing unit is further configured to return to: saving the received image data and determining whether the saved image data can be stitched together; when stitching is possible, stitching the saved image data to obtain stitched image data; determining, according to the saved three-dimensional image framework database and the saved three-dimensional image contour of the user, the position in the contour corresponding to the stitched image data; reconstructing the stitched image data at the corresponding determined position in the contour to obtain reconstructed three-dimensional image data; and updating the currently saved three-dimensional image model of the user accordingly.
Preferably, the apparatus further includes:
an operation unit, configured to receive an operation instruction from the user and, according to the instruction, perform the corresponding operation on the displayed updated three-dimensional image model of the user.
In the embodiments of the present invention, image data of a user sent by an endoscope is received, where the image data includes at least image data captured by a camera unit in the endoscope and is of the depth-image type; the received image data is saved and checked for mutual stitchability, and when stitching is possible the saved data is stitched to obtain stitched image data; according to the saved three-dimensional image framework database and the saved three-dimensional image contour of the user, the position in the contour corresponding to the stitched image data is determined, and the stitched data is reconstructed at that position to obtain reconstructed three-dimensional image data; the currently saved three-dimensional image model of the user, whose initial value is the user's three-dimensional image contour, is updated accordingly; and the updated model is displayed. In this way, based on the established three-dimensional image framework database and the user's three-dimensional image contour, received image data is saved, stitched, processed, and reconstructed, and the user's model is updated and displayed in real time. The endoscope need not scan the oral cavity in a continuous, ordered manner: the user may scan freely, and as long as image data of the inner oral surface is obtained, whether ordered or not, a three-dimensional image of the oral cavity can be reconstructed. This improves reconstruction efficiency, requires no professional operator, supports user self-service oral endoscopy well, presents the user's oral cavity as a dynamically displayed three-dimensional image with better display effect, improves the user experience and interaction, and provides good support for user self-service three-dimensional true-color impression taking.
Moreover, because the embodiments of the present invention establish a three-dimensional image framework database that includes at least every block of the pre-divided three-dimensional image framework, together with a complete block labeling system, each block includes number information, name information, archive attribute description information, a three-dimensional curved-surface pattern, image feature information, and the spatial positional relationships between blocks. This enables the image processing apparatus of the present invention to acquire semantic information about the image data received from the endoscope during processing, creating the conditions for applying artificial intelligence techniques to oral endoscopic image inspection.
Brief Description of the Drawings
FIG. 1 is a flowchart of a three-dimensional image processing method provided by Embodiment 1 of the present invention;
FIG. 2 shows the initially displayed three-dimensional image provided by Embodiment 4 of the present invention;
FIG. 3 shows the three-dimensional image displayed during scanning, provided by Embodiment 4 of the present invention;
FIG. 4 shows the three-dimensional image displayed after scanning is completed, provided by Embodiment 4 of the present invention;
FIG. 5 is a schematic diagram of the environment architecture provided by Embodiment 5 of the present invention;
FIG. 6 is a schematic structural diagram of a three-dimensional image processing apparatus provided by Embodiment 6 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To solve the prior-art problems of poor user interaction in oral image presentation and the difficulty of supporting user self-service three-dimensional true-color oral scanning, the embodiments of the present invention establish a three-dimensional image framework database and a three-dimensional image contour; received image data is processed and reconstructed onto the user's contour, so that a three-dimensional oral image can be built up and the reconstructed image displayed dynamically.
The solution of the present invention is described in detail below through specific embodiments; of course, the present invention is not limited to the following embodiments.
It is worth noting that the embodiments of the present invention mainly address the reconstruction of three-dimensional oral images, in which case the endoscope may be an oral endoscope. However, the embodiments are not limited to oral images: three-dimensional image reconstruction in other fields is equally applicable. The oral cavity is used below only as an example.
Embodiment 1:
Referring to FIG. 1, in the embodiment of the present invention, the specific flow of the image processing method is as follows:
Step 100: receive image data of a user sent by the endoscope, wherein the image data includes at least image data captured by a camera unit in the endoscope, and the type of the image data is a depth image.
In practice, users often want to view images of their oral cavity; for example, when a tooth aches or is damaged, scanning the mouth with an oral endoscope yields images of the oral cavity. In the prior art, however, only local images are obtained after scanning: no overall three-dimensional image can be presented, the user cannot see an overall three-dimensional picture of their mouth, and cannot determine where in the mouth the damaged tooth or problem area is located. There is also an existing technique that can present a three-dimensional oral image: during reconstruction, a unique initial region is set and then used as the sole anchor point; the image sequence captured at the front end is continuously stitched onto it, gradually enlarging the region into a main body region, and scanning continues in the mouth until complete. With this method, however, if a captured image cannot be stitched to the initial or main body region, it is discarded; the user cannot freely scan whatever part they wish to see, and only ever sees a three-dimensional image formed by continuous stitching from the single initial region.
In the embodiments of the present invention, received image data is reconstructed directly into the user's three-dimensional image contour. This not only presents a three-dimensional image of the oral cavity but also supports stitching from multiple initial regions: the user may scan any part of the mouth at will, can see the specific position of each image within the mouth, and can clearly see the contour and three-dimensional image of their own oral cavity.
The endoscope, for example an oral endoscope, is provided with one or more camera units for capturing images, and the captured image data is of the depth-image type, i.e., RGBD images, which are three-dimensional true-color images. This makes it possible to obtain the three-dimensional information of the images, facilitating subsequent three-dimensional reconstruction.
Step 110: save the received image data, determine whether the saved image data can be stitched together, and when stitching is determined to be possible, stitch the saved image data to obtain stitched image data.
The stitched image data obtained here denotes all image data after stitching processing: it includes not only data that could be stitched and was successfully merged with other data into larger-area images, but also data that, after the check, could not be stitched and still exists in isolation.
In the embodiment of the present invention, image data is saved every time it is received, and it is then determined whether all currently saved image data can be stitched together; the stitchability check covers not only the data received this time but all currently saved image data.
This is because an intraoral endoscope is generally small, for comfort and convenience of use, and each of its camera units generally has a micro focal length. For a micro-focal-length camera unit, each captured image is in most cases a curved patch of very small area, for example 2 mm x 2 mm or 3 mm x 3 mm, which in most cases cannot fully cover a block and is only a local part of a block's surface. Therefore, stitching the image data first and then matching improves the efficiency and accuracy of matching.
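The stitch-before-match strategy can be sketched as repeated pairwise merging over all saved patches (Python; patches are reduced to sets of shared keypoint IDs purely for illustration — real stitching would align overlapping RGBD surfaces):

```python
def try_stitch(saved):
    """Greedily merge any two patches that share keypoints; patches that
    never overlap remain isolated, as in Step 110."""
    patches = [set(p) for p in saved]
    merged = True
    while merged:
        merged = False
        for i in range(len(patches)):
            for j in range(i + 1, len(patches)):
                if patches[i] & patches[j]:        # overlap -> stitchable
                    patches[i] |= patches.pop(j)
                    merged = True
                    break
            if merged:
                break
    return patches

# Three tiny patches: the first two overlap, the third is isolated.
saved = [{1, 2, 3}, {3, 4, 5}, {9, 10}]
print(try_stitch(saved))  # [{1, 2, 3, 4, 5}, {9, 10}]
```

Because all previously received patches stay saved, a patch that is isolated now may still be merged in a later round once a bridging patch arrives.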
Furthermore, every batch of received image data is saved, meaning that when checking stitchability the saved data includes not only the current batch but all previously received image data. If some of these images can be stitched, stitching before matching also reduces the number of images, reduces the number of image-to-block matching operations, saves time, and improves execution efficiency.
Step 120: determine, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data, determine the position of that block in the saved three-dimensional image contour of the user, and reconstruct the stitched image data at the corresponding determined position in the contour to obtain reconstructed three-dimensional image data, wherein the database stores image data of the blocks into which the three-dimensional image framework image is divided together with position information of the image of each block.
In the embodiment of the present invention, a three-dimensional image framework database and a three-dimensional image contour of the oral cavity are established in advance.
The image data of a block includes: number information and image feature information.
The position information of the image of a block includes: the spatial positional relationships between blocks.
The image feature information includes at least parameter information related to the shape, color, and texture of the image.
The user's three-dimensional image contour is obtained from the three-dimensional image framework database or from the user's three-dimensional image model; the image of each block in the contour is the three-dimensional curved-surface shape of that block's image in the database or in the user's model, with a preset single color and single texture.
That is, in the embodiments of the present invention, a three-dimensional image framework database of the oral cavity and a corresponding three-dimensional image contour are established, the oral cavity is divided into blocks, and the relevant information is recorded. This provides the technical basis and support for reconstructing a three-dimensional image of the oral cavity. The framework database and the contour are described in detail below.
Step 120 specifically includes:
First, determine, according to the saved three-dimensional image framework database, the block corresponding to the stitched image data and the position of that block in the saved three-dimensional image contour of the user.
The specific manner of performing this step is described in detail below.
Then, reconstruct the stitched image data at the corresponding determined position in the user's three-dimensional image contour to obtain the reconstructed three-dimensional image data.
Specifically:
1) Extract, from the stitched image data, the three-dimensional curved-surface image belonging to the corresponding block according to the boundary feature information of the blocks in the database, wherein the image feature information includes at least the boundary feature information of the blocks.
In this way, the boundary features of each block can be determined from its boundary feature information. For example, stitched image data P(a) may correspond to block 1 but cover more than block 1; in that case, the corresponding three-dimensional curved-surface image can be extracted from P(a) along the boundary according to block 1's boundary feature information.
As another example, consider a large-area block such as the labial mucosa block of the maxillary alveolar ridge: its upper boundary is the mucosal reflection line of the upper oral vestibular sulcus, adjoining the upper-lip mucosa block; its lower boundary adjoins the labial gingival surface of the upper dentition; its left boundary adjoins the left buccal mucosa block of the maxillary alveolar ridge; and its right boundary adjoins the right buccal mucosa block of the maxillary alveolar ridge.
If the stitched image data obtained at this point is only part of that block, extraction can still proceed according to the block's boundary feature information. For example, if the stitched data belongs to the block but contains only its upper-boundary feature information and part of the image between the upper and lower boundaries, then during extraction the image data outside the upper boundary can be removed according to the upper-boundary feature information, retaining the data belonging to the block's upper boundary and inward of it.
In that case, the three-dimensional curved-surface image of the block extracted from the stitched data at one time is only a local image of the block, and when displayed, the block's image shown is only the local curved-surface image extracted so far. Later, as scanning and stitching continue, the complete three-dimensional curved-surface image of the block is gradually displayed. 2) Replace the image at the corresponding determined position in the user's three-dimensional image contour with the extracted three-dimensional curved-surface image to obtain the reconstructed three-dimensional image data.
Replacing the image at the corresponding determined position in the user's contour with the extracted three-dimensional curved-surface image can be divided into the following cases:
Case 1, which specifically includes:
First, determine, according to the first or second mapping relationship, the database block to which the stitched image data corresponds.
Then, determine, according to the spatial positional relationships between blocks and/or their number information, whether each database block corresponding to the stitched image data exists in the user's three-dimensional image contour.
Finally, if it does, directly replace the image of the corresponding block in the user's contour with the extracted curved-surface image;
if it does not, determine, according to the spatial positional relationships between blocks, the position in the user's contour of the database block corresponding to the stitched data, add the corresponding block at that position in the contour, delete any other block already occupying that position, and replace the image of the added block with the extracted curved-surface image.
That is, when reconstructing the three-dimensional image and replacing the image at the corresponding determined position in the user's contour, the following operations may be performed on the contour: directly replacing a block's image, or adding or deleting a block and then replacing its image. For example, suppose image 1 corresponds to block a in the database, image 2 to block b, and image 3 to block c, with block b lying between and adjacent to blocks a and c. If in the user's contour blocks a and c are adjacent and block b is absent, then images 1 and 3 directly replace the images of blocks a and c in the contour, block b is added between blocks a and c, and image 2 replaces the image of the newly added block b.
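The three contour operations just described (direct replacement, insertion of a missing block between its neighbors, and deletion of a superseded block) can be sketched on a one-dimensional adjacency list (Python; a real contour is a 3D adjacency graph, so this is only a schematic):

```python
def replace_image(contour, block, image):
    """Directly replace a block's image when the block exists in the contour."""
    contour[block] = image

def insert_between(order, contour, left, new_block, image):
    """Add a block immediately after `left`, per the database adjacency."""
    i = order.index(left)
    if i + 1 >= len(order) or order[i + 1] != new_block:
        order.insert(i + 1, new_block)
    contour[new_block] = image

def delete_block(order, contour, block):
    """Remove a block that no longer appears in the scanned data."""
    order.remove(block)
    contour.pop(block, None)

order = ["a", "c", "d"]                       # user's contour lacks block b
contour = {k: "default" for k in order}
replace_image(contour, "a", "img1")
insert_between(order, contour, "a", "b", "img2")  # calculus-style block addition
replace_image(contour, "c", "img3")
delete_block(order, contour, "d")             # lost-tooth-style block removal
print(order)  # ['a', 'b', 'c']
```

After such edits, neighboring blocks would additionally be shifted outward or inward so that adjoining blocks remain adjoining, as the text notes further below.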
Specifically, the following scenarios further illustrate this.
1) Block removal.
For example, the user has lost the lower-left fourth tooth through trauma or dental disease. The loss manifests in the endoscopic image data as follows:
1) The interdental gap to the left of the distal proximal surface block of the lower-left third tooth is relatively large, and to its left lies the mesial proximal surface block of the lower-left fifth tooth.
2) The gingival papilla block adjoining, further to the left, the gingival sulcus on the left of the distal proximal surface of the lower-left third tooth has a relatively large area, extending into the mucosa block covering the crest of the lower alveolar ridge; the left side of this mucosa block adjoins the mesial proximal surface of the lower-left fifth tooth through the gingival sulcus.
Therefore, during image reconstruction, the blocks related to the lower-left fourth tooth are removed, including: the buccal block, the mesial proximal surface block, the distal proximal surface block, the occlusal surface block, the lingual block, and the gingival papilla block of the lower-left fourth tooth. This is the block removal operation in three-dimensional image reconstruction.
2) Block addition.
For example, the user has dental calculus on the lingual side of the lower-left second tooth. This manifests in the endoscopic image data as follows:
1) At least part of the lower boundary of the lingual block of the user's lower-left second tooth adjoins the upper boundary of a calculus block rather than the lower-left second lingual gingiva block.
2) The lower boundary of the calculus block adjoins the lower-left second lingual gingiva block.
Therefore, during image reconstruction, a calculus block (for example, block number 2412203) is added between the lingual block of the lower-left second tooth and the lower-left second lingual gingiva block. This is the block addition operation in three-dimensional image reconstruction.
3) For example, the user's three-dimensional image contour contains an ulcer block in the middle of an oral mucosa block, and the ulcer has since healed and disappeared. After reconstruction, the image of the oral mucosa block in the contour is replaced; since no image of an ulcer block is obtained, the ulcer block in the middle of the mucosa block is simply covered by the stitched image data corresponding to the mucosa block.
Thus, when updating the user's contour by extracting the contour from the updated three-dimensional image model, what is extracted for that oral mucosa block is the contour of the replaced, stitched image data of the block, with no ulcer-block contour remaining: in the user's three-dimensional image contour, the ulcer block has been covered over and deleted.
Case 2, which specifically includes:
First, determine, according to the first or second mapping relationship, the corresponding block in the latest three-dimensional image framework database for the stitched image data.
Then determine whether the stitched image data are adjacent; if so, further determine whether the corresponding database blocks are adjacent in position within the user's contour; if they are not adjacent there, delete the blocks lying between the blocks corresponding to the adjacent stitched image data in the user's contour.
For example, in a previous scan the user had calculus on the lingual side of the lower-left second tooth; later, after dental cleaning, the calculus was removed.
The removal manifests in the endoscopic image data as: 1) the lower boundary of the lingual block of the user's lower-left second tooth adjoins the upper boundary of the lower-left second lingual gingiva block;
2) there is no other block between the lower boundary of the lingual block of the lower-left second tooth and the upper boundary of the lower-left second lingual gingiva block.
Therefore, during image reconstruction, the calculus block (e.g., block number 2412203) between the lingual block of the lower-left second tooth and the lower-left second lingual gingiva block is deleted. This is the block deletion operation in endoscopic image three-dimensional reconstruction.
In this way, in the embodiments of the present invention, the three-dimensional image obtained after block addition or deletion better reflects the true state of the user's oral cavity. For example, suppose the user's contour contains four connected blocks, in sequence: block a, block b, block c, block d. If, according to the stitched image data, block b is determined to have been deleted and blocks a and d are connected through block c, then block a is joined to blocks c and d. On display, the user will see blocks a and d connected only through block c, while the position formerly occupied by block b becomes an empty part, displayed as transparent and containing no image.
Of course, the cases are not limited to the above; other cases are possible. For example, part of a block's image may be replaced. The embodiments of the present invention impose no restriction: in each case, based on the method of the embodiments, the extracted three-dimensional curved-surface image replaces the image at the corresponding determined position in the user's contour, achieving the effect of updating the user's three-dimensional image contour.
That is, in the embodiments of the present invention, image data sent by the endoscope is received, saved, and stitched; the stitched data is recognized, matched, and mapped onto blocks; the position of the stitched data on the user's contour is determined; and the curved-surface image belonging to the corresponding block within the stitched data is then substituted for the image at the corresponding determined position in the contour. In this way, regardless of whether the actual blocks in the user's mouth are exactly the same as the blocks in the contour, the contour can be updated to the user's actual oral image.
Further, after replacement, the other blocks adjoining the replaced block are moved outward or inward accordingly, ensuring that adjoining blocks remain adjoining after replacement.
For example, for the block of a certain tooth in the user's mouth, the user's tooth may be relatively large while the corresponding block in the contour has a relatively small area. In that case, the block's three-dimensional curved surface is extracted according to the block's boundary feature information and directly replaces the block's image in the contour; in the resulting user contour, the block is identical to the block of the user's actual tooth.
Step 130: update the currently saved three-dimensional image model of the user according to the reconstructed three-dimensional image data, wherein the initial value of the user's three-dimensional image model is the user's three-dimensional image contour.
Step 130 specifically includes:
replacing the image at the corresponding determined position in the currently saved three-dimensional image model of the user with the reconstructed three-dimensional image data.
In this way, every time reconstructed three-dimensional image data is obtained, the image at the corresponding position in the user's model can be replaced, achieving dynamic updating of the user's three-dimensional image model.
Further, the user's three-dimensional image contour can also be updated, specifically:
obtaining, from the updated three-dimensional image model of the user, the contour corresponding to the updated model, and updating the saved contour of the user accordingly.
In this way, each user has a corresponding oral three-dimensional image contour that matches that user; in subsequent scans, it is easier to see the user's actual oral image information, and continuously updating the user's model and contour also improves matching efficiency and accuracy.
Thus, not only the user's contour but also the user's model can be saved, and an oral endoscopic image database for that user can be built from the updated model. Different users' oral conditions can then be recorded separately, facilitating follow-up queries, for example tracking the user's oral health or treatment progress.
Step 140: display the updated three-dimensional image model of the user.
Further, when image data of the user sent by the endoscope is received again, return to Step 110 above.
Thus, in the embodiments of the present invention, what is first shown to the user is the three-dimensional image contour. As reconstruction proceeds, each completed reconstruction updates the currently saved user model, which is then displayed. Since the contour consists of preset single-color, single-texture images, for example gray, while the acquired image data are three-dimensional true-color images with actual colors and textures, the user sees the initially displayed model gradually replaced by three-dimensional true-color images, as if the model were being gradually lit up; when scanning is complete, the user sees a three-dimensional true-color image including color, texture, and other information.
Further, after Step 140 the method also includes:
receiving an operation instruction from the user and, according to the instruction, performing the corresponding operation on the displayed updated three-dimensional image model of the user.
For example, the user may zoom the model in or out, or rotate it, so as to view the model more clearly.
In the embodiments of the present invention, the three-dimensional image framework database and the three-dimensional image contour are preset. After image data of the user sent by the endoscope is received, it is stitched, the stitched data is reconstructed at the corresponding determined position in the user's contour, the currently saved user model is updated, and the updated model is displayed. During three-dimensional reconstruction, as soon as the block of the oral cavity corresponding to the three-dimensional true-color image captured at the front of a camera unit can be identified, the captured true-color curved-surface image of that block replaces the default curved-surface image at the block's position in the original contour and is displayed on the user's terminal. There is no need to fix a unique initial region before stitching, so the efficiency of three-dimensional reconstruction is significantly improved. Not only can a three-dimensional image of the user's mouth be obtained, allowing the user to inspect specific parts of the mouth, but the user can also scan any position in the mouth at will, without scanning continuously from a unique initial region. This is convenient, improves the user experience, and the scanned three-dimensional oral image is displayed dynamically, with better, more flexible presentation.
Moreover, since the embodiments of the present invention establish the framework database and contour, before the user starts scanning, the initially displayed three-dimensional image model is a contour. As the user scans the mouth and more oral image data is obtained, reconstruction and updating proceed by the method of the embodiments: each block of the three-dimensional image is gradually replaced by a three-dimensional true-color image, while parts not yet reconstructed and updated still show the default contour images. The user can therefore intuitively perceive which parts have not yet been reconstructed or scanned, and can cooperate by steering the endoscope to roam to the blocks still showing default images, so that the camera units capture more true-color images of those blocks. Eventually, the true-color images captured by the camera units cover the whole inner surface of the oral cavity, yielding a full-mouth digital endoscopic image, with no professional operator required, better supporting user self-service oral endoscopic scanning.
Embodiment 2:
The specific manner of performing, in Step 120 of Embodiment 1, the determination of the block corresponding to the stitched image data according to the saved framework database and of that block's position in the saved user contour is introduced below:
Specifically: determine, at least according to the image feature information of the blocks in the three-dimensional image framework database, the database block to which the stitched image data corresponds, and determine, from that block, the position in the user's three-dimensional image contour corresponding to the stitched image data.
Specifically, there are the following manners:
First manner:
1) According to a preset image pattern recognition algorithm, based on the image feature information of the blocks in the database, match the stitched image data against the block images in the database to obtain a first mapping relationship between the stitched image data and the blocks in the database.
In the embodiment of the present invention, a three-dimensional image framework database is established for the oral cavities of all users; the image of each block is a three-dimensional true-color curved surface, whose image feature information can be obtained for later matching.
Specifically, for example, exhaustive block matching may be used: the stitched image data may be matched against every block in the database according to each block's image feature information.
As another example, the database blocks may be divided by area; during matching, the area to which the stitched image data belongs is determined first, and matching then proceeds directly using the image feature information of the blocks in that area, avoiding matching against every block in the database.
2) Determine, according to the spatial positional relationships between blocks and/or their number information, the position in the user's contour of the database block corresponding to the stitched image data.
Thus, from the first mapping relationship, the database block corresponding to the image data can be determined; then the position of that block in the user's contour is determined, and hence the position of the stitched image data in the user's contour.
That is, in the embodiments of the present invention, images captured by the endoscope can be recognized and matched, so that the specific position in the oral cavity corresponding to each image can be determined, i.e., it can be judged which block or blocks of the oral cavity the captured three-dimensional true-color image corresponds to.
Second manner:
1) If the endoscope includes at least two camera units with preset fixed relative positions, determine the relative spatial positional relationships of the stitched image data according to the preset relative spatial position of each camera unit within the endoscope and the camera-unit identifiers carried in the image data.
In the embodiment of the present invention, multiple camera units may be provided in the endoscope, with their relative positions within the endoscope set in advance.
For example, there are six camera units: A, B, C, D, E, and F. Their fixed preset relative spatial relationships are: A and B are on opposite sides, with A on the same side as the stretch portion of the oral endoscope; C and D are on opposite sides; E and F are on opposite sides; the A-B axis is perpendicular (orthogonal) to the C-D axis; the A-B axis is perpendicular to the E-F axis; and the C-D axis is perpendicular to the E-F axis. Looking along the A-B axis from the B side toward the A side (i.e., the stretch-portion side), C is on the left, D on the right, E above, and F below.
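The six-camera layout above amounts to three orthogonal viewing axes. A small sketch (Python; the coordinate convention is an assumption) encodes each unit's viewing direction as a unit vector and checks the orthogonality relations stated in the text:

```python
# Viewing directions: A/B, C/D, E/F are opposite pairs on three orthogonal axes.
directions = {
    "A": (1, 0, 0), "B": (-1, 0, 0),   # along the stretch-portion axis
    "C": (0, 1, 0), "D": (0, -1, 0),   # left / right
    "E": (0, 0, 1), "F": (0, 0, -1),   # up / down
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Opposite pairs point in exactly opposite directions ...
assert dot(directions["A"], directions["B"]) == -1
# ... and the three axes are mutually orthogonal.
assert dot(directions["A"], directions["C"]) == 0
assert dot(directions["A"], directions["E"]) == 0
assert dot(directions["C"], directions["E"]) == 0

def relative_direction(unit_id):
    """Relative spatial direction of the patch captured by one camera unit."""
    return directions[unit_id]

print(relative_direction("E"))  # (0, 0, 1)
```

Attaching the camera-unit identifier to each captured patch then lets the terminal assign each patch the corresponding direction vector, which is the relative spatial relationship used during matching.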
内窥器对摄像单元采集到的图像数据，都增加相应的摄像单元的标识，进而就可以根据摄像单元的相对空间位置关系，确定拍摄到的图像数据之间的相对空间位置关系。
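依据摄像单元标识确定各帧图像相对空间位置关系的做法，可以示意如下（A至F的朝向向量按上文描述的对侧、正交关系设定，具体坐标系与取值仅为假设）：

```python
# 假设的六个摄像单元在内窥器中的固定相对朝向：
# A朝向拉伸部一侧、B与A互为对侧；C/D、E/F各互为对侧，三对两两正交
CAMERA_ORIENTATION = {
    "A": (0, 0, 1), "B": (0, 0, -1),
    "C": (-1, 0, 0), "D": (1, 0, 0),
    "E": (0, 1, 0), "F": (0, -1, 0),
}

def tag_relative_positions(frames):
    """frames为{摄像单元标识: 图像数据}；
    根据图像数据中携带的摄像单元标识，为每一帧附加其相对朝向。"""
    return [
        {"camera": cam, "orientation": CAMERA_ORIENTATION[cam], "image": img}
        for cam, img in frames.items()
    ]
```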
2)根据预设的图像模式识别算法,基于所述三维图像框架数据库中区块的图像特征信息、和拼接后的图像数据的相对空间位置关系,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系。
3)根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
第二种方式在第一种方式的基础上,针对内窥器中采用多个摄像单元以形成球状视野的布局方案,可以同时接收多个摄像单元同时采集到的图像数据,提高图像模式识别的准确性和效率,也可以减少识别操作,即确定映射关系所需的时间。
例如,通过摄像单元A实际获取了同时涵盖左下5第二前磨牙舌侧区块和左下6第一磨牙舌侧区块的图像信息P(A)之后,根据预设的图像模式识别算法和各个牙面区块的图像特征信息,获知该图像P(A)中涵盖相邻的前磨牙舌侧区块和磨牙舌侧区块,获得图像信息P(A)与区块之间的映射关系有四种可能,分别是:左下5第二前磨牙舌侧区块+左下6第一磨牙舌侧区块、或右下5第二前磨牙舌侧区块+右下6第一磨牙舌侧区块、或左上5第二前磨牙舌侧区块+左上6第一磨牙舌侧区块、或右上5第二前磨牙舌侧区块+右上6第一磨牙舌侧区块。
并且,在获取图像信息P(A)期间,还通过其他摄像单元B、C、D、E、F同步采集到的图像,获取到了图像信息P(B)、P(C)、P(D)、P(E)、P(F)。
基于上述摄像单元A、B、C、D、E、F之间的相对空间位置关系，若图像信息P(B)的上方为软腭黏膜，下方为舌面，图像信息P(E)为硬腭黏膜，图像信息P(F)为舌面，这就进一步印证了图像信息P(A)为舌侧区块的曲面图像，而口腔内窥器当前位于用户的固有口腔中。图像信息P(C)中有咬合间隙处的口腔黏膜，图像信息P(D)中有其他下牙的舌侧曲面图像。因此，就可以确定口腔内窥器当前位于用户固有口腔的左侧，而图像信息P(A)涵盖的是用户左下牙舌侧的曲面图像。综上，就可以判断，图像信息P(A)对应的是用户左下5第二前磨牙舌侧区块和左下6第一磨牙舌侧区块。
进一步地,第二种方式中,可以利用多个摄像单元之间的相对空间位置关系,这样,同时接收到多个摄像单元拍摄到的图像数据后,将这些图像数据进行保存和拼接,进而将拼接后的图像数据与三维图像框架数据库进行匹配识别,会分别对应相应的区块,并且,一般不是相接的区块,这就相当于可以同时构建多个初始区域,初始区域不是唯一的,例如为Q(0,m;m=1,2,3,4,5,6)。
这样,无需用户在口腔中接续扫描某一部位,可以随意扫描口腔中的任何部位,随着用户将内窥器在口腔内部漫游扫描,内窥器中每一个摄像单元就可以采集到更多的图像数据,进而,可以基于接收到的更多的图像数据,继续进行匹配并拼接,可以使得每一个初始区域Q(0,m;m=1,2,3,4,5,6)的图像面积逐渐扩大,多个初始区域可以并行进行图像拼接,由此可以形成多个衍生的主体区域Q(n,m;m=1,2,3,4,5,6),直到扫描完成或生成完整的口腔三维图像,可以极大地提高三维重构的效率。
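多个初始区域Q(0,m)并行扩大的过程，可以用如下极简示意（区块编号与邻接关系均为假设；为简化，未处理两个区域扩大后相互接壤需要合并的情况）：

```python
def grow_regions(regions, new_blocks, adjacency):
    """regions: 已有区域列表，每个区域为一组区块编号的集合；
    new_blocks: 新识别出的区块编号；adjacency: 区块邻接关系。
    新区块与某区域邻接（或已在其中）则并入该区域，
    否则作为一个新的初始区域，多个区域可并行扩大。"""
    for blk in new_blocks:
        for region in regions:
            if blk in region or any(blk in adjacency.get(b, ()) for b in region):
                region.add(blk)
                break
        else:
            regions.append({blk})   # 无邻接区域：构建新的初始区域
    return regions
```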
第三种方式:基于第一种方式和第二种方式,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配时,进一步包括:
当确定拼接后的图像数据至少对应两个区块时,则根据所述三维图像框架数据库中预设的区块相互之间的空间位置关系,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系。
也就是说,第三种方式,可以是在第一种方式和第二种方式的基础上,进一步依据区块之间的空间位置关系,提高识别的准确性和效率。
本发明实施例中，在建立的三维图像框架数据库中，记录了各个区块之间的空间位置关系，例如为每一个区块相互之间的邻接关系，这些邻接关系包括：前界、后界、左界、右界、上界、下界，等等。这使得当获取了足以涵盖一个区块全部表面的一幅图像信息、却仍难以通过图像模式识别确定该图像对应的区块时，可以继续执行拼接操作，获取更大范围的曲面图像信息，直至覆盖多个区块。这时，就可以根据多个区块相互之间的邻接关系，把多个区块的曲面图像作为一个整体来做模式识别，从而去除很多不确定性，提高识别匹配的准确性，缩短识别操作所需的时间。
例如，获取了涵盖左下5第二前磨牙的舌侧区块的一幅图像信息P(a)，由于左下4第一前磨牙舌侧区块、左下5第二前磨牙舌侧区块相似度较高，可能仅通过图像模式识别的图像特征匹配，难以确定最终的映射关系。
随着用户把口腔内窥器在自己口腔内部漫游，内窥器采集到更多图像信息并回传，可以继续执行拼接操作并使得P(a)图像面积逐渐扩大。当P(a)图像面积扩大为同时涵盖左下5第二前磨牙舌侧区块和左下6第一磨牙舌侧区块的图像信息P(b)之后，根据预设的图像模式识别算法和各个牙面区块的图像特征信息，可以获知该图像P(b)中涵盖相邻的左下前磨牙舌侧区块和左下磨牙舌侧区块。因磨牙与前磨牙的舌侧区块曲面形状有明显区别，而左下4第一前磨牙舌侧区块与左下6第一磨牙舌侧区块没有邻接关系，因此，就可以确定，图像信息P(b)涵盖的是左下5第二前磨牙舌侧区块和左下6第一磨牙舌侧区块，P(a)对应的是左下5第二前磨牙舌侧区块。
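上述借助邻接关系排除不可能组合的判断，可以用如下Python示意（候选与邻接数据取自本段示例，仅为说明，非实际数据库内容）：

```python
from itertools import product

def filter_by_adjacency(candidates, adjacency):
    """candidates: 拼接后图像所涵盖各部分的候选区块列表；
    adjacency: 三维图像框架数据库中预设的区块邻接关系。
    仅保留相邻部分在邻接关系中确实相接的候选组合。"""
    valid = []
    for combo in product(*candidates):
        # 组合中每相邻两个区块都必须在邻接关系中相接
        if all(b in adjacency.get(a, ()) for a, b in zip(combo, combo[1:])):
            valid.append(combo)
    return valid
```

按本段示例，左下4第一前磨牙舌侧区块与左下6第一磨牙舌侧区块不邻接，对应组合即被排除，只剩"左下5+左下6"这一种映射。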
值得说明的是,本发明实施例中,第一种方式可以针对内窥器中只有一个或多个摄像单元的情况,而第二种方式针对的是内窥器中有多个摄像单元的情况,第三种方式对于内窥器中只有一个或多个摄像单元的情况也都是适用的。
第一种方式,主要基于图像模式识别,确定拼接后的图像数据与三维图像框架数据库中区块的映射关系;第二种方式中可以进一步参考摄像单元的相对空间位置关系,提高确定拼接后的图像数据与区块映射关系的准确性和效率,可以更加准确地确定拍摄到的数据对应于口腔中的具体位置;第三种方式,在第一种方式和第二种方式的基础上,可以进一步参考区块之间的空间位置关系,也可以提高确定拼接后的图像数据与区块的映射关系的准确性。
进一步地,基于上述第一种方式、第二种方式和第三种方式,还包括:
若获得至少两组第一映射关系，则根据每组第一映射关系的置信度，从所述至少两组第一映射关系中，选择出第一预设数目的第一映射关系，并将所述选择出的第一预设数目的第一映射关系，用于下一次接收到内窥器发送的用户的图像数据时计算第一映射关系，以使针对下一次接收到的图像数据，分别获得基于所述选择出的第一预设数目的第一映射关系的各映射关系，直到获取到不大于第二预设数目的最大值个第一映射关系时，分别判断第二预设数目个第一映射关系中，每组第一映射关系的叠加置信度，若判断出所述第二预设数目个第一映射关系中任意一组映射关系的叠加置信度不小于预设阈值，则将所述任意一组第一映射关系，作为拼接后的图像数据与所述三维图像框架数据库中区块的第二映射关系。
上述整个过程,可以看作是一个搜索树构建的过程,例如,内窥器中设置有8个摄像单元,则每次可以接收到8个摄像单元拍摄到的图像数据,即8个RGBD图像,第一预设数目为3,第二预设数目为1000。
(1)假设第一次接收到图像数据后，得到n(n>1)组第一映射关系，则根据置信度，从这n组第一映射关系中，选择出置信度较大的前3组映射关系，例如，分别为n(11)、n(12)和n(13)，这时，共有3组。
(2)第二次接收到图像数据后,基于n(11)、n(12)和n(13)这三组映射关系,分别得到相应的n组第一映射关系,即n(11)下有1,2,3,...n组,n(12)下有1,2,3,...n组,n(13)下有1,2,3,...n组,分别从每组中再选择置信度较大的前3组,分别为n(111)、n(112)、n(113),n(121)、n(122)、n(123),以及n(131)、n(132)、n(133),这时,共有3^2=9组。
(3)第三次接收到图像数据后，基于n(111)、n(112)、n(113)，n(121)、n(122)、n(123)，以及n(131)、n(132)、n(133)这9组映射关系，分别得到相应的n组第一映射关系，即n(111)下有1,2,3,...n组，n(112)下有1,2,3,...n组，n(113)下有1,2,3,...n组，依次类推，然后，分别从每组中再选择置信度较大的前3组，分别为n(1111)、n(1112)、n(1113)，n(1121)、n(1122)、n(1123)，……，n(1331)、n(1332)、n(1333)，这时，共有3^3=27组。
(4)依次类推，直到第m次接收到图像数据后，可以得到3^m组映射关系，若3^m为不大于1000的最大值，则进行决策，从这3^m组中选择叠加置信度最大的一组，作为拼接后的图像数据与区块的第二映射关系，即最终的映射关系，之后就可以基于第二映射关系，将拼接后的图像数据重构于三维图像轮廓上对应的确定的位置上，获得重构后的三维图像数据，进而更新用户的三维图像模型，并进行展示。
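上述(1)-(4)的搜索树构建与决策过程，本质上是一种按置信度剪枝的束搜索(beam search)，可以用如下Python示意（叠加置信度此处假设为路径上各置信度的乘积，阈值与参数取值仅为示例）：

```python
def beam_search_mapping(batches, beam_width=3, max_hypotheses=1000, threshold=0.5):
    """batches: 依次接收到的候选映射批次，每批为[(映射, 置信度), ...]。
    每条已保留路径扩展后只保留置信度最高的beam_width条子路径；
    当路径总数达到不大于max_hypotheses的最大值时停止扩展并决策。
    叠加置信度此处假设为路径上各置信度的乘积。"""
    paths = [((), 1.0)]
    for cands in batches:
        if len(paths) * beam_width > max_hypotheses:
            break  # 路径数已达不大于max_hypotheses的最大值，进行决策
        new_paths = []
        for path, conf in paths:
            children = sorted(
                ((path + (m,), conf * c) for m, c in cands),
                key=lambda t: t[1], reverse=True,
            )[:beam_width]
            new_paths.extend(children)
        paths = new_paths
    best_path, best_conf = max(paths, key=lambda t: t[1])
    # 叠加置信度不小于阈值时，输出最优路径作为第二映射关系
    return (best_path, best_conf) if best_conf >= threshold else (None, best_conf)
```

当beam_width=3、max_hypotheses=1000时，保留的路径数依次为3、9、27……，最多保留729(=3^6)组，与正文示例一致。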
例如,步骤1:获取到了包括左下5第二前磨牙舌侧区块局部表面的一幅图像信息P(1)之后,根据预设的图像模式识别算法和各个区块的图像特征信息,假设获得P(1)的映射关系是牙齿表面相关区块,可以得到多组第一映射关系,而非牙龈表面或舌面或各类口腔黏膜表面等等相关区块。
步骤2:之后,随着用户把口腔内窥器在自己口腔内部漫游,采集到更多图像信息并回传,继续执行拼接操作并使得P(1)图像面积逐渐扩大。当P(1)面积扩大为涵盖左下5第二前磨牙整个舌侧区块的图像信息P(2)之后,根据预设的图像模式识别算法和各个牙面区块的图像特征信息,基于P(1)对应的映射关系,获得P(2)的映射关系是前磨牙舌侧区块,例如,左下4第一前磨牙或左下5第二前磨牙,或者是右下4第一前磨牙、右下5第二前磨牙、左上4第一前磨牙、左上5第二前磨牙、右上4第一前磨牙、右上5第二前磨牙等其他前磨牙。
步骤3:随着用户把口腔内窥器在自己口腔内部漫游,获取到更多的图像数据,当P(2)面积扩大为同时涵盖左下5第二前磨牙舌侧区块和左下6第一磨牙舌侧区块的图像信息P(3)之后,根据预设的图像模式识别算法和各个牙面区块的图像特征信息,可以得到该图像中涵盖相邻的左下前磨牙舌侧区块和左下磨牙舌侧区块。因磨牙与前磨牙的舌侧区块的曲面形状有明显区别,而左下4第一前磨牙舌侧区块与左下6第一磨牙舌侧区块没有邻接关系,因此,根据置信度,可以确定图像信息P(3)涵盖的是左下5第二前磨牙舌侧区块和左下6第一磨牙舌侧区块。
本发明实施例中,可以设置内窥器中摄像单元的拍摄间隔时间,例如,每秒20帧,这样,可以在很短时间内获取到多次拍摄到的图像数据,进而在执行上述过程时,时间也比较短,也不会影响后续的展示过程,用户不会有停顿的感知,不影响用户的使用体验。
也就是说,本发明实施例中,可能并不是每次接收到图像数据时,都可以得到最终的映射关系,但随着获取到更多的图像数据,可以依次进行确定,得到置信度最大的一组映射关系,进而再进行图像重构和更新展示,这样,可以进一步提高准确性。
实施例三:
基于上述实施例，下面对三维图像框架数据库和三维图像轮廓进行详细介绍。
1)三维图像框架数据库。
本发明实施例中,三维图像框架数据库,基于人类口腔的各种情况进行构建,该三维图像框架数据库存储了人类口腔的三维图像模型的通用框架数据,该框架数据涵盖了各种情况下人类口腔全部表面区域的图像特征信息,例如形状特征、色彩特征、纹理特征等信息。这些情况包括成年人的健康口腔正常场景、口腔脏污场景、口腔病理场景、口腔畸形场景、口腔外伤场景、乳牙向恒牙成长的替牙期场景,也包括儿童的健康口腔正常场景、口腔脏污场景、口腔病理场景、口腔畸形场景、口腔外伤场景、乳牙萌出期场景。随着本发明方法和装置的普及使用,本发明的人类全口腔内表面三维图像框架数据库还可以进行不断更新和扩展,例如,可以增加新的口腔病理场景下的图像特征信息,或增加新的口腔外伤场景下的图像特征信息。这样,可以进一步提高匹配的准确性。
其中,三维图像框架数据库中至少包括预划分的三维图像框架的每一个区块。本发明实施例中,在划分区块时,可以直接将三维图像框架划分成各个区块,当然,在划分区块时,也可以先划分出区域,然后在每一个区域中划分各个区块,这样划分效率更高,本发明实施例中,对此并不进行限制。
下面是以划分成区域和区块为例,进行说明的。本发明实施例中,根据人类口腔通用模型的实际形状和面积,确定各个区域和各个区块的空间布局,把全口腔内表面划分为一系列相互交接的区域,每一个区域划分为一系列相互交接的区块。
a、区域。
本发明实施例中,区域的划分,可以根据口腔中各部分的功能进行划分。并且,各个区域也至少有一个编号信息。
例如,可以将口腔内表面划分为14个区域,分别为:口腔前庭的前壁面区域、口腔前庭的后壁面区域、上口腔前庭沟区域、下口腔前庭沟区域、左咬合间隙区域、右咬合间隙区域、上牙列区域、下牙列区域、上颌牙槽嵴底面区域、下颌牙槽嵴顶面区域、固有口腔上壁面区域、固有口腔底壁面区域、舌体上表面区域、舌体下表面区域。
其中，每个区域对应一个编号信息，例如，分别依次为1、2、3、4、5、6、7、8、9、10、11、12、13、14。
具体地,对于口腔中区域的划分,本发明实施例中,并不进行限制,目的是将口腔划分为各个可以区分的区域,并且各个区域可以相接组成完整的口腔的结构。
b、区块。
本发明实施例中，在划分区块时，遵循的原则是：一个区块的内部尽量为单一纹理、尽量单一颜色，且尽量接近平面布局。
其中,区块的面积大小、具体形状,本发明实施例中,并不进行限制,可以综合考虑三维图像的精度要求和计算量要求予以确定。
其中,区块的图像数据包括:编号信息、图像特征信息。
区块的图像的位置信息包括:每个区块相互之间的空间位置关系。
本发明实施例中,对每一个区块与其他区块之间的表面相接关系和相对空间位置关系做了系统化的梳理和描述,每一个区块都有自己唯一的标签(例如,名称及其编号)和相关图像特征信息等,建立了完整的区块的标签体系,可以很快找到每个区块,并且,可以获知每个区块的图像特征信息。
其中,建立的区块的标签体系,每一个区块至少包括:编号信息、图像特征信息,以及每一个区块相互之间的空间位置关系,还可以包括名称信息、档案属性描述信息、三维曲面图样等。
例如，对于编号为2的口腔前庭的后壁面区域，可以划分为6个区块，分别为：区块(2.1)，上颌牙槽嵴唇侧黏膜区块。区块(2.2)，上颌牙槽嵴左颊侧黏膜区块。区块(2.3)，上颌牙槽嵴右颊侧黏膜区块。区块(2.4)，下颌牙槽嵴唇侧黏膜区块。区块(2.5)，下颌牙槽嵴左颊侧黏膜区块。区块(2.6)，下颌牙槽嵴右颊侧黏膜区块。
这样,可以根据编号信息,索引到某个区块,并且也可以知道该区块图像特征信息,也包括边界特征信息等。
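正文描述的区块标签体系（编号、名称、图像特征、邻接关系等），其数据结构可以用如下极简示意（字段取自正文，具体结构与取值为示例性假设）：

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """区块标签体系中一个区块的记录（字段为假设的简化）。"""
    number: str                                    # 编号信息，如 "2.1"
    name: str                                      # 名称信息
    features: dict = field(default_factory=dict)   # 图像特征信息（形状/色彩/纹理等）
    neighbors: dict = field(default_factory=dict)  # 邻接关系：前界/后界/左界/右界/上界/下界

# 以正文中编号为2的区域的前两个区块为例（邻接关系为示意）
blocks = {
    "2.1": Block("2.1", "上颌牙槽嵴唇侧黏膜区块",
                 neighbors={"左界": "2.2", "右界": "2.3"}),
    "2.2": Block("2.2", "上颌牙槽嵴左颊侧黏膜区块"),
}

def lookup(number):
    """根据编号信息索引区块，进而可获知其图像特征信息与边界信息。"""
    return blocks[number]
```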
同样地,本发明实施例中,对于区域中的区块的划分,也不进行限制,可以根据实际情况进行划分,保证各个区块可以组成完整的相应的区域。
也就是说,本发明实施例中,构建的三维图像框架数据库,不仅对口腔进行了区域和区块的划分,还建立了区块的标签体系,可以很准确地标识口腔中的各个位置,便于进行三维图像匹配和重构。并且,这使得本发明的图像处理装置在对接收到的图像数据进行处理的过程中,能获取从内窥器接收到的图像数据的语义信息,为采用人工智能技术开展口腔内窥影像检查创造条件。
2)三维图像轮廓。
本发明实施例中,三维图像轮廓存储了人类全口腔内表面各个区域(含各个区块)的三维图像的形状轮廓数据。
其中,用户的三维图像轮廓至少存储了所述用户的口腔中各个区块的三维图像的形状轮廓数据。
用户的三维图像轮廓,是基于所述三维图像框架数据库或所述用户的三维图像模型得到的;其中,所述三维图像轮廓中每一个区块的图像,为基于所述三维图像框架数据库或所述用户的三维图像模型中区块的图像的三维曲面形状,包括预设的单一颜色和单一纹理的图像。
也就是说,本发明实施例中,三维图像轮廓中涵盖了口腔内窥全景的各个区块的缺省图像。三维图像轮廓中的各个区块的缺省图像仅仅包括各个区块的三维曲面形状,且为单一颜色+单一纹理,也即不含各个区块的真彩颜色和实际纹理。例如,设置三维图像轮廓中各个区块缺省图像的三维曲面的外表面仅为光滑纹理的深灰色。
并且,本发明实施例中,在实际使用时,可以根据每个用户的实际情况,更新该用户的三维图像轮廓,随着不同用户的使用,针对每个用户,就会有属于自己口腔的三维图像轮廓。
例如,用户首次使用时,一开始展示的是根据用户年龄信息预设的标准三维图像轮廓。随着用户使用口腔内窥器扫描口腔,不断采集口腔中的图像,基于上述本发明实施例中的图像处理方法,进行拼接、识别、重构等操作,只要完成了一个区块的处理,就会用采集到的该区块的三维真彩曲面图像替换原轮廓中该区块的缺省曲面图像,并在用户终端上进行展示。
随着三维图像轮廓中的一个又一个区块的缺省曲面图像不断被采集到的三维真彩曲面图像所替换,不断地更新用户的三维图像模型,用户终端上展示的口腔内窥三维真彩曲面图像越来越完整,而原三维图像轮廓遗留的缺省曲面图像越来越少。用户完成全口腔内窥采集后,用户终端上展示的口腔内窥全景图像,就全部由三维真彩曲面图像拼接组成。
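缺省曲面图像被三维真彩曲面逐块替换、直至覆盖完整口腔内表面的过程，可以用如下示意跟踪重构进度（区块编号与数据结构均为假设）：

```python
def update_model(model, block_id, true_color_surface):
    """用采集到的三维真彩曲面替换模型中该区块的缺省曲面，
    并返回当前的重构进度（已替换区块占全部区块的比例）。"""
    model[block_id] = {"surface": true_color_surface, "default": False}
    done = sum(1 for b in model.values() if not b["default"])
    return done / len(model)

# 初始模型：全部为三维图像轮廓中的缺省曲面（单一颜色+单一纹理）
model = {
    bid: {"surface": "default-grey", "default": True}
    for bid in ("2.1", "2.2", "2.3", "2.4")
}
```

进度达到1.0时，展示的口腔内窥全景图像即全部由三维真彩曲面图像拼接组成。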
进而，可以根据最后展示的三维真彩曲面图像，即更新后的用户的三维图像模型，从中提取出三维图像轮廓，更新之前预设的标准三维图像轮廓。当该用户下一次再使用口腔内窥器时，一开始展示的就是该用户上一次更新后的三维图像轮廓。也就是说，所有不同用户第一次使用时，展示的都是根据用户年龄信息预设的标准三维图像轮廓；随着不同用户的使用，三维图像轮廓不断更新，是跟用户相关的，可以表示该用户实际的口腔三维图像轮廓。针对不同的用户，之后三维图像轮廓不断更新，得到的三维图像轮廓是不同的。
实施例四:
下面采用一个具体的应用场景对上述实施例作出进一步详细说明。具体参阅图2所示，其为本发明实施例中图像处理方法的实现示意图。
例如,内窥器为口腔内窥器,口腔内窥器将采集到的口腔图像上传到智能终端,智能终端执行本发明实施例中的图像处理方法,并在智能终端中动态展示口腔的三维图像。
其中，智能终端例如为手机、电脑等，本发明实施例中并不进行限制；也可以由云端执行匹配重构等操作，并在手机侧展示口腔的三维图像。
首先,当用户使用口腔内窥器扫描口腔,并确定口腔内窥器与智能终端通信连接时,参阅图2所示,在智能终端中初始展示用户的三维图像模型为预设的三维图像轮廓。此时,均是灰色的单一颜色、单一纹理的图像。
然后,随着用户在口腔中的扫描,口腔内窥器中摄像单元会拍摄口腔中的三维真彩图像,并上传给智能终端,智能终端基于上述本发明实施例中的图像处理方法,将接收到三维真彩图像重构于用户的三维图像轮廓中对应的确定的位置上,并更新当前保存的用户的三维图像模型,进而展示更新后的用户的三维图像模型,例如,参阅图3所示,为在扫描过程中,重构了部分三维图像时,展示的更新后的用户的三维图像模型。从图3中可以看到,上颚面保留原始框架,不点亮;舌背面保留原始框架,不点亮;上牙列从右边数第一、第二颗牙的颚侧面区块点亮,表示已完成扫描;上牙列其他各个牙齿保留原始框架,不点亮;下牙列从左边数第三、第四颗牙的舌侧面区块点亮,表示已完成扫描;下牙列其它各个牙齿保留原始框架,不点亮。其中,值得说明的是,已经点亮的部分实际中是三维真彩图像,此处仅是为了与未点亮的部分进行区分,采用更深的灰色来进行说明。
此时,已经重构完成的部分,展示的用户真实的口腔的三维真彩图像,包括实际的颜色和纹理等信息,未被重构的部分,仍然展示的是三维图像轮廓中预设的灰色的单一颜色单一纹理的图像。
最后，当扫描完成，或三维图像轮廓中所有区块都被替换完成后，用户停止扫描。例如，参阅图4所示，扫描完成后，当前展示的是所有区块都重构完成后的三维图像模型，此时，展示的就是一个包括颜色、纹理等信息的三维真彩图像（图4中同样仅是为了与未点亮的部分进行区分，采用更深的灰色来进行说明）。并且，该三维真彩图像和用户口腔的图像一致，可以真实反映用户口腔的图像和状况。
实施例五:
基于上述实施例，参阅图5所示，其为本发明实施例五中一种应用场景的环境架构示意图。
可以开发一个应用软件,用于实现本发明实施例中的图像处理方法,并且,该应用软件可以安装在用户终端,用户终端分别与内窥器和网络子系统连接,实现通信。
其中，用户终端可以为手机、电脑、iPad等任何智能设备，本发明实施例五中仅以手机为例进行说明。
例如，用户使用内窥器扫描口腔，采集口腔中图像，内窥器将采集到的图像发送给用户终端，用户终端通过网络子系统，从服务器获取三维图像框架数据库和三维图像轮廓，进而对接收到的图像数据进行处理，进行保存和拼接，确定拼接后的图像数据对应于用户的三维图像轮廓中的位置，并将拼接后的图像数据重构于用户的三维图像轮廓中对应的确定的位置上，获得重构后的三维图像数据，进而更新当前保存的用户的三维图像模型，进行展示，完成口腔内窥扫描，获得用户口腔的三维图像。
实施例六:
基于上述实施例，参阅图6所示，本发明实施例中的图像处理装置具体包括：
接收单元60,用于接收内窥器发送的用户的图像数据;其中,所述图像数据至少包括所述内窥器中摄像单元拍摄到的图像数据,并且所述图像数据的类型为深度图像;
处理单元61,用于保存接收到的图像数据,并分别判断保存的图像数据相互之间是否能够拼接,当确定能够拼接时,将保存的图像数据进行拼接,获得拼接后的图像数据;并根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,并将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上,获得重构后的三维图像数据,其中,所述三维图像框架数据库存储有将三维图像框架图像划分出的区块的图像数据以及每个区块的图像的位置信息,以及,根据所述重构后的三维图像数据,更新当前保存的所述用户的三维图像模型;其中,所述用户的三维图像模型的初始值为所述用户的三维图像轮廓;
展示单元62,用于展示更新后的所述用户的三维图像模型。
较佳的,所述区块的图像数据包括:编号信息、图像特征信息;
所述区块的图像的位置信息包括:每个区块相互之间的空间位置关系;
所述三维图像轮廓中每一个区块的图像,为基于所述三维图像框架数据库或所述用户的三维图像模型中区块的图像的三维曲面形状,包括预设的单一颜色和单一纹理的图像。
较佳的,根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,处理单元61具体用于:
根据预设的图像模式识别算法,基于所述三维图像框架数据库中区块的图像特征信息,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系;
根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
较佳的,根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,处理单元61具体用于:
若所述内窥器中包括至少两个预设的相对位置固定的摄像单元,则根据预设的内窥器中每一个摄像单元在内窥器中的相对空间位置关系,以及所述图像数据中携带的摄像单元的标识,分别确定所述拼接后的图像数据的相对空间位置关系;
根据预设的图像模式识别算法，基于所述三维图像框架数据库中区块的图像特征信息、和拼接后的图像数据的相对空间位置关系，分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配，获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系；
根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
较佳的,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配时,处理单元61进一步用于:
当确定拼接后的图像数据至少对应两个区块时,则根据所述三维图像框架数据库中预设的区块相互之间的空间位置关系,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系。
较佳的,处理单元61进一步用于:
若获得至少两组第一映射关系，则根据每组第一映射关系的置信度，从所述至少两组第一映射关系中，选择出第一预设数目的第一映射关系，并将所述选择出的第一预设数目的第一映射关系，用于下一次接收到内窥器发送的用户的图像数据时计算第一映射关系，以使针对下一次接收到的图像数据，分别获得基于所述选择出的第一预设数目的第一映射关系的各映射关系，直到获取到不大于第二预设数目的最大值个第一映射关系时，分别判断第二预设数目个第一映射关系中，每组第一映射关系的叠加置信度，若判断出所述第二预设数目个第一映射关系中任意一组映射关系的叠加置信度不小于预设阈值，则将所述任意一组第一映射关系，作为拼接后的图像数据与所述三维图像框架数据库中区块的第二映射关系。
较佳的,将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上,获得重构后的三维图像数据,处理单元61具体用于:
根据所述三维图像框架数据库中区块的边界特征信息,从所述拼接后的图像数据中提取出属于对应的区块的三维曲面图像;其中,所述图像特征信息中至少包括区块的边界特征信息;
将提取出的三维曲面图像替换所述用户的三维图像轮廓中对应的确定的位置上的图像,获得重构后的三维图像数据。
较佳的,根据所述重构后的三维图像数据,更新当前保存的所述用户的三维图像模型,处理单元61具体用于:
将所述重构后的三维图像数据,替换掉当前保存的所述用户的三维图像模型中对应的确定的位置上的图像;
处理单元61进一步用于:
根据更新后的所述用户的三维图像模型，获取更新后的所述用户的三维图像模型对应的三维图像轮廓，并根据所述更新后的所述用户的三维图像模型对应的三维图像轮廓，更新保存的所述用户的三维图像轮廓。
较佳的，接收单元60进一步用于：再次接收内窥器发送的用户的图像数据；所述处理单元61进一步用于：返回执行所述保存接收到的图像数据，并分别判断保存的图像数据相互之间是否能够拼接，当确定能够拼接时，将保存的图像数据进行拼接，获得拼接后的图像数据；并根据保存的三维图像框架数据库和保存的所述用户的三维图像轮廓，确定所述拼接后的图像数据对应于所述用户的三维图像轮廓中的位置，并将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上，获得重构后的三维图像数据，以及，根据所述重构后的三维图像数据，更新当前保存的所述用户的三维图像模型。
较佳的,展示更新后的所述用户的三维图像模型之后,进一步包括:
操作单元63,用于接收用户的操作指令,并根据所述操作指令,对展示的更新后的所述用户的三维图像模型执行相应的操作。
值得说明的是,本发明实施例中,上述接收单元60、处理单元61、展示单元62和操作单元63,都可以集成在一个用户终端中,例如都集成在手机中,当然,也可以分开,例如,对于带有手柄的内窥器,可以将接收单元60和处理单元61集成在内窥器的手柄中,展示单元62和操作单元63集成在手机中;又或者,将接收单元60,以及处理单元61的部分功能集成在内窥器的手柄中,处理单元61的其它功能、展示单元62和操作单元63集成在手机中,在实际实现时,都是可以的,本发明实施例中并不进行限制。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本发明的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本发明范围的所有变更和修改。
显然,本领域的技术人员可以对本发明实施例进行各种改动和变型而不脱离本发明实施例的精神和范围。这样,倘若本发明实施例的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (20)

  1. 一种图像处理方法,其特征在于,包括:
    步骤A:接收内窥器发送的用户的图像数据;其中,所述图像数据至少包括所述内窥器中摄像单元拍摄到的图像数据;
    步骤B:保存接收到的图像数据,并分别判断保存的图像数据相互之间是否能够拼接,当确定能够拼接时,将保存的图像数据进行拼接,获得拼接后的图像数据;
    步骤C:根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,并将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上,获得重构后的三维图像数据,其中,所述三维图像框架数据库存储有将三维图像框架图像划分出的区块的图像数据以及每个区块的图像的位置信息;
    步骤D:根据所述重构后的三维图像数据,更新当前保存的所述用户的三维图像模型;其中,所述用户的三维图像模型的初始值为所述用户的三维图像轮廓;
    步骤E:展示更新后的所述用户的三维图像模型。
  2. 如权利要求1所述的方法,其特征在于,所述区块的图像数据包括:编号信息、图像特征信息;
    所述区块的图像的位置信息包括:每个区块相互之间的空间位置关系;
    所述三维图像轮廓中每一个区块的图像,为基于所述三维图像框架数据库或所述用户的三维图像模型中区块的图像的三维曲面形状,包括预设的单一颜色和单一纹理的图像。
  3. 如权利要求2所述的方法,其特征在于,根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,具体包括:
    根据预设的图像模式识别算法,基于所述三维图像框架数据库中区块的图像特征信息,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系;
    根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
  4. 如权利要求2所述的方法,其特征在于,根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,具体包括:
    若所述内窥器中包括至少两个预设的相对位置固定的摄像单元，则根据预设的内窥器中每一个摄像单元在内窥器中的相对空间位置关系，以及所述图像数据中携带的摄像单元的标识，分别确定所述拼接后的图像数据的相对空间位置关系；
    根据预设的图像模式识别算法,基于所述三维图像框架数据库中区块的图像特征信息、和拼接后的图像数据的相对空间位置关系,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系;
    根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
  5. 如权利要求3或4所述的方法,其特征在于,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配时,进一步包括:
    当确定拼接后的图像数据至少对应两个区块时,则根据所述三维图像框架数据库中预设的区块相互之间的空间位置关系,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系。
  6. 如权利要求5所述的方法,其特征在于,进一步包括:
    若获得至少两组第一映射关系，则根据每组第一映射关系的置信度，从所述至少两组第一映射关系中，选择出第一预设数目的第一映射关系，并将所述选择出的第一预设数目的第一映射关系，用于下一次接收到内窥器发送的用户的图像数据时计算第一映射关系，以使针对下一次接收到的图像数据，分别获得基于所述选择出的第一预设数目的第一映射关系的各映射关系，直到获取到不大于第二预设数目的最大值个第一映射关系时，分别判断第二预设数目个第一映射关系中，每组第一映射关系的叠加置信度，若判断出所述第二预设数目个第一映射关系中任意一组映射关系的叠加置信度不小于预设阈值，则将所述任意一组第一映射关系，作为拼接后的图像数据与所述三维图像框架数据库中区块的第二映射关系。
  7. 如权利要求3-6任一项所述的方法,其特征在于,将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上,获得重构后的三维图像数据,具体包括:
    根据所述三维图像框架数据库中区块的边界特征信息,从所述拼接后的图像数据中提取出属于对应的区块的三维曲面图像;其中,所述图像特征信息中至少包括区块的边界特征信息;
    将提取出的三维曲面图像替换所述用户的三维图像轮廓中对应的确定的位置上的图像,获得重构后的三维图像数据。
  8. 如权利要求1-7任一项所述的方法,其特征在于,根据所述重构后的三维图像数据,更新当前保存的所述用户的三维图像模型,具体包括:
    将所述重构后的三维图像数据,替换掉当前保存的所述用户的三维图像模型中对应的确定的位置上的图像;
    进一步包括:
    根据更新后的所述用户的三维图像模型,获取更新后的所述用户的三维图像模型对应的三维图像轮廓,并根据所述更新后的所述用户的三维图像模型对应的三维图像轮廓,更新保存的所述用户的三维图像轮廓。
  9. 如权利要求1-8任一项所述的方法,其特征在于,进一步包括:
    当再次接收到内窥器发送的用户的图像数据,返回执行所述步骤B。
  10. 如权利要求1所述的方法,其特征在于,展示更新后的所述用户的三维图像模型之后,进一步包括:
    接收用户的操作指令,并根据所述操作指令,对展示的更新后的所述用户的三维图像模型执行相应的操作。
  11. 一种图像处理装置,其特征在于,包括:
    接收单元,用于接收内窥器发送的用户的图像数据;其中,所述图像数据至少包括所述内窥器中摄像单元拍摄到的图像数据;
    处理单元,用于保存接收到的图像数据,并分别判断保存的图像数据相互之间是否能够拼接,当确定能够拼接时,将保存的图像数据进行拼接,获得拼接后的图像数据;并根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,并将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上,获得重构后的三维图像数据,其中,所述三维图像框架数据库存储有将三维图像框架图像划分出的区块的图像数据以及每个区块的图像的位置信息,以及,根据所述重构后的三维图像数据,更新当前保存的所述用户的三维图像模型;其中,所述用户的三维图像模型的初始值为所述用户的三维图像轮廓;
    展示单元,用于展示更新后的所述用户的三维图像模型。
  12. 如权利要求11所述的装置,其特征在于,所述区块的图像数据包括:编号信息、图像特征信息;
    所述区块的图像的位置信息包括:每个区块相互之间的空间位置关系;
    所述三维图像轮廓中每一个区块的图像,为基于所述三维图像框架数据库或所述用户的三维图像模型中区块的图像的三维曲面形状,包括预设的单一颜色和单一纹理的图像。
  13. 如权利要求12所述的装置,其特征在于,根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,处理单元具体用于:
    根据预设的图像模式识别算法,基于所述三维图像框架数据库中区块的图像特征信息,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系;
    根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
  14. 如权利要求12所述的装置,其特征在于,根据保存的三维图像框架数据库,确定所述拼接后的图像数据对应的区块,并确定该区块在保存的所述用户的三维图像轮廓中的位置,处理单元具体用于:
    若所述内窥器中包括至少两个预设的相对位置固定的摄像单元,则根据预设的内窥器中每一个摄像单元在内窥器中的相对空间位置关系,以及所述图像数据中携带的摄像单元的标识,分别确定所述拼接后的图像数据的相对空间位置关系;
    根据预设的图像模式识别算法,基于所述三维图像框架数据库中区块的图像特征信息、和拼接后的图像数据的相对空间位置关系,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系;
    根据区块相互之间的空间位置关系和/或编号信息,确定拼接后的图像数据对应的所述三维图像框架数据库中区块,在所述用户的三维图像轮廓中的位置。
  15. 如权利要求13或14所述的装置,其特征在于,分别将所述拼接后的图像数据与所述三维图像框架数据库中区块的图像进行匹配时,处理单元进一步用于:
    当确定拼接后的图像数据至少对应两个区块时,则根据所述三维图像框架数据库中预设的区块相互之间的空间位置关系,获得拼接后的图像数据与所述三维图像框架数据库中区块的第一映射关系。
  16. 如权利要求15所述的装置,其特征在于,处理单元进一步用于:
    若获得至少两组第一映射关系，则根据每组第一映射关系的置信度，从所述至少两组第一映射关系中，选择出第一预设数目的第一映射关系，并将所述选择出的第一预设数目的第一映射关系，用于下一次接收到内窥器发送的用户的图像数据时计算第一映射关系，以使针对下一次接收到的图像数据，分别获得基于所述选择出的第一预设数目的第一映射关系的各映射关系，直到获取到不大于第二预设数目的最大值个第一映射关系时，分别判断第二预设数目个第一映射关系中，每组第一映射关系的叠加置信度，若判断出所述第二预设数目个第一映射关系中任意一组映射关系的叠加置信度不小于预设阈值，则将所述任意一组第一映射关系，作为拼接后的图像数据与所述三维图像框架数据库中区块的第二映射关系。
  17. 如权利要求13-16任一项所述的装置,其特征在于,将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上,获得重构后的三维图像数据,处理单元具体用于:
    根据所述三维图像框架数据库中区块的边界特征信息，从所述拼接后的图像数据中提取出属于对应的区块的三维曲面图像；其中，所述图像特征信息中至少包括区块的边界特征信息；
    将提取出的三维曲面图像替换所述用户的三维图像轮廓中对应的确定的位置上的图像,获得重构后的三维图像数据。
  18. 如权利要求11-17任一项所述的装置,其特征在于,根据所述重构后的三维图像数据,更新当前保存的所述用户的三维图像模型,处理单元具体用于:
    将所述重构后的三维图像数据,替换掉当前保存的所述用户的三维图像模型中对应的确定的位置上的图像;
    处理单元进一步用于:
    根据更新后的所述用户的三维图像模型,获取更新后的所述用户的三维图像模型对应的三维图像轮廓,并根据所述更新后的所述用户的三维图像模型对应的三维图像轮廓,更新保存的所述用户的三维图像轮廓。
  19. 如权利要求11-18任一项所述的装置，其特征在于，接收单元进一步用于：再次接收内窥器发送的用户的图像数据；所述处理单元进一步用于：返回执行所述保存接收到的图像数据，并分别判断保存的图像数据相互之间是否能够拼接，当确定能够拼接时，将保存的图像数据进行拼接，获得拼接后的图像数据；并根据保存的三维图像框架数据库和保存的所述用户的三维图像轮廓，确定所述拼接后的图像数据对应于所述用户的三维图像轮廓中的位置，并将所述拼接后的图像数据重构于所述用户的三维图像轮廓中对应的确定的位置上，获得重构后的三维图像数据，以及，根据所述重构后的三维图像数据，更新当前保存的所述用户的三维图像模型。
  20. 如权利要求11所述的装置,其特征在于,展示更新后的所述用户的三维图像模型之后,进一步包括:
    操作单元,用于接收用户的操作指令,并根据所述操作指令,对展示的更新后的所述用户的三维图像模型执行相应的操作。
PCT/CN2018/098388 2017-08-25 2018-08-02 一种图像处理方法及装置 WO2019037582A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/641,066 US10937238B2 (en) 2017-08-25 2018-08-02 Image processing method and device
EP18849208.6A EP3675037A4 (en) 2017-08-25 2018-08-02 IMAGE PROCESSING METHOD AND DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710744863.3 2017-08-25
CN201710744863.3A CN107644454B (zh) 2017-08-25 2017-08-25 一种图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2019037582A1 true WO2019037582A1 (zh) 2019-02-28

Family

ID=61110707

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098388 WO2019037582A1 (zh) 2017-08-25 2018-08-02 一种图像处理方法及装置

Country Status (5)

Country Link
US (1) US10937238B2 (zh)
EP (1) EP3675037A4 (zh)
CN (1) CN107644454B (zh)
TW (1) TWI691933B (zh)
WO (1) WO2019037582A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644454B (zh) 2017-08-25 2020-02-18 北京奇禹科技有限公司 一种图像处理方法及装置
CN107909609B (zh) * 2017-11-01 2019-09-20 欧阳聪星 一种图像处理方法及装置
WO2020044523A1 (ja) * 2018-08-30 2020-03-05 オリンパス株式会社 記録装置、画像観察装置、観察システム、観察システムの制御方法、及び観察システムの作動プログラム
CN109410318B (zh) * 2018-09-30 2020-09-08 先临三维科技股份有限公司 三维模型生成方法、装置、设备和存储介质
CN109166625B (zh) * 2018-10-10 2022-06-07 欧阳聪星 一种牙齿虚拟编辑方法及系统
CN111125122A (zh) * 2018-10-31 2020-05-08 北京国双科技有限公司 数据的更新方法及装置
CN113240799B (zh) * 2021-05-31 2022-12-23 上海速诚义齿有限公司 一种基于医疗大数据的牙齿三维模型构建系统
CN114445388A (zh) * 2022-01-28 2022-05-06 北京奇禹科技有限公司 一种基于多光谱的图像识别方法、装置及存储介质
CN114407024B (zh) * 2022-03-15 2024-04-26 上海擎朗智能科技有限公司 一种位置引领方法、装置、机器人及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
US20080273773A1 (en) * 2004-01-27 2008-11-06 Maurice Moshe Ernst Three-dimensional modeling of the oral cavity
WO2014100950A1 (en) * 2012-12-24 2014-07-03 Carestream Health, Inc. Three-dimensional imaging system and handheld scanning device for three-dimensional imaging
CN106663327A (zh) * 2014-08-27 2017-05-10 卡尔斯特里姆保健公司 3‑d表面的自动重新拼接
CN107644454A (zh) * 2017-08-25 2018-01-30 欧阳聪星 一种图像处理方法及装置

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
TW576729B (en) * 2003-06-12 2004-02-21 Univ Nat Taipei Technology Apparatus and technique for automatic 3-D dental data required for crown reconstruction
US7967742B2 (en) * 2005-02-14 2011-06-28 Karl Storz Imaging, Inc. Method for using variable direction of view endoscopy in conjunction with image guided surgical systems
DE102007060263A1 (de) * 2007-08-16 2009-02-26 Steinbichler Optotechnik Gmbh Vorrichtung zur Ermittlung der 3D-Koordinaten eines Objekts, insbesondere eines Zahns
US8471895B2 (en) * 2008-11-25 2013-06-25 Paul S. Banks Systems and methods of high resolution three-dimensional imaging
TWI524873B (zh) * 2011-01-11 2016-03-11 Advance Co Ltd Intraocular photography display system
US9191648B2 (en) * 2011-02-22 2015-11-17 3M Innovative Properties Company Hybrid stitching
KR101315032B1 (ko) * 2012-11-08 2013-10-08 주식회사 메가젠임플란트 임플란트 영상 생성방법 및 임플란트 영상 생성 시스템
US9510757B2 (en) * 2014-05-07 2016-12-06 Align Technology, Inc. Identification of areas of interest during intraoral scans
US10453269B2 (en) * 2014-12-08 2019-10-22 Align Technology, Inc. Intraoral scanning using ultrasound and optical scan data
DE102015212806A1 (de) * 2015-07-08 2017-01-12 Sirona Dental Systems Gmbh System und Verfahren zum Scannen von anatomischen Strukturen und zum Darstellen eines Scanergebnisses
CN105654548B (zh) * 2015-12-24 2018-10-16 华中科技大学 一种基于大规模无序图像的多起点增量式三维重建方法
US9460557B1 (en) * 2016-03-07 2016-10-04 Bao Tran Systems and methods for footwear fitting
US10410365B2 (en) * 2016-06-02 2019-09-10 Verily Life Sciences Llc System and method for 3D scene reconstruction with dual complementary pattern illumination

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20080273773A1 (en) * 2004-01-27 2008-11-06 Maurice Moshe Ernst Three-dimensional modeling of the oral cavity
WO2014100950A1 (en) * 2012-12-24 2014-07-03 Carestream Health, Inc. Three-dimensional imaging system and handheld scanning device for three-dimensional imaging
CN106663327A (zh) * 2014-08-27 2017-05-10 卡尔斯特里姆保健公司 3‑d表面的自动重新拼接
CN107644454A (zh) * 2017-08-25 2018-01-30 欧阳聪星 一种图像处理方法及装置

Non-Patent Citations (1)

Title
See also references of EP3675037A4 *

Also Published As

Publication number Publication date
TW201913576A (zh) 2019-04-01
TWI691933B (zh) 2020-04-21
EP3675037A4 (en) 2021-06-02
EP3675037A1 (en) 2020-07-01
CN107644454B (zh) 2020-02-18
CN107644454A (zh) 2018-01-30
US20200184723A1 (en) 2020-06-11
US10937238B2 (en) 2021-03-02

Similar Documents

Publication Publication Date Title
TWI691933B (zh) 一種圖像處理方法及裝置
AU2019284043B2 (en) Identification of areas of interest during intraoral scans
US11317999B2 (en) Augmented reality enhancements for dental practitioners
US20210073998A1 (en) Apparatuses and methods for three-dimensional dental segmentation using dental image data
US11517272B2 (en) Simulated orthodontic treatment via augmented visualization in real-time
ES2724115T3 (es) Interfaz gráfica de usuario para el marcado de márgenes asistido por ordenador sobre dentaduras
CN111784754B (zh) 基于计算机视觉的牙齿正畸方法、装置、设备及存储介质
TWI728374B (zh) 牙齒虛擬編輯方法、系統、電腦設備和儲存媒體
TWI712992B (zh) 一種圖像處理方法及裝置
KR20210099835A (ko) 파노라믹 영상 생성 방법 및 이를 위한 영상 처리장치
EP4328861A2 (en) Jaw movements data generation apparatus, data generation method, and data generation program
AU2022281999A1 (en) Method for acquiring a model of a dental arch
CN114664454A (zh) 一种口腔正畸过程中颌面部软组织三维模型的模拟方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18849208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018849208

Country of ref document: EP

Effective date: 20200325