CN117292372A - Identification method and system suitable for 3D printing product, electronic equipment and medium - Google Patents

Identification method and system suitable for 3D printing product, electronic equipment and medium

Info

Publication number
CN117292372A
CN117292372A
Authority
CN
China
Prior art keywords
preset
image
model
hollowed
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311171884.2A
Other languages
Chinese (zh)
Inventor
田雷
田占丰
吴斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Meijishi Medical Equipment Co ltd
Original Assignee
Anhui Meijishi Medical Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Meijishi Medical Equipment Co ltd filed Critical Anhui Meijishi Medical Equipment Co ltd
Priority to CN202311171884.2A
Publication of CN117292372A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

An identification method, system, electronic equipment and medium suitable for 3D printed products, relating to the technical field of 3D printing. The method comprises the following steps: obtaining a model image; identifying a preset background color target image corresponding to the model image, and dividing the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm; dividing each single-area image into a plurality of grid images according to a preset dividing standard; detecting the shape characteristics of the preset background color in each grid image; and determining corresponding character information according to the preset background color shape characteristics and reading the product information corresponding to that character information, thereby improving the accuracy with which character information on 3D models is recognized.

Description

Identification method and system suitable for 3D printing product, electronic equipment and medium
Technical Field
The application relates to the technical field of 3D printing, in particular to an identification method, an identification system, electronic equipment and a medium suitable for 3D printing products.
Background
With the development of technology, 3D printing rapid prototyping is increasingly applied in the industrial field. Given the popularization of product manufacturing and the personalized, non-uniform nature of 3D printed parts, each product requires a clear information mark; only when every product is clearly marked can 3D printed products be applied in batches to automated production lines. The single color of 3D printing material, however, is unfavorable for such information marking.
Currently, conventional 3D print recognition methods generally create raised and recessed identifiers on the model body by adding extra geometry to the model surface, then analyze and compare features in images or point cloud data, performing recognition with technologies such as computer vision, pattern recognition and machine learning, for example two-dimensional code recognition.
However, in practical applications, such marks carry no color contrast and can only be recognized with relatively expensive visual analysis algorithms, so the accuracy of character information recognition is low. The identification methods currently used for 3D printing therefore need improvement.
Disclosure of Invention
The application provides an identification method, system, electronic equipment and medium suitable for 3D printed products, which reduce visual recognition cost and improve the accuracy of character information recognition.
In a first aspect, the present application provides an identification method suitable for 3D printed products, comprising:
obtaining a model image;
identifying a preset background color target image corresponding to the model image, and dividing the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm;
dividing each single-area image into a plurality of grid images according to a preset dividing standard;
detecting the shape characteristics of preset background colors in each grid image;
and determining corresponding character information according to the shape characteristics of the preset background color and reading product information corresponding to the character information.
By adopting this technical scheme, the system acquires a model image of the 3D printed model with the camera equipment, processes the model image with a preset segmentation algorithm to split it into a plurality of single-area images, and then divides each single-area image into a plurality of grid images according to the preset division standard. It detects each grid image, recognizes the corresponding character information from the preset background color shape features in each grid image, and reads the product information corresponding to that character information. This reduces visual recognition cost and improves character information recognition accuracy.
Optionally, in response to a user's hollowed-out font adding operation, the corresponding hollowed-out font is called from a preset font library; the hollowed-out font edge area is cut according to a preset edge angle threshold, the cut hollowed-out font information is obtained, and the cut hollowed-out font information is imported into a preset 3D model.
By adopting this technical scheme, the system responds to the hollowed-out font addition instruction sent by the user at the terminal, calls the corresponding hollowed-out font from the preset font library, cuts the hollowed-out font edge area according to the preset edge angle threshold, and imports the cut hollowed-out font information into the preset 3D model. Setting the edge angle of the hollowed-out font prevents graphic edges from deforming during printing due to angle problems and improves the accuracy of hollowed-out font identification.
Optionally, converting the preset 3D model into a model file in a preset standard file format, and importing the model file into preset slicing software; dividing a preset 3D model in the model file into a plurality of slice files through the preset slicing software, and generating printing path information corresponding to each slice file; and transmitting the plurality of slice files and the printing path information to a printer terminal, wherein the printer terminal is used for printing the preset 3D model according to the slice files and the printing path information.
By adopting this technical scheme, the system converts the preset 3D model into a model file in the preset standard file format, divides the model file into a plurality of slice files through the preset slicing software and generates the corresponding printing path information, then sends the printing path information and slice files to the printer terminal for model printing, which effectively improves the printing efficiency of the 3D model.
Optionally, in response to the image recognition operation, acquiring a hollowed-out image of the 3D printed product through the visual camera device; converting the hollowed-out image into a gray image according to a preset graying algorithm; converting the gray level image into a binarized image according to a preset binarization algorithm; removing noise in the binarized image through a preset filtering algorithm to obtain a target image; and extracting the model image in the target image.
By adopting this technical scheme, the system responds to the image recognition operation, acquires a hollowed-out image of the 3D printed product with the visual camera equipment, converts the hollowed-out image into a binarized image through the preset graying and binarization algorithms, filters noise from the binarized image with the preset filtering algorithm to obtain the target image, and extracts the model image from the target image, which improves the accuracy of image recognition.
Optionally, obtaining a plurality of pixel point information of the model image through a preset image recognition algorithm; calculating a gradient value corresponding to the pixel point information; judging whether the gradient value is larger than a preset gradient threshold value or not; if yes, taking the pixel point corresponding to the gradient value as an edge pixel point; if not, the pixel point corresponding to the gradient value is used as a non-edge pixel point.
By adopting this technical scheme, the system determines the pixel points of the model image according to the preset image recognition algorithm and calculates the gradient value of each pixel point. If a gradient value is larger than the preset gradient threshold, the corresponding pixel point is taken as an edge pixel point; otherwise it is taken as a non-edge pixel point. Classifying pixel points by gradient value determines the edge contours corresponding to the characters and improves the accuracy of character information recognition.
Optionally, judging whether the pixel distance between the non-edge pixel points is smaller than a preset pixel distance threshold; if yes, taking the two pixel points smaller than the preset pixel distance threshold value as adjacent pixel points; obtaining an edge contour according to the adjacent pixel points; and determining each single-area image in the model image according to the edge contour.
By adopting this technical scheme, the system compares the pixel distances among the non-edge pixel points; pairs of points closer than the preset pixel distance threshold are treated as adjacent pixel points. From all adjacent pixel points obtained by comparison, the edge contour is determined, and the model image is divided into single-area images along that contour. Decomposing the model image into single character images in this way provides a data basis for subsequent character information recognition and improves its accuracy.
Optionally, reading the character information through an OCR visual recognition algorithm to obtain 3D model parameters corresponding to the character information in a database; and converting the 3D model parameters into files with preset editing text formats and sending the files to a user terminal.
By adopting the technical scheme, the system reads the character information corresponding to the model image according to the OCR visual recognition algorithm to obtain the 3D model parameters in the database corresponding to the character information, and then converts the 3D model parameters into the file with the preset editing text format to be sent to the user terminal, so that the accuracy of acquiring the 3D model parameters can be effectively improved, and the character information recognition efficiency can be improved.
In a second aspect, the present application provides an identification system suitable for 3D printed products, comprising:
The image acquisition module is used for acquiring a model image;
the image segmentation module is used for identifying a preset background color target image corresponding to the model image and dividing the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm;
the image dividing module is used for dividing each single-area image into a plurality of grid images according to a preset dividing standard;
the image recognition module is used for detecting the shape characteristics of the preset background color in each grid image, determining the corresponding character information according to the preset background color shape characteristics, and reading the product information corresponding to the character information.

In a third aspect of the present application, an electronic device is provided.
An electronic device comprises a memory, a processor and a program stored in the memory and capable of running on the processor, wherein the program, when loaded and executed by the processor, implements the identification method suitable for 3D printed products.
In a fourth aspect of the present application, a computer-readable storage medium is provided.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement a method of identifying an applicable 3D printed product.
In summary, one or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. The application obtains a model image of the 3D printed model through the camera equipment, processes the model image with a preset segmentation algorithm to split it into a plurality of single-area images, divides each single-area image into a plurality of grid images according to the preset division standard, detects each grid image, recognizes the corresponding character information from the preset background color shape features, and reads the product information corresponding to that character information, which reduces visual recognition cost and improves character information recognition accuracy.
2. By trimming the edge angles of the hollowed-out fonts, the application accommodates the forming characteristics of 3D printers: if a graphic edge contains right angles, acute angles or similar features, shrinkage of the printing material deforms the edge and distorts the whole figure, and the trimming avoids this effect.
3. The application processes character information through an OCR (optical character recognition) algorithm and reads the 3D model parameters of the 3D printed product, which improves the accuracy of acquiring the 3D model parameters and provides a data basis for the user to analyze the 3D model.
Drawings
Fig. 1 is a schematic flow chart of an identification method applicable to a 3D printing product according to an embodiment of the present application.
Fig. 2 is a schematic image segmentation flow chart of an identification method applicable to a 3D printed product according to an embodiment of the present application.
Fig. 3 is a schematic system structure diagram of an identification method applicable to a 3D printing product according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application.
Reference numerals illustrate: 301. an image acquisition module; 302. an image segmentation module; 303. an image dividing module; 304. an image recognition module; 400. an electronic device; 401. a processor; 402. a memory; 403. a user interface; 404. a network interface; 405. a communication bus.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of embodiments of the present application, words such as "such as" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein as "such as" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In order to facilitate understanding of the methods and systems provided in the embodiments of the present application, a description of the background of the embodiments of the present application is provided before the description of the embodiments of the present application.
Currently, conventional 3D print recognition methods generally create raised and recessed identifiers on the model body by adding extra geometry to the model surface, then analyze and compare features in images or point cloud data, performing recognition with technologies such as computer vision, pattern recognition and machine learning, for example two-dimensional code recognition.
The embodiment of the application discloses an identification method suitable for 3D printed products: a model image on the 3D printed product is obtained through a visual camera device, the model image is divided into a plurality of grid images according to a preset division standard, the shape characteristics of the preset background color in the grid images are recognized to determine character information, and the character information is read. The method mainly solves the problems that traditional non-color-difference identification is costly and recognizes fonts inaccurately.
With the shortcomings of the prior art established by the foregoing background description, the technical solutions in the embodiments of the present application are described in detail below with reference to the drawings in the embodiments; the described embodiments are only some embodiments of the present application, not all of them.
Referring to fig. 1, an identification method suitable for 3D printed products comprises steps S10 to S40, specifically:
S10: A model image is acquired.
Specifically, to simplify the laborious work of manually checking 3D printed models for errors, reduce the cost of the visual analysis algorithm used to identify 3D model identification information, and improve identification accuracy, the system may acquire a hollowed-out image of the 3D printed product through a relatively inexpensive visual camera device. It then converts the hollowed-out image into a gray image through an existing graying algorithm and further converts the gray image into a binarized image with an existing binarization algorithm for subsequent identification. Noise in the binarized image is filtered with a preset filtering algorithm, for example a mean filtering or Gaussian filtering algorithm, producing a filtered binarized image from which the model image is extracted. The model image contains the identification information of the 3D printed model, composed of alphanumeric characters; by reading this identification information, the design parameters and serial number of the 3D printed model can be acquired. For example, in the medical appliance industry, dental mouthpieces are commonly produced by 3D printing; the tolerance for printing errors is small and errors are difficult to check manually. By adding a character string to the 3D printed model, the system can, after printing is complete, capture the model image on the mouthpiece with the visual camera device and identify it to obtain the design parameters of the mouthpiece model, such as size, material, thickness, production date and serial number.
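For illustration only, a minimal Python/OpenCV sketch of the preprocessing chain described above might look as follows; the function name extract_model_image, the Otsu threshold choice and the 3 x 3 median kernel are assumptions of this sketch, not details disclosed by the application:

```python
import cv2

def extract_model_image(frame):
    # Preset graying algorithm: reduce the captured BGR frame to one channel.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Preset binarization algorithm: Otsu's method picks the cut-off automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Preset filtering algorithm: a median filter stands in for the mean/Gaussian
    # filtering named above, since it removes speckle while keeping the image binary.
    return cv2.medianBlur(binary, 3)
```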
It should be noted that the character string is printed as an integrally formed hollowed-out font. Before the system acquires the model image, hollowed-out font information is added to the 3D model for printing; the specific steps comprise S01 to S02:
S01: Responding to the hollowed-out font adding operation of the user, and calling the corresponding hollowed-out font from the preset font library.
Illustratively, to prevent hollowed-out printing from dropping the enclosed centers of characters, every printed part of the hollowed-out font is set to be integrally formed; for example, the centers of the OCR characters 6, 0, 4, 9 and 8 would be lost if printed as ordinary hollowed-out fonts, so integrally formed printing is adopted and the printed characters contain no detached parts. When the user performs the hollowed-out font adding operation on the 3D printing terminal, a control instruction is issued; the system responds automatically and looks up the attributes of the hollowed-out font the user requires, such as font name and size. The system then accesses a pre-established hollowed-out font database to retrieve the template data of the required font.
S02: cutting out the edge area of the hollowed-out fonts according to the preset edge angle threshold value, obtaining cut-out hollowed-out font information, and importing the cut-out hollowed-out font information into a preset 3D model.
Illustratively, during 3D printer forming, if a graphic edge contains right angles, acute angles or similar features, shrinkage of the printing material deforms the edge and distorts the whole figure. To prevent such errors during printing, the system trims the edges of the hollowed-out fonts. The trimming process is as follows: after acquiring the template data of the hollowed-out font, the system extracts its outline information and sets the preset edge angle threshold as the edge bevel angle, for example 135 degrees. The system then automatically identifies the edge area of the hollowed-out font with an existing edge detection algorithm and cuts the curved surface of that edge area according to the bevel angle threshold, obtaining hollowed-out font graphic data whose edge surfaces meet the preset edge angle threshold. Finally, the system imports the graphic data into a preset 3D model file, for example a dental mouthpiece 3D model file, to obtain a 3D model carrying the hollowed-out font.
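For illustration only, the corner trimming of S02 can be sketched in 2D with the shapely library; beveling by a shrink-and-regrow buffer, the function name bevel_sharp_corners and the inset value are assumptions of this sketch, whereas the embodiment above cuts the 3D edge surface at the preset bevel angle:

```python
from shapely.geometry import Polygon

def bevel_sharp_corners(outline_coords, inset=0.2):
    # Morphological opening: shrink the outline, then grow it back with
    # bevelled joins (join_style=3), which replaces right and acute corners
    # with short chamfer segments. The inset distance is an assumed parameter.
    poly = Polygon(outline_coords)
    return poly.buffer(-inset, join_style=3).buffer(inset, join_style=3)
```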
On the basis of the above embodiment, there is a printing operation on the 3D model, and the specific steps include:
Illustratively, in medical device production, printing a 3D mouthpiece model specifically comprises: the system obtains a large volume of patient dentition 3D scan data through existing digital dentistry techniques, extracts typical tooth forms from the scan data with existing AI techniques, and builds a standard digital tooth base. The standard tooth base is then parameterized to generate parameterized digital models of various specifications, and the parametric model is converted by an interpolation algorithm into a model file in a preset standard format, such as an STL file. The system sends the model file to preset slicing software, e.g. Simplify3D. The slicing software generates a number of slice files and production paths, which are sent to the corresponding printer terminal. After receiving and decompressing the files, the printer terminal prints each slice in sequence according to its printing path information, yielding the 3D mouthpiece model.
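For illustration only, the export-and-slice step might be sketched with the trimesh library as follows; the function name export_and_slice and the layer height are assumptions of this sketch, and a production slicer such as Simplify3D would additionally turn each layer into printing path information:

```python
import numpy as np
import trimesh

def export_and_slice(model_path, layer_height=0.1):
    # Convert the model into the preset standard file format (here STL).
    mesh = trimesh.load(model_path)
    mesh.export("model.stl")
    # Cut the mesh into horizontal cross-sections, one per print layer.
    z_min, z_max = mesh.bounds[:, 2]
    sections = []
    for z in np.arange(z_min + layer_height, z_max, layer_height):
        section = mesh.section(plane_origin=[0, 0, z], plane_normal=[0, 0, 1])
        if section is not None:  # some heights may miss the mesh entirely
            sections.append(section)
    return sections
```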
S20: and identifying a preset background color target image corresponding to the model image, and dividing the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm.
Specifically, to extract each character of the model image from the whole, so that every character can be identified and processed independently and character recognition accuracy improves, the system divides the model image into a plurality of single-area images according to a preset segmentation algorithm.
Referring to fig. 2, specific steps include S21 to S24:
S21: Obtaining a plurality of pixel point information of the model image through a preset image recognition algorithm; and calculating a gradient value corresponding to each pixel point's information.
Illustratively, to extract image features more accurately, the system calculates gradient values over the model image. The calculation proceeds as follows: the system preprocesses the image with existing image processing algorithms to obtain a high-definition pixel matrix, then classifies pixels with a preset image recognition algorithm, for example a convolutional neural network that has learned in advance the color-difference rules between characters and backgrounds in different scenes. The system assigns each pixel of the character string area to its character region according to the spatial distribution of pixel values. After classification, the system computes the gradient value of each pixel point: the brightness difference between the point and its upper, lower, left and right neighbours is calculated with the existing Sobel operator, producing a gradient matrix in which each element corresponds to the gradient value of one pixel point.
S22: judging whether the gradient value is larger than a preset gradient threshold value or not; if yes, taking the pixel point corresponding to the gradient value as an edge pixel point; if not, the pixel point corresponding to the gradient value is taken as a non-edge pixel point.
Illustratively, the system examines each element in the gradient matrix in turn, i.e. the gradient value of the corresponding pixel point, and compares it to the preset gradient threshold, which is set by a worker or generated automatically from historical gradient value data. If a pixel's gradient value exceeds the threshold, the system labels it an edge pixel point; if it falls below the threshold, the system labels it a pixel point inside a character. Through this efficient and accurate comparison, the model image acquires clear edge areas, restoring the outline of each character shape in the string and providing a data basis for subsequent recognition using character outlines.
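For illustration only, steps S21 and S22 might be sketched with OpenCV and NumPy as follows; the value 60.0 stands in for the preset gradient threshold and is an assumption of this sketch:

```python
import cv2
import numpy as np

def classify_edge_pixels(gray, grad_thresh=60.0):
    # Sobel operator: brightness differences to the horizontal and vertical neighbours.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Gradient matrix: one gradient magnitude per pixel point.
    grad = np.hypot(gx, gy)
    # S22: above the threshold -> edge pixel point; below -> non-edge pixel point.
    edge_mask = grad > grad_thresh
    return grad, edge_mask
```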
S23: judging whether the pixel distance between the non-edge pixel points is smaller than a preset pixel distance threshold value or not; if yes, taking two pixel points smaller than the preset pixel distance threshold value as adjacent pixel points.
Illustratively, starting from the determined non-edge pixel points, the system computes the actual pixel distance between them and sets a preset pixel distance threshold. It compares the distance from each non-edge pixel to the others: if the distance between two non-edge pixel points is smaller than the threshold, the system judges them adjacent; if it is greater, the two points cannot be directly connected. After the full inspection, a connectivity map whose nodes are adjacent points is formed among the non-edge pixel points. Computing these pixel distances lays the foundation for subsequently restoring character outlines with algorithms such as minimum enclosing circles and rectangles.
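For illustration only, S23 might be sketched as follows under the assumption that the preset pixel distance threshold is about one pixel, in which case grouping adjacent non-edge pixel points reduces to 8-connected component labelling:

```python
import numpy as np
from scipy import ndimage

def group_adjacent_pixels(edge_mask):
    # Non-edge pixels are everything the gradient test did not mark as edge.
    non_edge = ~edge_mask
    # 3 x 3 structuring element = 8-connectivity: pixels within one step of
    # each other count as adjacent and receive the same label.
    labels, count = ndimage.label(non_edge, structure=np.ones((3, 3), dtype=int))
    return labels, count  # pixels sharing a label form one adjacency group
```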
S24: obtaining an edge contour according to the adjacent pixel points; and determining each single-area image in the model image according to the edge contour.
Illustratively, the system assembles the complete edge contour lines of the whole model image from the obtained adjacency relations between pixel points. It then invokes an existing connectivity analysis algorithm to process the contour lines: the algorithm tallies the topological relations of the contour pixels, identifies possible break points and junction points, models the junctions, and automatically separates each closed-loop independent contour. Each individual outline represents the shape of one character component in the model image. The system attaches a number and a position reference to every contour, generating the single-area images used as input for subsequent OCR recognition.
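For illustration only, S24 might be sketched with OpenCV contour extraction; the min_area noise filter and the left-to-right numbering are assumptions of this sketch:

```python
import cv2

def split_single_regions(binary_img, min_area=50):
    # Recover the closed outer contours of the character shapes.
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    # Sort left to right so the numbering follows reading order.
    for number, c in enumerate(sorted(contours, key=lambda c: cv2.boundingRect(c)[0])):
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:  # drop specks that cannot be characters
            # Number + position reference + cropped single-area image.
            regions.append((number, (x, y), binary_img[y:y + h, x:x + w]))
    return regions
```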
S30: and dividing each single-area image into a plurality of grid images according to a preset dividing standard.
Specifically, after obtaining the single-area images, the system divides each one into a grid of rectangular cells according to a preset division standard; for example, with a 6 x 6 standard, each single-area image becomes 36 rectangular cell images.
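For illustration only, the division of S30 might be sketched as follows, with the 6 x 6 standard of the example as the default:

```python
import numpy as np

def grid_cells(region_img, rows=6, cols=6):
    # Split one single-area image into rows x cols rectangular cell images.
    h, w = region_img.shape[:2]
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return [region_img[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            for r in range(rows) for c in range(cols)]
```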
S40: detecting the shape characteristics of preset background colors in each grid image; and determining corresponding character information according to the preset background color shape characteristics and reading product information corresponding to the character information.
Specifically, in response to the identification operation issued by the user, the system sends a backlight control instruction to the light control system, directing the light source to add, behind the hollowed-out back surface of the 3D model, a backlight whose base color differs from that of the 3D model material. The system then inspects, with an existing image detection algorithm, the 6 x 6 cell images of each single-area image and compares them against the shape characteristics of the preset background color; cells matching the preset background color are marked. The marked cell pattern is fed as input features into a pre-trained neural network model, trained on the distribution characteristics of cell images and their corresponding characters. The network yields the character information for each single-area image, and the characters of all single-area images together form the character string information. The string is then read by an existing OCR visual recognition algorithm to obtain the corresponding entry in the product database. If the entry's keywords are stored in the database, the product data of the 3D model corresponding to the string, i.e. the 3D model parameters such as printing size, serial number and production date, are read directly. Finally, the system converts the product data into a file in a preset editable text format and sends it to the user terminal.
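For illustration only, the marking and reading of S40 might be sketched as follows; the backlight color, the tolerance, and the substitution of Tesseract OCR for the pre-trained neural network described above are all assumptions of this sketch (pytesseract also requires the external Tesseract engine to be installed):

```python
import numpy as np
import pytesseract

def read_character(region_img, backlight_bgr=(0, 0, 255), tol=40.0):
    # Mark the pixels whose color matches the preset backlight background.
    diff = np.linalg.norm(region_img.astype(np.float32) - np.array(backlight_bgr, np.float32), axis=2)
    mask = (diff < tol).astype(np.uint8) * 255
    # Hand the marked pattern to OCR; --psm 10 treats it as a single character.
    return pytesseract.image_to_string(mask, config="--psm 10").strip()
```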
The following are system embodiments of the present application, which may be used to perform the method embodiments of the present application. For details not disclosed in the system embodiments, refer to the method embodiments of the present application.
Referring to fig. 3, a system for the identification method for 3D printed products provided in an embodiment of the present application includes: an image acquisition module 301, an image segmentation module 302, an image division module 303 and an image recognition module 304, wherein the image acquisition module 301 is configured to acquire a model image;
the image segmentation module 302 is configured to identify a preset background color target image corresponding to the model image, and divide the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm;
the image dividing module 303 is configured to divide each single-area image into a plurality of grid images according to a preset dividing standard;
an image recognition module 304, configured to detect a preset background color shape feature in each grid image; and determining corresponding character information according to the preset background color shape characteristics and reading product information corresponding to the character information.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
The application also discloses electronic equipment. Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application. The electronic device 400 may include: at least one processor 401, at least one network interface 404, a user interface 403, a memory 402, at least one communication bus 405.
Wherein a communication bus 405 is used to enable connected communications between these components.
The user interface 403 may include a display screen (Display) and a camera (Camera); optionally, the user interface 403 may further include a standard wired interface and a standard wireless interface.
The network interface 404 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 401 may include one or more processing cores. Using various interfaces and lines, the processor 401 connects the parts of the entire server, and performs the server's functions and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 402 and calling data stored in the memory 402. Optionally, the processor 401 may be implemented in at least one hardware form among digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 401 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem and the like. The CPU mainly handles the operating system, user interface rendering, application programs and so on; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communications. It will be appreciated that the modem may also not be integrated into the processor 401 and may instead be implemented by a single chip.
The Memory 402 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 402 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 402 may be used to store instructions, programs, code sets, or instruction sets. The memory 402 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described various method embodiments, etc.; the storage data area may store data or the like involved in the above respective method embodiments. The memory 402 may also optionally be at least one storage device located remotely from the aforementioned processor 401. Referring to fig. 4, an operating system, a network communication module, a user interface module, and an application program applicable to an identification method of a 3D printing product may be included in a memory 402 as a computer storage medium.
In the electronic device 400 shown in fig. 4, the user interface 403 is mainly used as an interface for user input, obtaining data entered by the user, and the processor 401 may be used to invoke the application in memory 402 storing the identification method for 3D printed products, which, when executed by one or more processors 401, causes the electronic device 400 to perform the method as in one or more of the embodiments described above. It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as series of action combinations, but those skilled in the art will understand that the present application is not limited by the order of actions described, since some steps may be performed in another order or simultaneously. Those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a logical functional division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through service interfaces, devices or units, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the scope and spirit of the disclosure being indicated by the claims.

Claims (10)

1. An identification method suitable for 3D printed products, comprising:
obtaining a model image;
identifying a preset background color target image corresponding to the model image, and dividing the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm;
dividing each single-area image into a plurality of grid images according to a preset dividing standard;
detecting the shape characteristics of preset background colors in each grid image;
and determining corresponding character information according to the shape characteristics of the preset background color and reading product information corresponding to the character information.
2. The identification method suitable for 3D printed products according to claim 1, further comprising, prior to the obtaining of the model image:
responding to the hollowed-out font adding operation of a user, and calling the corresponding hollowed-out font in a preset font library;
cutting the hollowed-out font edge area according to a preset edge angle threshold value, obtaining the cut-out hollowed-out font information, and importing the cut-out hollowed-out font information into a preset 3D model.
3. The identification method suitable for 3D printed products according to claim 2, wherein after the cut hollowed-out font information is obtained and imported into the preset 3D model, the method further comprises:
converting the preset 3D model into a model file in a preset standard file format, and importing the model file into preset slicing software;
dividing a preset 3D model in the model file into a plurality of slice files through the preset slicing software, and generating printing path information corresponding to each slice file;
and transmitting the plurality of slice files and the printing path information to a printer terminal, wherein the printer terminal is used for printing the preset 3D model according to the slice files and the printing path information.
4. The identification method suitable for 3D printed products according to claim 1, wherein the obtaining of the model image comprises:
responding to the image identification operation, and acquiring a hollowed-out image of the 3D printing product through a visual camera device;
converting the hollowed-out image into a gray image according to a preset graying algorithm;
converting the gray level image into a binarized image according to a preset binarization algorithm;
removing noise in the binarized image through a preset filtering algorithm to obtain a target image;
and extracting the model image in the target image.
5. The identification method suitable for 3D printed products according to claim 4, further comprising, after the extracting of the model image from the target image:
obtaining a plurality of pixel point information of the preset background color target image through a preset image recognition algorithm;
calculating a gradient value corresponding to the pixel point information;
judging whether the gradient value is larger than a preset gradient threshold value or not;
if yes, taking the pixel point corresponding to the gradient value as an edge pixel point;
if not, the pixel point corresponding to the gradient value is used as a non-edge pixel point.
6. The identification method suitable for 3D printed products according to claim 5, further comprising, after obtaining the edge pixel points and the non-edge pixel points:
judging whether the pixel distance between the non-edge pixel points is smaller than a preset pixel distance threshold value or not;
if yes, taking the two pixel points smaller than the preset pixel distance threshold value as adjacent pixel points;
obtaining an edge contour according to the adjacent pixel points;
and determining the single-area image in the preset background color target image according to the edge profile.
7. The identification method suitable for 3D printed products according to claim 1, wherein the determining of corresponding character information according to the preset background color shape features and the reading of product information corresponding to the character information comprise:
reading the character information through an OCR visual recognition algorithm to obtain 3D model parameters corresponding to the character information in a database;
and converting the 3D model parameters into files with preset editing text formats and sending the files to a user terminal.
8. A system based on the identification method suitable for 3D printed products according to any one of claims 1 to 7, characterized in that it comprises:
an image acquisition module (301) for acquiring a model image;
the image segmentation module (302) is used for identifying a preset background color target image corresponding to the model image and dividing the preset background color target image into a plurality of single-area images according to a preset segmentation algorithm;
the image dividing module (303) is used for dividing each single-area image into a plurality of grid images according to a preset dividing standard;
an image recognition module (304) for detecting a preset background color shape feature in each grid image; and determining corresponding character information according to the shape characteristics of the preset background color and reading product information corresponding to the character information.
9. An electronic device comprising a processor (401), a memory (402), a user interface (403) and a network interface (404), the memory (402) being configured to store instructions, the user interface (403) and the network interface (404) being configured to communicate with other devices, the processor (401) being configured to execute the instructions stored in the memory (402) to cause the electronic device (400) to perform the identification method suitable for 3D printed products according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions that, when executed, perform the method steps of the identification method suitable for 3D printed products according to any one of claims 1 to 7.
CN202311171884.2A 2023-09-12 2023-09-12 Identification method and system suitable for 3D printing product, electronic equipment and medium Pending CN117292372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311171884.2A CN117292372A (en) 2023-09-12 2023-09-12 Identification method and system suitable for 3D printing product, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311171884.2A CN117292372A (en) 2023-09-12 2023-09-12 Identification method and system suitable for 3D printing product, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117292372A true CN117292372A (en) 2023-12-26

Family

ID=89238134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311171884.2A Pending CN117292372A (en) 2023-09-12 2023-09-12 Identification method and system suitable for 3D printing product, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117292372A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107031033A (en) * 2017-05-10 2017-08-11 山东大学 It is a kind of can 3D printing hollow out Quick Response Code model generating method and system
CN111325835A (en) * 2020-03-31 2020-06-23 浙江隐齿丽医学技术有限公司 Dental model preparation system and method and shell-shaped tooth appliance preparation method
CN112317962A (en) * 2020-10-16 2021-02-05 广州黑格智造信息科技有限公司 Marking system and method for invisible appliance production
US20210074061A1 (en) * 2019-09-05 2021-03-11 Align Technology, Inc. Artificially intelligent systems to manage virtual dental models using dental images
US20230141168A1 (en) * 2020-07-02 2023-05-11 Guangzhou Heygears Imc. Inc Production Method and System of Dental Instrument, Apparatus, and Medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination