CN113989436A - Three-dimensional halftone reconstruction method based on HVS and stochastic printer model

Info

Publication number: CN113989436A (granted publication: CN113989436B)
Application number: CN202111268296.1A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 易尧华 (Yi Yaohua), 张鸿瑞 (Zhang Hongrui)
Original and current assignee: Wuhan University (WHU)
Prior art keywords: halftone, image, model, dimensional, input
Legal status: Granted; Active

Classifications

    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
    • G06T 1/20 - Processor architectures; processor configuration, e.g. pipelining (G06T 1/00: General purpose image data processing)

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention provides a three-dimensional halftone reconstruction method based on the human visual system (HVS) and a stochastic printer model. To improve on the existing three-dimensional error diffusion method, it combines a human visual model with a printer model and proposes a spatial Hilbert scanning path, improving the traditional three-dimensional error diffusion algorithm and solving the problems of regular texture and discontinuous edge transitions caused by the sensitivity of the classical three-dimensional halftone reconstruction method to 3D printer deformation, thereby achieving a better visual effect. Finally, for image simulation the invention provides a GPU-based ray casting algorithm and applies an illumination model within it, addressing the low sampling efficiency and poor rendering precision of the traditional ray casting algorithm, increasing the ray casting speed and enhancing the rendering effect. The method has application value in the technical field of color digital imaging.

Description

Three-dimensional halftone reconstruction method based on HVS and stochastic printer model
Technical Field
The invention relates to the technical field of color digital imaging, and in particular to a three-dimensional halftone reconstruction method based on the human visual system (HVS) and a stochastic printer model, mainly used to solve the problems of regular texture, discontinuous edge transitions and the like caused by the sensitivity of three-dimensional halftone reconstruction methods to 3D printer deformation.
Background
The three-dimensional halftone technique is a method for processing three-dimensional input data in the 3D printing manufacturing process, and can be regarded as a three-dimensional extension of the two-dimensional digital image halftone technique. In 3D printing, input data generally exists either as a 3D model composed of triangular meshes or as a volume data model composed of voxels, and must be converted through a series of transformations into rasterized binary data that a printer can output. The method by which a volume data model is converted into three-dimensional halftone data that a printer can output is called the three-dimensional halftone technique.
Compared with laser printers, inkjet printers can achieve relatively ideal hard dots and are more stable during printing, and have therefore received more attention from researchers. However, the three-dimensional halftone technique differs significantly from two-dimensional digital-image halftoning in its application scenario: the 3D printing process lacks a substrate, and ink droplets must be closely arranged and stacked into a product. The planar digital-image halftone technique therefore cannot simply be extended to the three-dimensional halftone field. At present the three-dimensional error diffusion algorithm is the most widely applied; it preserves the tone information of the original image relatively completely and supports high printing resolution. However, it still suffers from regular texture, discontinuous edge transitions and similar artifacts in 3D printing, and the overlap of ink droplets makes these phenomena more noticeable, resulting in a poor visual effect.
Disclosure of Invention
The invention aims to remedy the deficiencies of the prior art. Addressing the problems of the existing three-dimensional error diffusion algorithm, it provides a three-dimensional halftone reconstruction method based on the human visual system (HVS) and a stochastic printer model, overcoming the shortcomings of the traditional three-dimensional halftone reconstruction method and improving the algorithm's processing of continuous-tone images, thereby achieving a better visual effect.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a three-dimensional halftone reconstruction method based on an HVS and a stochastic printer model comprises the following steps:
s1: preprocessing input three-dimensional continuous tone data to construct a three-dimensional discrete data set;
s2: introducing a printer model into a traditional three-dimensional error diffusion method, and taking the difference between an input value of a quantization function and a color value of output three-dimensional halftone data as a quantization error fed back in the three-dimensional error diffusion method in order to reduce a dot gain phenomenon;
s3: obtaining binary output b (x, y, z) by adopting a three-dimensional error diffusion method based on linear error enhancement through a printer model;
s4: respectively passing the obtained halftone image b(x, y, z) and the original image f(x, y, z) through the human visual system model (HVS) to obtain the perceived images b̃(x, y, z) and f̃(x, y, z);
s5: after introducing the human visual model, obtaining the human visual difference D̃(x, y, z) from the halftone image and the original image, which can be expressed as

D̃(x, y, z) = f̃(x, y, z) − b̃(x, y, z);
S6: calculating a feedback coefficient H (u, v) of visual difference according to the pixel gray scale characteristics of the current region, and compensating the visual difference of human eyes for each pixel of the original image through the feedback coefficient;
s7: performing three-dimensional error diffusion processing based on linear error enhancement on the compensated image again to obtain binary output;
s8: introducing Gaussian random noise into the constant threshold T to obtain a modulated threshold T', and performing halftone quantization of the original image with T';
s9: determining the feedback times by integrating the image processing effect and efficiency through experiments;
s10: the method adopts a ray projection algorithm based on the GPU to realize image simulation, and objectively evaluates parameters through image quality: the evaluation of the processing effect of the algorithm is realized by weighting the signal-to-noise ratio and the structural similarity.
Specifically, step S2 implements the printer model simulation using a random dot model, comprising the following steps:
s21: designing a slice test template for the printer and scanning the printed output with a scanner;
s22: transmitting the scanned result to a computer and obtaining statistics of the printed output through image processing such as quantization, segmentation and pixel particle detection;
s23: applying random horizontal dot displacement, with adjacent rows displaced in different directions;
s24: adding random vertical dot displacement; the output halftone image is the sum of the tone value in the output unit of the color 3D printer and the dot spread function of the ideal printer.
Further, the implementation of the three-dimensional error diffusion method based on linear error enhancement in step S3 includes the following sub-steps:
s31: presetting a false threshold quantizer input value u, and carrying out quantization processing by using the value;
s32: scanning the three-dimensional data set of the input model according to a spatial Hilbert scanning path;
s33: for each voxel point, respectively calculating the real-threshold quantizer input value u(x, y, z) and the false-threshold quantizer input value u_fake(x, y, z), which is linearly enhanced according to the difference between the voxel value and the threshold;
S34: performing threshold processing by using an input value of a false threshold quantizer and outputting binary halftone data;
s35: calculating an error value through the output data of the halftone and the input value u (x, y, z) of the real threshold quantizer, distributing the error to neighborhood points according to a three-dimensional error diffusion filter, and processing the neighborhood points one by one until all input data sets are scanned to obtain an output result of the halftone.
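Steps S31 to S35 can be sketched as follows; the enhancement factor `k`, the raster scan order (standing in for the spatial Hilbert path of S32), and the simple three-tap diffusion filter are illustrative assumptions, not the patent's exact parameters:

```python
import numpy as np

def error_diffusion_3d(f, T=0.5, k=0.5):
    # Linear-error-enhanced 3D error diffusion: quantize against the
    # enhanced (false) input u_fake, but diffuse the error computed from
    # the REAL quantizer input u (S34-S35).
    f = f.astype(float).copy()
    Z, Y, X = f.shape
    b = np.zeros_like(f)
    # toy 3D diffusion filter: right neighbor, next row, next slice
    taps = [((0, 0, 1), 0.5), ((0, 1, 0), 0.25), ((1, 0, 0), 0.25)]
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                u = f[z, y, x]                 # real quantizer input
                u_fake = u + k * (u - T)       # linearly enhanced input
                b[z, y, x] = 1.0 if u_fake >= T else 0.0
                err = u - b[z, y, x]           # error from the real input
                for (dz, dy, dx), w in taps:
                    zz, yy, xx = z + dz, y + dy, x + dx
                    if zz < Z and yy < Y and xx < X:
                        f[zz, yy, xx] += err * w
    return b

vol = np.full((4, 4, 4), 0.5)                  # mid-gray test volume
ht = error_diffusion_3d(vol)
```

The enhancement sharpens decisions near edges while the diffused error still preserves the local mean tone.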
Further, the specific implementation of the spatial Hilbert scan path in S32 comprises the following steps:
s321: starting from the vertex at the lower-left corner of the uppermost slice of the input model bounding box, scanning according to a plane Hilbert curve and ensuring that every point in the slice is scanned, so that the scanning path of a single slice forms a continuous polyline.
s322: taking the position directly below the voxel at which the current slice scan ends as the scanning start of the next slice, so that the end of an odd slice layer connects exactly to the start of the following even slice layer, and repeating the plane Hilbert curve scan until the input model bounding box is traversed.
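A minimal sketch of this stacked path: plane Hilbert curves (side length a power of two) are concatenated layer by layer, with odd layers traversed in reverse so that each slice starts directly below the voxel where the previous slice ended. The `d2xy` index-to-coordinate routine is the standard iterative Hilbert-curve construction:

```python
def d2xy(n, d):
    # Map curve index d to (x, y) on an n x n plane Hilbert curve
    # (n a power of two); standard iterative construction.
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def spatial_hilbert_path(n, layers):
    # Stack plane curves; odd layers run backwards so each slice starts
    # directly below the voxel where the previous slice ended (S322).
    plane = [d2xy(n, d) for d in range(n * n)]
    path = []
    for z in range(layers):
        pts = plane if z % 2 == 0 else plane[::-1]
        path.extend((x, y, z) for x, y in pts)
    return path

path = spatial_hilbert_path(4, 3)
```

Every pair of consecutive voxels on the path is face-adjacent, which is exactly the continuity property the patent exploits to avoid directional error-transport artifacts.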
Further, in step S4 a Gaussian function is used to replace the Nasanen function model in the human visual system (HVS) model;
since the HVS model has circular symmetry, and the Gaussian function has many features suitable for halftone processing besides circular symmetry (in particular, the inverse Fourier transform of the filter is again Gaussian), the Gaussian function is often used in place of the Nasanen function model. It can be expressed as:

H_Gaus(u, v) = exp(−(u² + v²) / (2σ²))

where u and v are frequency-domain coordinates and σ is the spread of the Gaussian curve.
The halftone image after the human visual model can be expressed as:

b̃(x, y, z) = IFFT(FFT(b(x, y, z)) · H_Gaus(u, v))

and the continuous-tone image can be expressed as:

f̃(x, y, z) = IFFT(FFT(f(x, y, z)) · H_Gaus(u, v))

where IFFT denotes the inverse Fourier transform, FFT the Fourier transform, and H_Gaus the Gaussian function.
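The frequency-domain filtering described above can be sketched for a single 2D slice (the 3D case is analogous); the spread σ here is an illustrative value in normalized frequency units, not one fixed by the patent:

```python
import numpy as np

def hvs_gaussian_filter(img, sigma=0.1):
    # out = IFFT(FFT(img) * H_Gaus(u, v)) with the circularly symmetric
    # Gaussian low-pass H_Gaus(u, v) = exp(-(u^2 + v^2) / (2 sigma^2)).
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]   # frequency-domain coordinates
    v = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(F * H))

rng = np.random.default_rng(0)
halftone = (rng.random((16, 16)) > 0.5).astype(float)  # toy binary slice
perceived = hvs_gaussian_filter(halftone)
```

Because H_Gaus(0, 0) = 1, the filter preserves the mean tone while attenuating the high-frequency dot structure, mimicking how the eye blurs a halftone into a continuous tone.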
Further, the specific implementation of S5 includes the following sub-steps:
s51: processing the continuous-tone image with the three-dimensional error diffusion method based on linear error enhancement, linearly enhancing the error signal in a false manner before quantization;
s52: respectively passing the obtained binary output and input through a human eye visual model HVS;
s53: calculating the final feedback visual difference D̃(x, y, z).
Further, the GPU-based ray casting algorithm in step S10 is implemented by the following steps:
s101: preprocessing: measuring the opacity of each color of the semitransparent material used by the 3D printer and assigning it to the voxels of the corresponding color channel;
s102: resampling: first determining a sampling depth d, then performing interpolated sampling at equal intervals of d along the ray direction, obtaining the color value and opacity at each sampling position by trilinear interpolation of the 8 neighboring voxels;
s103: calculating diffuse and specular light: shading each voxel with the Blinn-Phong illumination model and calculating the XYZ values of the sampling points from the diffuse reflection and specular components.
S104: color synthesis: color synthesis is performed with the light absorption and emission model; numerically integrating the color along the ray casting path gives

C = Σ_{i=1}^{n} C_i · A_i · Π_{j=1}^{i−1} (1 − A_j)

where C represents the color value resulting from the final blending of each ray, C_i and A_i respectively represent the color value and opacity at the i-th sampling point along the ray (numbered from the viewpoint), and n is the number of equal segments into which each ray is divided.
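The per-ray compositing can be sketched as follows (front-to-back accumulation with early ray termination; a single scalar color channel stands in for RGB):

```python
def composite_ray(colors, opacities):
    # Light absorption and emission model: each sample contributes its
    # color C_i weighted by its opacity A_i and by the transparency
    # accumulated in front of it.
    C, T = 0.0, 1.0
    for c, a in zip(colors, opacities):          # samples from the viewpoint
        C += T * c * a
        T *= 1.0 - a
        if T < 1e-4:                             # early ray termination
            break
    return C

fully_opaque = composite_ray([1.0], [1.0])       # single opaque sample
half_cover = composite_ray([1.0, 0.0], [0.5, 1.0])
```

Front-to-back order allows the loop to stop once accumulated transparency is negligible, which is one source of the speedup a GPU ray caster exploits.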
Further, the weighted signal-to-noise ratio and the structural similarity in step S10 are specifically calculated as follows:
(1) calculating the weighted signal-to-noise ratio of the processed halftone image:
the weighted signal-to-noise ratio is the ratio of the image evaluation signal energy to the average noise energy, and represents the restoration degree of the original image to the halftone image in the field of digital halftone processing. The larger the WSNR is, the smaller the difference between the original image and the halftone image is, the better the halftone image restoration effect is, and the calculation formula is as follows:
Figure BDA0003327718560000043
wherein f (x, y, z) and b (x, y, z) are voxel values of (x, y, z) coordinates corresponding to the input three-dimensional model data and the halftone processing result data respectively, M, N and L are the length, width and height of the bounding box of the input model, and HVS is the discrete Fourier transform of the visual sensitivity function of the human eye.
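A 2D single-slice sketch of the WSNR computation (the 3D case simply sums over all three frequency axes); the Gaussian HVS weighting used here is an illustrative stand-in for the patent's visual sensitivity function:

```python
import numpy as np

def wsnr(f, b, hvs):
    # Ratio of HVS-weighted signal energy to HVS-weighted noise energy, in dB.
    F, B = np.fft.fft2(f), np.fft.fft2(b)
    signal = np.sum(np.abs(F * hvs) ** 2)
    noise = np.sum(np.abs((F - B) * hvs) ** 2)
    return 10.0 * np.log10(signal / noise)

rng = np.random.default_rng(1)
f = rng.random((8, 8))                         # toy continuous-tone slice
b = (f > 0.5).astype(float)                    # crude threshold halftone
u = np.fft.fftfreq(8)[:, None]
v = np.fft.fftfreq(8)[None, :]
hvs = np.exp(-(u ** 2 + v ** 2) / (2 * 0.1 ** 2))  # Gaussian HVS weighting
score = wsnr(f, b, hvs)
```

The weighting discounts errors at frequencies the eye cannot resolve, so a visually pleasing halftone can score well even though its raw pixel-wise error is large.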
(2) Calculating the structural similarity of the processed halftone image:
Structural similarity (SSIM) is an evaluation method for measuring the similarity between two images. The calculation formula is:

SSIM(f, b) = ((2 · ε_f · ε_b + C1)(2 · σ_fb + C2)) / ((ε_f² + ε_b² + C1)(σ_f² + σ_b² + C2))

where ε_f is the mean of the input image, ε_b is the mean of the halftone image, σ_f² is the variance of the input image, σ_b² is the variance of the halftone image, σ_fb is the covariance of the input image and the output image, and C1 and C2 are constants approximately equal to 0. The mean, variance and covariance are computed respectively as:

ε_f = (1 / MN) · Σ_{x=1}^{M} Σ_{y=1}^{N} f(x, y)

σ_f² = (1 / (MN − 1)) · Σ_{x=1}^{M} Σ_{y=1}^{N} (f(x, y) − ε_f)²

σ_fb = (1 / (MN − 1)) · Σ_{x=1}^{M} Σ_{y=1}^{N} (f(x, y) − ε_f)(b(x, y) − ε_b)

where M and N are the width and height of the image, and f and b are the original image and the halftone image respectively. In the evaluation of three-dimensional halftones, the structural similarity index is applied by treating each slice of the input model as an input image and the corresponding slice of the output halftone result as an output image; the structural similarity of each slice is computed and averaged to obtain the structural similarity of the whole data set.
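The slice-wise evaluation can be sketched as below; a single global window per slice is used here (practical SSIM implementations usually use sliding local windows), and the stabilizing constants C1, C2 ≈ 0 are illustrative:

```python
import numpy as np

def ssim_slice(f, b, C1=1e-8, C2=1e-8):
    # Global-window SSIM of one slice pair, following the formula above.
    ef, eb = f.mean(), b.mean()
    vf, vb = f.var(ddof=1), b.var(ddof=1)
    cov = ((f - ef) * (b - eb)).sum() / (f.size - 1)
    num = (2 * ef * eb + C1) * (2 * cov + C2)
    den = (ef ** 2 + eb ** 2 + C1) * (vf + vb + C2)
    return num / den

def ssim_volume(fvol, bvol):
    # Treat each layer as an image pair and average the per-slice scores.
    return float(np.mean([ssim_slice(fz, bz) for fz, bz in zip(fvol, bvol)]))

rng = np.random.default_rng(2)
vol = rng.random((3, 8, 8))                    # toy 3-slice volume
score_same = ssim_volume(vol, vol)             # identical volumes
score_diff = ssim_volume(vol, rng.random((3, 8, 8)))
```

Identical volumes score exactly 1; any difference in mean, variance, or structure pulls the score below 1.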
According to the technical scheme, the invention has the beneficial effects that:
(1) The three-dimensional halftone technique processes three-dimensional input data in the 3D printing manufacturing process and can be regarded as a three-dimensional extension of the two-dimensional digital image halftone technique. The quality of a print is ultimately judged by the human visual system; moreover, a print produced by a 3D printer has undergone both halftone processing and the printing process before any subjective judgment is made, so the perceived difference between input and output contains both the halftone-processing error and the print deviation caused by printer dot gain. The invention therefore uses a simulated printer model and a human visual model to measure this visual difference and feeds it back to the processed image, obtaining a halftone image that conforms to human visual characteristics.
(2) The invention develops in detail a three-dimensional error diffusion algorithm based on the human visual model, improving on the traditional three-dimensional error diffusion algorithm. A Gaussian function model approximating the human visual model is used as the feedback basis, and a visual difference feedback system is established, so that the structural texture details of the printed matter are well protected, the processed halftone image is closer to human visual characteristics, and visual texture is reduced. A random dot model is also adopted to reduce the dot gain that easily occurs when the halftone image is printed.
(3) The spatial Hilbert curve scanning path designed by the invention replaces the spatial serpentine scanning path of the linear-error-enhanced three-dimensional error diffusion algorithm, eliminating the regular texture that the serpentine path introduces into the output halftone data and thereby compensating for the regular-texture phenomenon to a great extent. The spatial Hilbert filling curve of the invention continuously traverses all points in all slices; although its generation algorithm is more complex, the output halftone result has the best visual effect. The continuous scanning trajectory it generates greatly shortens the forming time of the scanning path, improves overall printing efficiency, and can meet the high color-precision requirements of 3D printing.
(4) The invention improves the traditional three-dimensional error diffusion algorithm by combining the human visual model with the printer model; it is innovative, achieves a better processing effect, and has application value in the field of color digital imaging.
Drawings
FIG. 1 is a flow chart of three-dimensional error diffusion based on linear error enhancement;
FIG. 2 is a schematic block diagram of a three-dimensional error diffusion method of the present invention;
fig. 3 is a spatial Hilbert scan path diagram of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
FIG. 1 is a flow chart of three-dimensional error diffusion based on linear error enhancement according to the present invention.
As shown in fig. 1, the method of this example includes the steps of:
scanning a three-dimensional data set of an input model according to a certain scanning path;
step two, for each voxel point, respectively calculating the real-threshold quantizer input value u(x, y, z) and the false-threshold quantizer input value u_fake(x, y, z), which is linearly enhanced according to the difference between the voxel value and the threshold;
step three, performing threshold processing using the false-threshold quantizer input value u_fake(x, y, z) and outputting binary halftone data, calculating an error value from the halftone output data and the real-threshold quantizer input value u(x, y, z), and distributing the error to neighborhood points according to a three-dimensional error diffusion filter;
and step four, processing the data point by voxel until all input data sets are scanned, and obtaining a halftone output result.
FIG. 2 is a schematic block diagram of a three-dimensional error diffusion algorithm based on an HVS and a printer model according to the present invention.
As shown in fig. 2, the method of this example includes the steps of:
firstly, preprocessing input three-dimensional continuous tone data to construct a three-dimensional discrete data set;
step two, introducing the printer model into a traditional three-dimensional error diffusion method, and taking the difference between the input value of a quantization function and the color value of output three-dimensional halftone data as a quantization error fed back in the three-dimensional error diffusion method to achieve the effect of reducing the point gain phenomenon;
step three, obtaining binary output b (x, y, z) by adopting a three-dimensional error diffusion method based on linear error enhancement through a printer model;
step four, the obtained halftone image b(x, y, z) and the original image f(x, y, z) are respectively passed through the human visual system model (HVS) to obtain the perceived images b̃(x, y, z) and f̃(x, y, z).
the specific substeps of step four are as follows:
step 4-1, processing the continuous-tone image with a three-dimensional error diffusion method, comparing the current pixel gray value with a threshold to obtain a binary output, and subtracting the binary output from the pixel value of the original continuous-tone image to obtain an error value, achieving error compensation;
and 4-2, passing the obtained binary output and the input respectively through the human visual system model (HVS); the halftone image after the human visual model can be expressed as:

b̃(x, y, z) = IFFT(FFT(b(x, y, z)) · H_Gaus(u, v))

and the continuous-tone image can be expressed as:

f̃(x, y, z) = IFFT(FFT(f(x, y, z)) · H_Gaus(u, v));
and 4-3, performing difference on input and output of the human eye vision model to obtain the vision difference which is finally used as feedback.
Fifthly, the halftone image after the human visual model response is subtracted from the original image (likewise filtered by the visual model) to obtain the human visual difference D̃(x, y, z), which can be expressed as

D̃(x, y, z) = f̃(x, y, z) − b̃(x, y, z).
The step five comprises the following substeps:
at step 5-1, current HVS models for halftone techniques are limited to the case where the contrast sensitivity function (CSF) is a linear time-invariant filter. Research on the CSF of the HVS model has only just begun; four functions are mainly applied in the halftone domain: the Campbell function, the Mannos function, the Nasanen function and the Daly function. For digital halftoning, the Nasanen function model has good low-pass characteristics and the halftone images produced with it have the best subjective visual effect, so it is generally preferred. Because the human visual model has the characteristic of circular symmetry, the invention adopts a Gaussian function model in place of the Nasanen function model, which can be expressed as:

H_Gaus(u, v) = exp(−(u² + v²) / (2σ²))

where u and v are frequency-domain coordinates and σ is the spread of the Gaussian curve.
Step 5-2, the human eye visual difference can be expressed as:
Figure BDA0003327718560000084
step six, calculating a feedback coefficient H (u, v) of visual difference according to the pixel gray scale characteristics of the current region;
seventhly, performing three-dimensional error diffusion processing based on linear error enhancement on the compensated image again to obtain binary output;
step eight, introducing Gaussian random noise into the constant threshold T to obtain a modulated threshold T', and performing halftone quantization with T';
step nine, weighing image processing effect against efficiency through experiments to determine the number of feedback iterations; experiments showed three iterations to be optimal;
step ten, realizing image simulation with the algorithm of the invention and evaluating the processing effect through objective image-quality parameters: weighted signal-to-noise ratio and structural similarity.
Fig. 3 shows the spatial Hilbert filling curve of the present invention, which is derived from the planar curve and constructed by the following steps:
s20: starting from the vertex at the lower-left corner of the uppermost slice of the input model bounding box, scanning according to a plane Hilbert curve and ensuring that every point in the slice is scanned, so that the scanning path of a single slice forms a continuous polyline.
s21: taking the position directly below the voxel at which the current slice scan ends as the scanning start of the next slice, so that the end of an odd slice layer connects seamlessly to the start of the following even slice layer, and repeating the plane Hilbert curve scan until the input model bounding box is traversed.
Specifically, the spatial Hilbert scanning curve takes the plane Hilbert curve as its building block and continuously traverses all points in all slices from the top starting point, so that the output halftone result has the best visual effect.
The components used in the present invention are all common standard components or components known to those skilled in the art, and the structure and principle thereof are well known to those skilled in the art.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (8)

1. A three-dimensional halftone reconstruction method based on an HVS and a stochastic printer model is characterized by comprising the following steps:
s1: preprocessing input three-dimensional continuous tone data to construct a three-dimensional discrete data set;
s2: introducing a printer model into a traditional three-dimensional error diffusion method, and taking the difference between an input value of a quantization function and a color value of output three-dimensional halftone data as a quantization error fed back in the three-dimensional error diffusion method in order to reduce a dot gain phenomenon;
s3: obtaining binary output b (x, y, z) by adopting a three-dimensional error diffusion method based on linear error enhancement through a printer model;
s4: respectively passing the obtained halftone image b(x, y, z) and the original image f(x, y, z) through the human visual system model HVS to obtain the perceived images b̃(x, y, z) and f̃(x, y, z);
s5: after introducing the human visual model, obtaining the human visual difference D̃(x, y, z) from the halftone image and the original image, which can be expressed as

D̃(x, y, z) = f̃(x, y, z) − b̃(x, y, z);
S6: calculating a feedback coefficient H (u, v) of visual difference according to the pixel gray scale characteristics of the current region, and compensating the visual difference of human eyes for each pixel of the original image through the feedback coefficient;
s7: carrying out three-dimensional error diffusion processing on the compensated image again to obtain binary output;
s8: introducing Gaussian random noise into the constant threshold value T to obtain a modulation threshold value T ', and carrying out halftone quantization on the original image by using T';
s9: determining the feedback times according to the comprehensive image processing effect and efficiency;
s10: and realizing image simulation with a GPU-based ray casting algorithm.
2. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, wherein: the concrete implementation of the printer model in step S2 includes the following steps:
s21: designing a slice testing template of a printer and scanning and outputting by using a scanner;
s22: transmitting the scanned output result to a computer to obtain output printed statistical data through quantization, segmentation and pixel particle detection image processing;
s23: applying random horizontal dot displacement, with adjacent rows displaced in different directions;
s24: adding random vertical dot displacement; the output halftone image is the sum of the tone value in the output unit of the color 3D printer and the dot spread function of the ideal printer.
3. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, wherein: the three-dimensional error diffusion method based on linear error enhancement in the step S3 comprises the following steps:
s31: presetting a false threshold quantizer input value u, and carrying out quantization processing by using the value;
s32: scanning the three-dimensional data set of the input model according to a spatial Hilbert scanning path;
s33: for each voxel point, respectively calculating the real-threshold quantizer input value u(x, y, z) and the false-threshold quantizer input value u_fake(x, y, z), which is linearly enhanced according to the difference between the voxel value and the threshold;
S34: performing threshold processing by using an input value of a false threshold quantizer and outputting binary halftone data;
s35: calculating an error value through the output data of the halftone and the input value u (x, y, z) of the real threshold quantizer, distributing the error to neighborhood points according to a three-dimensional error diffusion filter, and processing the neighborhood points one by one until all input data sets are scanned to obtain an output result of the halftone.
4. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, wherein: the concrete implementation of the spatial Hilbert scanning path in step S32 comprises the following steps:
s321: starting from the vertex at the lower-left corner of the uppermost slice of the input model bounding box, scanning according to a plane Hilbert curve to ensure that every point in the slice is scanned, so that the scanning path of a single slice forms a continuous polyline;
s322: taking the position directly below the voxel at which the current slice scan ends as the scanning start of the next slice, so that the end of an odd slice layer connects exactly to the start of the following even slice layer, and repeating the plane Hilbert curve scan until the input model bounding box is traversed.
5. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, wherein: in step S4 the human visual model is circularly symmetric; in addition to circular symmetry, the Gaussian function has many properties suited to halftone processing, and its inverse Fourier transform is again a Gaussian, so the Gaussian function is used in place of the Nasanen function model and can be expressed as:
H_Gaus(u, v) = exp(−(u² + v²) / (2σ²))
where u and v are frequency-domain coordinates and σ is the spread of the Gaussian curve;
the halftone image b passed through the human visual model can be expressed as:
b′ = IFFT(FFT(b) · H_Gaus)
and the continuous-tone image f can be expressed as:
f′ = IFFT(FFT(f) · H_Gaus)
where IFFT denotes the inverse Fourier transform, FFT the Fourier transform, and H_Gaus the Gaussian function.
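A minimal frequency-domain sketch of the Gaussian HVS filtering described above, assuming images normalized to [0, 1] and an illustrative σ expressed in cycles per sample:

```python
import numpy as np

def hvs_filter(img, sigma=0.1):
    """Filter an image through the Gaussian HVS model
    H_Gaus(u, v) = exp(-(u^2 + v^2) / (2 sigma^2)), applied in the
    frequency domain; sigma is an illustrative value."""
    u = np.fft.fftfreq(img.shape[0])[:, None]  # cycles per sample, axis 0
    v = np.fft.fftfreq(img.shape[1])[None, :]  # cycles per sample, axis 1
    H = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```

Because H_Gaus has unit gain at DC and magnitude at most one elsewhere, the filter preserves the mean tone of the image and can only attenuate its high-frequency content.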
6. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, wherein step S5 comprises the following steps:
S51: processing the continuous-tone image with the three-dimensional error diffusion method based on linear error enhancement, introducing a parameter u that applies a linear false enhancement to the error signal before quantization;
S52: applying the human visual model HVS to the binary output and to the input respectively;
S53: calculating the final feedback visual difference
e = IFFT(FFT(f) · H_Gaus) − IFFT(FFT(b) · H_Gaus)
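The feedback visual difference of claim 6 can be sketched as below: both the continuous-tone input f and the binary output b pass through the same Gaussian stand-in for the HVS, and the per-pixel difference is returned (σ is an illustrative value).

```python
import numpy as np

def feedback_visual_difference(f, b, sigma=0.1):
    """Per-pixel visual difference between the HVS-filtered contone
    input f and the HVS-filtered binary halftone b."""
    u = np.fft.fftfreq(f.shape[0])[:, None]
    v = np.fft.fftfreq(f.shape[1])[None, :]
    H = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))

    def filt(img):
        return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    return filt(f) - filt(b)
```

For a flat 50% input and an ideal checkerboard halftone the visual difference is nearly zero, since the checkerboard's energy lies at the Nyquist frequency where the Gaussian response vanishes.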
7. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, wherein the specific implementation of step S10 comprises the following steps:
S101: preprocessing: measuring the opacity of each color of translucent material used by the 3D printer and assigning it to the voxels of the corresponding color channel;
S102: resampling: first determining a sampling depth d, then performing interpolated sampling at equal intervals of d along the ray emission direction, the color value and opacity at each sampling position being obtained by trilinear interpolation of the eight neighboring points;
S103: calculating diffuse and specular light: shading each voxel with the Blinn-Phong illumination model and computing the color value of each sampling point from the diffuse and specular components;
s104: and color synthesis, wherein the color synthesis is carried out by using a light absorption and emission model, and the numerical integral calculation formula of the color along the light projection path is as follows:
Figure FDA0003327718550000032
where C is the color value finally blended along each ray, C_i and A_i are respectively the color value and opacity at the i-th sampling point of the three-dimensional data traversed by the ray, and n means that each ray is divided into n equal parts.
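The numerical integration of step S104 corresponds to standard front-to-back alpha compositing; a sketch over one ray, assuming the sample lists are ordered from the eye outward:

```python
def composite_ray(colors, alphas):
    """Front-to-back numerical integration of the light absorption and
    emission model: C = sum_i C_i * A_i * prod_{j<i} (1 - A_j),
    where (C_i, A_i) are the interpolated color and opacity at the
    i-th of n samples along the ray."""
    color, transmittance = 0.0, 1.0
    for c_i, a_i in zip(colors, alphas):
        color += transmittance * c_i * a_i  # light emitted at sample i
        transmittance *= 1.0 - a_i          # light absorbed by sample i
    return color
```

A fully opaque first sample hides everything behind it, and fully transparent samples contribute nothing, which matches the product term of the integral.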
8. The HVS and stochastic printer model-based three-dimensional halftone reconstruction method according to claim 1, further comprising a step S11 of objective evaluation with image quality parameters: the weighted signal-to-noise ratio and the structural similarity are used to evaluate the processing effect of the reconstruction method;
(1) calculating the weighted signal-to-noise ratio of the processed halftone image:
The weighted signal-to-noise ratio (WSNR) is the ratio of the HVS-weighted signal energy of the image to the weighted noise energy; in digital halftoning it characterizes how faithfully the halftone image reproduces the original. The larger the WSNR, the smaller the difference between the halftone image and the original image, and the better the reproduction. The calculation formula is:
WSNR = 10 · log₁₀ [ Σ_{u,v,w} |F(u, v, w) · HVS(u, v, w)|² / Σ_{u,v,w} |(F(u, v, w) − B(u, v, w)) · HVS(u, v, w)|² ]
where f(x, y, z) and b(x, y, z) are the voxel values at coordinate (x, y, z) of the input three-dimensional model data and of the halftone processing result respectively, F and B are their discrete Fourier transforms, M, N and L are the length, width and height of the input model bounding box, and HVS is the discrete Fourier transform of the human visual sensitivity function;
(2) calculating the structural similarity of the processed halftone image:
Structural similarity (SSIM) is an evaluation method that measures the similarity between two images. The calculation formula is:
SSIM(f, b) = [(2 · ε_f · ε_b + C₁)(2 · σ_fb + C₂)] / [(ε_f² + ε_b² + C₁)(σ_f² + σ_b² + C₂)]
where ε_f is the mean of the input image, ε_b is the mean of the halftone image, σ_f² is the variance of the input image, σ_b² is the variance of the halftone image, σ_fb is the covariance of the input image and the output image, and C₁ and C₂ are constants approximately equal to 0; the calculation formulas are respectively:
σ_f² = (1 / (M·N)) · Σ_{x,y} (f(x, y) − ε_f)²
σ_b² = (1 / (M·N)) · Σ_{x,y} (b(x, y) − ε_b)²
σ_fb = (1 / (M·N)) · Σ_{x,y} (f(x, y) − ε_f)(b(x, y) − ε_b)
In the three-dimensional halftone evaluation, the structural similarity index is applied by treating each slice of the input model as an input image and the corresponding slice of the output halftone result as an output image; the structural similarity of each slice is computed and the values are averaged to give the structural similarity of the whole data set.
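A sketch of the two evaluation measures, with a Gaussian standing in for the HVS weighting of the WSNR and per-slice global statistics for the SSIM (σ, C₁ and C₂ are illustrative values, not those of the patent):

```python
import numpy as np

def wsnr(f, b, sigma=0.25):
    """Weighted SNR between the contone volume f and halftone b; a
    Gaussian replaces the HVS weighting, frequencies in cycles/voxel."""
    grids = np.meshgrid(*(np.fft.fftfreq(s) for s in f.shape), indexing="ij")
    H = np.exp(-sum(g ** 2 for g in grids) / (2 * sigma ** 2))
    F = np.fft.fftn(f)
    E = np.fft.fftn(f - b)  # noise spectrum
    return 10 * np.log10(np.sum(np.abs(F * H) ** 2) / np.sum(np.abs(E * H) ** 2))

def mean_slice_ssim(f, b, C1=1e-4, C2=9e-4):
    """Slice-wise SSIM averaged over the z axis, using per-slice global
    statistics rather than the windowed variant."""
    vals = []
    for fs, bs in zip(f, b):
        mf, mb = fs.mean(), bs.mean()
        cov = ((fs - mf) * (bs - mb)).mean()
        num = (2 * mf * mb + C1) * (2 * cov + C2)
        den = (mf ** 2 + mb ** 2 + C1) * (fs.var() + bs.var() + C2)
        vals.append(num / den)
    return float(np.mean(vals))
```

Identical volumes give an SSIM of 1, and a smaller reconstruction error yields a larger WSNR, matching the interpretation of both indices given above.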
CN202111268296.1A 2021-10-29 2021-10-29 Three-dimensional mesh tone reconstruction method based on HVS and random printer model Active CN113989436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111268296.1A CN113989436B (en) 2021-10-29 2021-10-29 Three-dimensional mesh tone reconstruction method based on HVS and random printer model


Publications (2)

Publication Number Publication Date
CN113989436A true CN113989436A (en) 2022-01-28
CN113989436B CN113989436B (en) 2024-07-26

Family

ID=79744036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111268296.1A Active CN113989436B (en) 2021-10-29 2021-10-29 Three-dimensional mesh tone reconstruction method based on HVS and random printer model

Country Status (1)

Country Link
CN (1) CN113989436B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5781315A (en) * 1995-11-09 1998-07-14 Fuji Photo Film Co., Ltd. Image processing method for photographic printer
CN103793897A (en) * 2014-01-15 2014-05-14 昆明理工大学 Digital image halftone method based on small-wavelet-domain multi-scale information fusion
CN109493358A (en) * 2018-12-14 2019-03-19 中国船舶重工集团公司第七0七研究所 A kind of error feedback halftoning algorithm based on human vision model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YI, YAOHUA; YU, XIAOQING: "Color error diffusion halftoning method based on image tone and the human visual model", China Printing and Packaging Study, 15 June 2009 (2009-06-15) *
YI, YAOHUA; WANG, XIAO; HE, JINGJING; YANG, SIQI: "Halftone reconstruction method for color 3D printing based on linear error enhancement", Digital Printing, no. 01, 10 February 2019 (2019-02-10) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998407A (en) * 2022-08-01 2022-09-02 湖南华城检测技术有限公司 Digital image three-dimensional texture reconstruction method based on Fourier transform
CN114998407B (en) * 2022-08-01 2022-11-08 湖南华城检测技术有限公司 Digital image three-dimensional texture reconstruction method based on Fourier transform
CN115499556A (en) * 2022-09-19 2022-12-20 浙江工业大学 Digital printing screening method based on machine learning iteration
CN115499556B (en) * 2022-09-19 2024-05-28 浙江工业大学 Digital printing screening method based on machine learning iteration


Similar Documents

Publication Publication Date Title
CN113989436B (en) Three-dimensional mesh tone reconstruction method based on HVS and random printer model
EP0320755B1 (en) Image processing system and method employing combined black and white and gray scale image data
CN106204447A Super-resolution reconstruction method based on total variation and convolutional neural networks
JP4765635B2 (en) High quality halftone processing
US6262745B1 (en) Digital halftoning using prioritized textures
DE69522277T2 (en) Method and device for reducing interference in images halftoned by error diffusion
DE69628771T2 (en) Analytical construction of halftone pixels for a printer with hyper resolution
US5471543A (en) Mixed screen frequencies with image segmentation
Son et al. Local learned dictionaries optimized to edge orientation for inverse halftoning
CN112017263B (en) Intelligent test paper generation method and system based on deep learning
CN103793897B (en) A kind of digital picture halftoning method based on wavelet multi-scale information fusion
Streit et al. Importance driven halftoning
Zhang et al. Image inverse halftoning and descreening: a review
CN115082296B (en) Image generation method based on wavelet domain image generation frame
DE69526158T2 (en) Method and device for image information processing using rasterization and error diffusion
CN114331875B (en) Image bleeding position prediction method in printing process based on countermeasure edge learning
JP2015115957A (en) Binary periodic to multibit aperiodic halftone and resolution conversion
JP2001052156A5 (en) Engraving-style halftone image generation method / device
Gooran et al. 3D surface structures and 3D halftoning
Li et al. Texture-aware error diffusion algorithm for multi-level digital halftoning
CN116148347A (en) Super-resolution imaging method for ultrasonic detection of internal defects of materials
KR100251551B1 (en) Non-casual error diffusion for digital image quantization
CN1173291C (en) Method for using computer to recreate image containing spot noise
JP2015149719A (en) Digital image halftone conversion with selective enhancement
CN117422927B (en) Mammary gland ultrasonic image classification method, system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant