CN110293684A - Dressing printing method, apparatus and system based on three-dimensional printing technology - Google Patents

Dressing printing method, apparatus and system based on three-dimensional printing technology

Info

Publication number
CN110293684A
Authority
CN
China
Prior art keywords
target area
skin
dressing
image
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910477712.5A
Other languages
Chinese (zh)
Inventor
袁晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Comexe Ikang Science And Technology Co Ltd
Original Assignee
Shenzhen Comexe Ikang Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Comexe Ikang Science And Technology Co Ltd
Priority to CN201910477712.5A
Publication of CN110293684A
Legal status: Pending


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B29WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29CSHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30Auxiliary operations or equipment
    • B29C64/386Data acquisition or data processing for additive manufacturing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B33ADDITIVE MANUFACTURING TECHNOLOGY
    • B33YADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00Data acquisition or data processing for additive manufacturing

Landscapes

  • Chemical & Material Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Materials Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a dressing printing method based on three-dimensional printing technology, comprising: obtaining multiple target area images from multiple angles together with depth information for each target area image; converting the target area images to obtain a skin-color binary map, determining the target area contour from the binary map, and extracting bone structure features; performing three-dimensional reconstruction according to the depth information, the target area contour and the bone structure features to obtain a target area three-dimensional model, and extracting skin condition information according to the position of the target area contour in the target area image; and printing a dressing substrate according to the target area three-dimensional model, then printing the dressing on the substrate according to the skin condition information. The invention also discloses a corresponding dressing printing apparatus and system. The technical solution is applicable not only to skin beautification but also to medicating damaged sites: the required dressing is printed according to the actual conditions of the application site, so the application effect is better.

Description

Dressing printing method, apparatus and system based on three-dimensional printing technology
Technical field
The present invention relates to the field of three-dimensional printing technology, and in particular to a dressing printing method, apparatus and system based on three-dimensional printing technology.
Background technique
In daily life, dressings are frequently needed. Their uses fall into two main categories: skin care, and wound protection with drug release. In concrete application scenarios, a wound needs to be wrapped with a dressing containing a therapeutic agent, while in cosmetic care a facial mask containing essence is applied. In these scenarios, the dressing is usually constructed as a layer of filler material tiled on an adhesive substrate, or made to adhere to the body by increasing the viscosity of the filler. However, such dressings share a problem: they come in uniform specifications and can only conform to a body region through their own deformation, so their size often causes them to be applied out of place with insufficient tightness of fit.
Summary of the invention
The main object of the present invention is to propose a dressing printing method based on three-dimensional printing technology, intended to solve the problem that existing dressing structures cannot adapt on demand to the body site to which they are applied.
To achieve the above object, the present invention proposes a dressing printing method based on three-dimensional printing technology, comprising:
obtaining multiple target area images from multiple angles and depth information for each target area image;
converting the target area images to obtain a skin-color binary map, determining the target area contour from the skin-color binary map, and extracting bone structure features;
performing three-dimensional reconstruction according to the depth information, the target area contour and the bone structure features to obtain a target area three-dimensional model, and extracting skin condition information according to the position of the target area contour in the target area image;
printing a dressing substrate according to the target area three-dimensional model, and printing the dressing on the dressing substrate according to the skin condition information.
Preferably, converting the target area image to obtain a skin-color binary map comprises:
converting the target area image from the RGB color space to the YUV color space, the conversion formula from the RGB color space to the YUV color space being:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
where Y denotes luminance and U, V are the two chrominance components, representing color differences;
converting the target area image from the RGB color space to the YIQ color space, the conversion formula from the RGB color space to the YIQ color space being:
Y = 0.299R + 0.587G + 0.114B
I = 0.596R - 0.274G - 0.322B
Q = 0.211R - 0.523G + 0.312B
where Y denotes luminance and I, Q are the two chromatic components;
extracting skin color information in the YUV color space according to the I-component value range set for the target area image in the YIQ color space, and converting the skin color information into the skin-color binary map.
Preferably, the skin condition information includes skin color and skin roughness, and extracting skin condition information according to the position of the target area contour in the target area image comprises:
extracting a position to be detected corresponding to the target area contour from the target area image;
converting the position to be detected from the RGB color space to the HSV color space, and binarizing the S component and the V component in the HSV color space to obtain binarized S-component and V-component data;
determining the skin color according to the binarized S-component and V-component data;
performing morphological dilation on the binarized S-component and V-component data, computing a weighted analysis using angular second moment, entropy and contrast, and determining the skin roughness in combination with the pore size extracted by the Canny operator.
Preferably, the skin condition information further includes the color spot amount, and extracting skin condition information according to the position of the target area contour in the target area image comprises:
extracting a position to be detected corresponding to the target area contour from the target area image;
performing graying and filtering on the position to be detected to obtain an image P;
computing the difference between image P and the grayed image S of the position to be detected to obtain the color spot region, where the color spot region is: area_seban = P - S;
applying preset threshold processing to the obtained color spot region, and calculating the area ratio of the color spot region:
score = Σ area_seban(x, y) / Σ area_mask(i, j)
where score is the output result, area_seban(x, y) is the area of the detected color spot region, (x, y) being the pixel position coordinates within the detected color spot region, and area_mask(i, j) is the area of the detection region, (i, j) being the pixel position coordinates within the detection region containing the spots;
performing stretch processing on the color spot area ratio and outputting a color spot amount score:
determining the color spot amount according to the color spot amount score.
Preferably, extracting bone structure features from the skin-color binary map comprises:
eliminating tiny holes in the skin-color binary map using a morphological closing operation, and eliminating very thin connections and very thin protruding parts in the skin-color binary map using a morphological opening operation;
obtaining the outer contour of the white region from the skin-color binary map, and filtering the contour using preset contour feature parameters;
performing convex hull defect analysis on the filtered image contour to obtain the characteristic parameters of the bone structure features.
Preferably, printing the dressing substrate according to the target area three-dimensional model comprises:
printing the outer contour of the dressing substrate according to the target area three-dimensional model;
printing supporting veins on the outer contour according to the positions of the bone structure features in the target area three-dimensional model.
To achieve the above object, the present invention also proposes a dressing printing apparatus based on three-dimensional printing technology, comprising:
an image acquisition module for obtaining multiple target area images from multiple angles and depth information for each target area image;
a conversion module for converting the target area images to obtain a skin-color binary map;
a feature extraction module for determining the target area contour from the skin-color binary map and extracting bone structure features;
a reconstruction module for performing three-dimensional reconstruction according to the depth information, the target area contour and the bone structure features to obtain a target area three-dimensional model;
a skin condition extraction module for extracting skin condition information according to the position of the target area contour in the target area image;
a printing module for printing a dressing substrate according to the target area three-dimensional model, and printing the dressing on the dressing substrate according to the skin condition information.
Preferably, the conversion module includes:
a YUV conversion unit for converting the target area image from the RGB color space to the YUV color space, the conversion formula from the RGB color space to the YUV color space being:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
where Y denotes luminance and U, V are the two chrominance components, representing color differences;
a YIQ conversion unit for converting the target area image from the RGB color space to the YIQ color space, the conversion formula from the RGB color space to the YIQ color space being:
Y = 0.299R + 0.587G + 0.114B
I = 0.596R - 0.274G - 0.322B
Q = 0.211R - 0.523G + 0.312B
where Y denotes luminance and I, Q are the two chromatic components;
a skin color detection unit for extracting skin color information in the YUV color space according to the I-component value range set for the target area image in the YIQ color space;
a binarization unit for converting the skin color information into the skin-color binary map.
Preferably, the printing module includes a substrate printing unit for printing the outer contour of the dressing substrate according to the target area three-dimensional model, and for printing supporting veins on the outer contour according to the positions of the bone structure features in the target area three-dimensional model.
To achieve the above object, the present invention also proposes a dressing printing system based on three-dimensional printing technology, including an image acquisition device and a three-dimensional printer. The image acquisition device acquires images of the human body and processes the acquired images; the three-dimensional printer receives the printing parameters output by the image acquisition device and prints the dressing according to those parameters.
In the dressing printing method based on three-dimensional printing technology provided by the present invention, image recognition obtains data on the contour, bone structure and skin condition of the body site to be treated, and printing parameters are determined from these data, so that different locations of the site are printed zone by zone. The resulting dressing fits the application site closely, and different dressing materials are used for different skin areas, meeting differentiated requirements. In practical application, the technical solution is therefore suitable not only for skin beautification but also for medicating damaged sites: the required dressing is printed according to the actual conditions of the application site, and the application effect is better.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of an embodiment of the dressing printing system of the invention;
Fig. 2 is a flow diagram of an embodiment of the dressing printing method of the invention;
Fig. 3 is a functional block diagram of an embodiment of the dressing printing apparatus of the invention.
Specific embodiment
Embodiments of the present invention are described in detail below, with examples shown in the accompanying drawings, where the same reference numerals throughout denote the same elements or elements with the same function. The embodiments described with reference to the drawings are exemplary, intended to explain the present invention, and are not to be construed as limiting it. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
To solve the above technical problems, the present invention proposes a dressing printing method based on three-dimensional printing technology that generates printing parameters according to the actual conditions of the body site to be treated, thereby providing a customized printing service and guaranteeing the effect of the dressing. The present invention also proposes a dressing printing system based on three-dimensional printing technology, which provides the hardware foundation for carrying out the method.
Referring to Fig. 1, the dressing printing system 10 includes an image acquisition device 11 and a three-dimensional printer 12. The image acquisition device 11 acquires images of the human body and processes the acquired images; the three-dimensional printer 12 receives the printing parameters output by the image acquisition device 11 and prints the dressing according to the printing parameters. Specifically, the image acquisition device 11 may be a combination of a computer and a camera, or an intelligent terminal with a camera function such as a mobile phone or tablet computer; it can acquire images of the human body and run corresponding image processing software (an app) to process the acquired images. The three-dimensional printer 12 forms thin layers from raw material and obtains a three-dimensional object by stacking them layer by layer; its concrete composition and working principle follow the prior art and are not repeated here.
Referring to Fig. 2, the dressing printing method includes:
Step S10: obtain multiple target area images from multiple angles and depth information for each target area image.
Specifically, obtaining multiple target area images from multiple angles reflects the true condition of the target area more accurately; for example, when the target area is the whole face, at least a frontal image and left and right side images of the face are obtained. In one possible implementation, the target area images are obtained by a camera connected to a computer, and each target area image is an RGB image. Also in one possible implementation, the depth information is obtained by a structured light sensor. The depth information indicates the distance information of a depth image, i.e. an image that takes as pixel values the distance (depth) from the image acquisition device to each point in the scene, so the geometric shape of the visible surface of the scene can be read directly from the depth information. In this embodiment, obtaining the depth information includes the following steps:
projecting structured light onto the target area;
capturing the structured light image modulated by the target area;
demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth information corresponding to the two-dimensional target area image.
In this example, the depth image acquisition assembly includes a structured light projector and a structured light camera (not shown). The structured light projector projects structured light onto the target area; the structured light camera captures the structured light image modulated by the target area and demodulates the phase information corresponding to each pixel of the structured light image to obtain the depth information.
Specifically, after the structured light projector projects structured light of a certain pattern onto the target area, a structured light image modulated by the target area forms on its surface. The structured light camera captures the modulated structured light image, which is then demodulated to obtain the depth information. The structured light pattern may be laser stripes, Gray code, sinusoidal fringes, non-uniform speckle, and so on.
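The phase demodulation step can be sketched for the common four-step phase-shifting scheme with sinusoidal fringes; the patent does not fix a particular demodulation algorithm, so the fringe model, the step count and the scalar `geometry_scale` factor below are assumptions:

```python
import numpy as np

def demodulate_four_step(i1, i2, i3, i4):
    """Recover the wrapped phase from four fringe images whose projected
    sinusoid is shifted by 90 degrees between shots:
    i_k = A + B * cos(phase + k * pi / 2)."""
    return np.arctan2(i4 - i2, i1 - i3)

def phase_to_depth(phase, reference_phase, geometry_scale=1.0):
    """Turn the phase deviation from a flat reference plane into relative
    depth; geometry_scale stands in for the real projector/camera
    triangulation geometry (assumed known from calibration)."""
    return geometry_scale * (phase - reference_phase)

if __name__ == "__main__":
    # Synthetic check: a known phase of 0.5 rad is recovered.
    phi = 0.5
    fringes = [1.0 + 0.8 * np.cos(phi + k * np.pi / 2) for k in range(4)]
    print(round(float(demodulate_four_step(*fringes)), 6))  # 0.5
```

The arctangent of the two quadrature differences cancels the unknown offset A and amplitude B, which is why four shifted shots suffice per pixel.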
Step S20: convert the target area images to obtain a skin-color binary map.
In general, the obtained target area images are RGB-format data whose volume is too large to be convenient for post-processing. The target area images are therefore converted into a form convenient for later processing; here, all target area images are processed and the weighted result is taken as the final output.
Specifically, in a preferred embodiment, converting the target area image to obtain a skin-color binary map includes:
converting the target area image from the RGB color space to the YUV color space, the conversion formula from the RGB color space to the YUV color space being:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
where Y denotes luminance and U, V are the two chrominance components, representing color differences;
converting the target area image from the RGB color space to the YIQ color space, the conversion formula from the RGB color space to the YIQ color space being:
Y = 0.299R + 0.587G + 0.114B
I = 0.596R - 0.274G - 0.322B
Q = 0.211R - 0.523G + 0.312B
where Y denotes luminance and I, Q are the two chromatic components;
extracting skin color information in the YUV color space according to the I-component value range set for the target area image in the YIQ color space, and converting the skin color information into the skin-color binary map.
In the UV plane of the YUV color space, skin tones fall within a relatively large range between red and yellow that covers human skin, so image data converted into this color space is convenient for extracting skin color information. Further, on this basis, the target area image is converted from the RGB color space to the YIQ color space, where Y ranges over [0, 255] and I and Q range over [-152, 152]. Statistics over a large number of skin color training samples show that in the YIQ color space the luminance Y of skin is mainly distributed in [80, 230] and the I component is mainly distributed in [-10, 100]. Setting, for example, Imin = -15 and Imax = 100 and combining this I-component condition with the skin color detection result in the YUV color space, the selected pixels are skin color points, yielding a more accurate skin color region.
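The combined YUV/YIQ segmentation can be sketched as follows. The conversion matrices are the standard full-range RGB-to-YUV and RGB-to-YIQ forms, and the I range [-15, 100] follows the thresholds quoted in the text; the U/V chroma bounds are illustrative assumptions, since the patent says only that skin tones sit between red and yellow in the UV plane:

```python
import numpy as np

# Standard full-range conversion matrices (RGB components in [0, 255]).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
RGB2YIQ = np.array([[ 0.299,  0.587,  0.114],
                    [ 0.596, -0.274, -0.322],
                    [ 0.211, -0.523,  0.312]])

def skin_binary_map(rgb):
    """Return a uint8 mask: 255 where a pixel falls inside both the
    illustrative YUV chroma bounds and the YIQ I-component range."""
    flat = rgb.reshape(-1, 3).astype(np.float64)
    yuv = flat @ RGB2YUV.T
    yiq = flat @ RGB2YIQ.T
    u, v = yuv[:, 1], yuv[:, 2]
    i = yiq[:, 1]
    # UV bounds for reddish-yellow skin tones are an assumption;
    # the I-component condition [-15, 100] is taken from the text.
    mask = (u > -40) & (u < 10) & (v > 5) & (v < 60) & (i >= -15) & (i <= 100)
    return (mask.reshape(rgb.shape[:2]) * 255).astype(np.uint8)
```

A warm skin tone such as RGB (220, 180, 150) passes both tests, while saturated primaries such as pure blue fall outside the chroma bounds and map to background.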
Step S30: determine the target area contour from the skin-color binary map, and extract bone structure features.
In the three-dimensional printing of the dressing, the target area contour determines the basic framework of the dressing, while the bone structure features determine the details of dressing printing. Here, the bone structure features mainly reflect the shape of the skeleton as it shows on the skin. The binarized skin color image shows contour features clearly; to obtain bone structure features of higher accuracy, one feasible process is described in detail below:
eliminating tiny holes in the skin-color binary map using a morphological closing operation, and eliminating very thin connections and very thin protruding parts in the skin-color binary map using a morphological opening operation;
obtaining the outer contour of the white region from the skin-color binary map, and filtering the contour using preset contour feature parameters;
performing convex hull defect analysis on the filtered image contour to obtain the characteristic parameters of the bone structure features.
In the above steps, the preset contour feature parameters include size, the aspect ratio of the contour's bounding rectangle, the ratio of contour area to convex hull area, and so on, while the characteristic parameters of the bone structure features include position, length, angle information, and so on.
Taking the face as an example, these bone distribution data accurately indicate the distribution of facial bones within the target area contour, and using them as the basis for dressing printing yields a dressing that fully fits the human skin.
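The closing-then-opening clean-up of the binary map can be sketched with plain NumPy binary morphology; the 3x3 square structuring element is an assumption, and production code would normally use an image library's morphology routines instead:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element
    (pixels outside the image are treated as background)."""
    pad = k // 2
    h, w = mask.shape
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(mask, k=3):
    """Binary erosion via duality: erode(m) = NOT dilate(NOT m)."""
    return 1 - dilate(1 - mask, k)

def close_then_open(mask, k=3):
    """Closing (dilate then erode) fills tiny holes; opening
    (erode then dilate) then removes very thin connections and
    protrusions, as described for the skin-color binary map."""
    closed = erode(dilate(mask, k), k)
    return dilate(erode(closed, k), k)
```

On a 0/1 mask this fills a one-pixel hole inside a solid region while deleting an isolated one-pixel speck, which is exactly the clean-up the contour step relies on.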
Step S40: perform three-dimensional reconstruction according to the depth information, the target area contour and the bone structure features to obtain a target area three-dimensional model, and extract skin condition information according to the position of the target area contour in the target area image.
Specifically, the three-dimensional reconstruction assigns depth information and two-dimensional information to reference points, and the reconstruction yields the target area three-dimensional model, a three-dimensional stereo model that fully restores the target area.
Depending on the application scenario, the ways of reconstructing the target area three-dimensional model include, but are not limited to, the following:
In one possible implementation, key point recognition is performed on each target area image. Using techniques such as pixel matching, the relative position of each positioning key point in three-dimensional space is determined from its depth information and its planar distances in the target area image, i.e. the x-axis and y-axis distances in two-dimensional space; adjacent positioning key points are then connected according to these relative positions to generate a three-dimensional framework of the target area. The key points are feature points on the target area; for a face they may include points on the eyes, nose, forehead, corners of the mouth and cheeks. The positioning key points are the ones most relevant to the user's facial contour, corresponding to positions on the face where the depth information changes markedly, such as the tip of the nose, points on the wings of the nose, the corners of the eyes and the corners of the mouth. The three-dimensional framework of the target area can thus be constructed from the positioning key points.
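Assigning depth and two-dimensional information to the positioning key points amounts to back-projecting them into 3-D; a minimal pinhole-camera sketch follows, in which the intrinsic parameters fx, fy, cx, cy are placeholder values, not values from the patent:

```python
import numpy as np

def keypoints_to_3d(keypoints, depth_map, fx=500.0, fy=500.0,
                    cx=160.0, cy=120.0):
    """Back-project 2-D keypoints (u, v) into camera-space (x, y, z)
    using each point's depth; fx, fy, cx, cy are assumed intrinsics."""
    pts = []
    for u, v in keypoints:
        z = float(depth_map[v, u])
        pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(pts)

def framework_edges(points_3d):
    """Connect consecutive keypoints into a simple wireframe: a toy
    stand-in for 'connect adjacent positioning key points'."""
    return [(i, i + 1) for i in range(len(points_3d) - 1)]
```

A point at the principal point maps to (0, 0, z); a point fx pixels to its right at depth z maps to x = z, which is the usual similar-triangles relation of the pinhole model.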
In another possible implementation, target area images of higher clarity are selected as raw data and feature point positioning is performed. The target area angle is roughly estimated from the feature positioning result, a coarse three-dimensional deformation model is established according to that angle and the contour, the feature points are adjusted onto the three-dimensional deformation model at the same scale by translation and scaling operations, and the coordinate information of the points corresponding to the feature points is extracted to form a sparse three-dimensional deformation model.
In turn, according to the rough estimate of the target area angle and the sparse three-dimensional deformation model, iterative face three-dimensional reconstruction is carried out with a particle swarm optimization algorithm to obtain a three-dimensional geometric model. After the geometric model is obtained, the skin color information in the input target area image is mapped onto it by texture pasting, yielding the complete target area three-dimensional model.
In this embodiment, the skin condition information includes skin color and skin roughness, and the aforementioned extraction of skin condition information according to the position of the target area contour in the target area image comprises:
extracting a position to be detected corresponding to the target area contour from the target area image;
converting the position to be detected from the RGB color space to the HSV color space, and binarizing the S component and the V component in the HSV color space to obtain binarized S-component and V-component data;
determining the skin color according to the binarized S-component and V-component data;
performing morphological dilation on the binarized S-component and V-component data, computing a weighted analysis using angular second moment, entropy and contrast, and determining the skin roughness in combination with the pore size extracted by the Canny operator.
For the target area image acquired by the camera, the S-space and V-space of the image are binarized in the HSV color space, separating the luminance information of the image from its color and dividing light and dark regions; the maximum between-class variance method (Otsu's method) then separates the background and foreground regions of the image. Through these two methods, a brightly colored image can be converted into a corresponding binary black-and-white image, so that the subsequent analysis of pigment and oil content on the skin surface image can proceed.
From the grayscale data of the image, the Canny operator finds the pixel regions where the gray level changes sharply; such regions are with high probability the edge regions of pores, from which the area and distribution information of the corresponding pore regions can be extracted. The degree of skin roughness is analyzed mainly through the texture features of the skin image: a gray-level co-occurrence matrix is used to analyze the joint probability distribution of two gray-scale pixels at offset δ = (Δx, Δy), revealing the gray correlation between two pixels (x1, y1) and (x2, y2). Even frequently changing texture can in this way be quantified into the texture features reflected by the image.
Following the analysis above, the smoothed image is converted into HSV space, where S indicates saturation and V indicates brightness: the closer S is to 1, the more saturated the color, and the closer V is to 1, the brighter the color. Through simple logical operations, light colors represent oil content and dark colors represent pigment. From the area ratio each occupies on the face, the calculated value is mapped to the corresponding indication range to determine the index of the color feature.
Since the texture veins of human skin do not change greatly over a short time, change over time is mainly embodied in the thickness of the texture veins. This scheme therefore performs morphological dilation on the binarized image data. After dilation, the texture pixel regions are expanded, yielding additional observation data: if dilation is performed twice, two texture images are obtained, and each is compared with the image of the corresponding dilation count in the image library. Angular second moment (ASM), entropy (ENT) and contrast (CON) are combined in a weighted analysis, the pore size extracted by the Canny operator is further taken into account, and the roughness grade is finally obtained.
The skin color can be expressed as an HSB value: the larger the HSB value, the better (more delicate) the skin color, so skin color grades can be determined from set HSB numerical ranges and then used as a qualitative index, for example: very fair, fairly fair, average, darker, very dark. Skin roughness grades can likewise be determined from the calculation result, for example: very smooth, smooth, fairly smooth, fairly rough, very rough.
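The weighted ASM/entropy/contrast analysis can be sketched with a small gray-level co-occurrence matrix. The offset, the quantisation to 8 gray levels and the weights are illustrative assumptions (the patent gives no concrete values), and the pore-size term from the Canny step is omitted for brevity:

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Joint probability distribution of gray-level pairs at offset
    (dx, dy), after quantising the image to `levels` gray levels."""
    g = (gray.astype(np.int64) * levels // 256).clip(0, levels - 1)
    h, w = g.shape
    a = g[:h - dy, :w - dx].ravel()
    b = g[dy:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)
    return m / m.sum()

def roughness_score(gray, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of (1 - ASM), entropy and contrast;
    higher values indicate a rougher texture. Weights are assumed."""
    p = glcm(gray)
    asm = float((p ** 2).sum())
    nz = p[p > 0]
    entropy = float(-(nz * np.log(nz)).sum())
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    w_asm, w_ent, w_con = weights
    return w_asm * (1 - asm) + w_ent * entropy + w_con * contrast
```

A uniform patch concentrates all co-occurrence mass in one cell (ASM = 1, entropy and contrast 0), so it scores 0, while a high-frequency checkerboard scores much higher; a real system would map such scores onto the five roughness grades above.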
Further, the skin condition information also includes the color spot amount, and the above extraction of skin condition information according to the position of the target area contour in the target area image includes:
extracting a position to be detected corresponding to the target area contour from the target area image;
performing graying and filtering on the position to be detected to obtain an image P;
computing the difference between image P and the grayed image S of the position to be detected to obtain the color spot region, where the color spot region is: area_seban = P - S. After the image of the position to be detected is extracted, graying it yields image S; image P differs from image S by one more filtering step;
applying preset threshold processing to the obtained color spot region, and calculating the area ratio of the color spot region:
score = Σ area_seban(x, y) / Σ area_mask(i, j)
where score is the output result, area_seban(x, y) is the area of the detected color spot region, (x, y) being the pixel position coordinates within the detected color spot region, and area_mask(i, j) is the area of the detection region, (i, j) being the pixel position coordinates within the detection region containing the spots;
stretching the color spot region area ratio to output a color spot amount score;
determining the color spot amount according to the color spot amount score.
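The difference, threshold and area-ratio pipeline in the steps above can be sketched as follows; the box-filter size, the threshold value and the linear 0-100 stretch are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spot_score(gray, mask, blur_k=5, thresh=10.0):
    """gray: uint8 patch (image S); mask: boolean detection region.
    Returns a color-spot amount score in [0, 100]."""
    pad = blur_k // 2
    padded = np.pad(gray.astype(np.float64), pad, mode='edge')
    # Image P: the gray patch after one extra (box) filtering step
    P = sliding_window_view(padded, (blur_k, blur_k)).mean(axis=(-1, -2))
    diff = np.abs(P - gray)                        # area_seban = P - S
    spots = (diff > thresh) & mask                 # preset threshold inside the mask
    ratio = spots.sum() / max(int(mask.sum()), 1)  # color spot region area ratio
    return min(100.0, 100.0 * ratio)               # stretch to a 0-100 score
```

A uniform patch scores 0, because the box-filtered image P then equals S exactly; a dark spot on a bright patch produces a large |P - S| at the spot pixels.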
First, the pits in the skin area image caused by color spots are obtained using a watershed-like method, and the color spot area ratio, and hence the color spot amount, is then obtained by calculating the sum of their cross-sectional areas. In this embodiment, the detection considers both dark spots (blackheads) and light spots (types close to the skin color, such as acne). The RGB color space is first converted to the HSI color space, and the H (hue), S (saturation) and I (intensity) components of the image are projected onto the X and Y axes to confirm the approximate positions of the spots in the image; adaptive detection is then performed according to the characteristics of dark spots and light spots, and morphological erosion and dilation algorithms are applied to filter and correct the detection results, so as to assess the color spot content.
Therefore, color spot grades can be divided according to the resulting score, for example: no color spots, a few color spots, moderate color spots, many color spots, a great many color spots.
Step S50: printing a dressing substrate according to the target area three-dimensional model, and printing a dressing on the dressing substrate according to the skin condition information.
Based on the results of the multiple image-processing steps above, the dressing substrate can be printed according to the target area three-dimensional model. Taking a facial mask as an example, the dressing substrate may be a soft non-metallic material such as non-woven fabric or nylon cloth. The molded substrate matches the target area three-dimensional model in shape and size, so when applied to the specific body part it will not wrinkle, arch, stretch or squeeze, guaranteeing a good basis for fitting. Moreover, the target area three-dimensional model obtained through image recognition can accurately avoid the positions that do not need to be covered, such as the eyes, eyebrows, nostrils and mouth; likewise, for a dressing intended for an injured site, the wound position can be accurately recognized. In one possible implementation, the dressing substrate is printed by the following steps:
printing the outer contour of the dressing substrate according to the target area three-dimensional model;
printing support veins on the outer contour according to the positions of the bony structure features in the target area three-dimensional model.
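As a sketch of these two steps, the substrate outline and vein layout can be represented as boolean rasters of the flattened print area; the grid pitch and the raster representation are illustrative assumptions, since the patent does not specify a vein geometry.

```python
import numpy as np

def vein_mask(outer_mask, bone_mask, pitch=8):
    """Lay a grid of support veins inside the substrate outline,
    keeping them off bony-structure pixels."""
    grid = np.zeros_like(outer_mask, dtype=bool)
    grid[::pitch, :] = True          # horizontal vein lines
    grid[:, ::pitch] = True          # vertical vein lines
    return grid & outer_mask & ~bone_mask

def dressing_mask(outer_mask, bone_mask, pitch=8):
    """Dressing is printed in the spaces delimited between the veins."""
    return outer_mask & ~vein_mask(outer_mask, bone_mask, pitch)
```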
In practical applications, the support veins are printed from a highly elastic gel, such as a hydrogel or a photo-crosslinked elastin-polypeptide matrix gel, and are routed to avoid the bony structures, thereby guaranteeing the degree of fit during application. The dressing itself mainly uses materials with therapeutic effects, selected according to the skin condition to provide one or more beneficial ingredients such as whitening, moisturizing, firming, oil control, freckle removal or acne-mark removal. On a dressing substrate provided with support veins, the dressing can be printed in the spaces delimited between the veins.
The contour, bony structure and skin condition data of the site to be covered are obtained through image recognition technology, and the print parameters are determined from these data, so that different positions of the site are printed region by region. The resulting dressing therefore fits the target site closely, and different dressings are used for different skin areas to meet differentiated requirements. In practical applications, the technical solution provided above is thus applicable not only to skin beautification but also to medication of injured sites: the required dressing is printed according to the actual conditions of the site to be covered, and the application effect is better.
Further, the target area three-dimensional model can also be partitioned into blocks, for example divided into several standard blocks according to a preset standard, each standard block carrying its own attribute information, such as bony structure features and skin condition information, so that the printed materials are configured and the printing process is generated according to the actual conditions of each block.
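A minimal sketch of such block-wise attributes and material configuration follows; the field names, grade strings and the material mapping are hypothetical, since the patent leaves the block schema open.

```python
from dataclasses import dataclass

@dataclass
class StandardBlock:
    """One standard block of the partitioned target-area model."""
    bony: bool        # overlaps a bony structure feature
    roughness: str    # e.g. "smooth" ... "very rough"
    spots: str        # e.g. "no color spots" ... "a great many color spots"

def materials_for(block):
    """Configure printed materials from a block's attributes."""
    mats = []
    if block.bony:
        mats.append("high-elasticity gel")        # support veins route around bone
    if block.spots != "no color spots":
        mats.append("freckle-removal ingredient")
    if block.roughness in ("fairly rough", "very rough"):
        mats.append("moisturizing ingredient")
    return mats or ["base dressing"]
```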
Referring to Fig. 3, the present invention also provides a dressing printing apparatus based on three-dimensional printing technology. The dressing printing apparatus 20 includes:
an image capture module 21, for obtaining multiple target area images from multiple angles and the depth information of each target area image;
a conversion module 22, for converting the target area images to obtain a skin-color binary map;
a feature extraction module 23, for determining the target area contour from the skin-color binary map and extracting bony structure features;
a reconstruction module 24, for performing three-dimensional reconstruction according to the depth information, the target area contour and the bony structure features to obtain a target area three-dimensional model;
a skin condition extraction module 25, for extracting skin condition information according to the position of the target area contour in the target area image;
a print module 26, for printing a dressing substrate according to the target area three-dimensional model, and printing a dressing on the dressing substrate according to the dressing print parameters.
Specifically, the conversion module includes:
a YUV conversion unit, for converting the target area image from the RGB color space to the YUV color space, the conversion formula from the RGB color space to the YUV color space being:
Y = 0.299R + 0.587G + 0.114B, U = -0.147R - 0.289G + 0.436B, V = 0.615R - 0.515G - 0.100B,
where Y indicates brightness, and U and V are two chrominance components indicating color difference;
a YIQ conversion unit, for converting the target area image from the RGB color space to the YIQ color space, the conversion formula from the RGB color space to the YIQ color space being:
Y = 0.299R + 0.587G + 0.114B, I = 0.596R - 0.274G - 0.322B, Q = 0.211R - 0.523G + 0.312B,
where Y indicates brightness, and I and Q are two chromatic components;
a skin-color extraction unit, for extracting skin-color information in the YUV color space according to an I-component value range set for the target area image in the YIQ color space;
a binarization unit, for converting the skin-color information into a skin-color binary map.
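A sketch of this YUV/YIQ skin segmentation, using the standard BT.601 YUV and NTSC YIQ chrominance coefficients; the I-component range and the U/V box below are illustrative assumptions, since the patent does not publish its thresholds.

```python
import numpy as np

def skin_binary_map(rgb, i_range=(20.0, 90.0),
                    u_range=(-30.0, 5.0), v_range=(8.0, 45.0)):
    """rgb: HxWx3 uint8 image; returns a 0/255 skin-color binary map."""
    r, g, b = (rgb[..., k].astype(np.float64) for k in range(3))
    # RGB -> YUV chrominance (BT.601-style coefficients)
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v =  0.615 * r - 0.515 * g - 0.100 * b
    # RGB -> YIQ in-phase component (NTSC coefficients)
    i =  0.596 * r - 0.274 * g - 0.322 * b
    skin = ((i_range[0] <= i) & (i <= i_range[1]) &
            (u_range[0] <= u) & (u <= u_range[1]) &
            (v_range[0] <= v) & (v <= v_range[1]))
    return skin.astype(np.uint8) * 255
```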
Specifically, the print module includes a substrate print unit, the substrate print unit being configured to print the outer contour of the dressing substrate according to the target area three-dimensional model, and to print support veins on the outer contour according to the positions of the bony structure features in the target area three-dimensional model.
The working principles and effects of each functional module in all the above embodiments of the dressing printing apparatus can refer to the implementation processes of the dressing printing method embodiments described above, and are not repeated here.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program realizes the implementation processes of the dressing printing method shown in Fig. 1 and of the other dressing printing method embodiments, which are not repeated here.
The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Those of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be realized by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Professional technicians may use different methods to achieve the described functions for each specific application, but such realization shall not be considered to exceed the scope of the present invention.
The technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The above are only some or preferred embodiments of the present invention, and neither the text nor the drawings can therefore limit the scope of protection of the invention. All equivalent structural transformations made under the overall concept of the invention using the contents of the description and drawings, or direct/indirect applications in other related technical fields, are included in the scope of protection of the present invention.

Claims (10)

1. A dressing printing method based on three-dimensional printing technology, characterized by comprising:
obtaining multiple target area images from multiple angles and the depth information of each target area image;
converting the target area images to obtain a skin-color binary map, determining a target area contour from the skin-color binary map, and extracting bony structure features;
performing three-dimensional reconstruction according to the depth information, the target area contour and the bony structure features to obtain a target area three-dimensional model, and extracting skin condition information according to the position of the target area contour in the target area image;
printing a dressing substrate according to the target area three-dimensional model, and printing a dressing on the dressing substrate according to the skin condition information.
2. The dressing printing method according to claim 1, characterized in that converting the target area image to obtain a skin-color binary map comprises:
converting the target area image from the RGB color space to the YUV color space, the conversion formula from the RGB color space to the YUV color space being:
Y = 0.299R + 0.587G + 0.114B, U = -0.147R - 0.289G + 0.436B, V = 0.615R - 0.515G - 0.100B,
where Y indicates brightness, and U and V are two chrominance components indicating color difference;
converting the target area image from the RGB color space to the YIQ color space, the conversion formula from the RGB color space to the YIQ color space being:
Y = 0.299R + 0.587G + 0.114B, I = 0.596R - 0.274G - 0.322B, Q = 0.211R - 0.523G + 0.312B,
where Y indicates brightness, and I and Q are two chromatic components;
extracting skin-color information in the YUV color space according to an I-component value range set for the target area image in the YIQ color space, and converting the skin-color information into a skin-color binary map.
3. The dressing printing method according to claim 1, characterized in that the skin condition information comprises skin tone and skin roughness, and extracting the skin condition information according to the position of the target area contour in the target area image comprises:
extracting a position to be detected corresponding to the target area contour from the target area image;
converting the position to be detected from the RGB color space to the HSV color space, and binarizing the S component and the V component in the HSV color space to obtain binarized data of the S component and the V component;
determining the skin tone according to the binarized data of the S component and the V component;
performing morphological dilation on the binarized data of the S component and the V component, performing a weighted analysis using angular second moment, entropy and contrast, and determining the skin roughness in combination with the pore sizes extracted by the Canny operator.
4. The dressing printing method according to claim 1 or 3, characterized in that the skin condition information further comprises a color spot amount, and extracting the skin condition information according to the position of the target area contour in the target area image comprises:
extracting a position to be detected corresponding to the target area contour from the target area image;
performing gray-scale conversion and filtering on the position to be detected to obtain an image P;
calculating the difference between image P and the gray-scale image S of the position to be detected to obtain the color spot region, where the color spot region is: area_seban = P - S;
applying a preset threshold to the obtained color spot region and calculating the color spot region area ratio: score = Σ_(x,y) area_seban(x, y) / Σ_(i,j) area_mask(i, j),
where score is the output result, area_seban(x, y) is the area of the detected color spot region, (x, y) are the pixel position coordinates within the detected color spot region, area_mask(i, j) is the area of the detection region, and (i, j) are the pixel position coordinates within the detection region containing the color spots;
stretching the color spot region area ratio to output a color spot amount score;
determining the color spot amount according to the color spot amount score.
5. The dressing printing method according to claim 1, characterized in that extracting the bony structure features from the skin-color binary map comprises:
eliminating minuscule holes in the skin-color binary map using a morphological closing operation, and eliminating very thin connections or very thin protruding parts in the skin-color binary map using a morphological opening operation;
obtaining the outer contour of the white area based on the skin-color binary map, and filtering the contour using preset contour feature parameters;
performing convex hull defect analysis on the filtered image contour to obtain the characteristic parameters of the bony structure features.
6. The dressing printing method according to claim 1, characterized in that printing the dressing substrate according to the target area three-dimensional model comprises:
printing the outer contour of the dressing substrate according to the target area three-dimensional model;
printing support veins on the outer contour according to the positions of the bony structure features in the target area three-dimensional model.
7. A dressing printing apparatus based on three-dimensional printing technology, characterized by comprising:
an image capture module, for obtaining multiple target area images from multiple angles and the depth information of each target area image;
a conversion module, for converting the target area images to obtain a skin-color binary map;
a feature extraction module, for determining a target area contour from the skin-color binary map and extracting bony structure features;
a reconstruction module, for performing three-dimensional reconstruction according to the depth information, the target area contour and the bony structure features to obtain a target area three-dimensional model;
a skin condition extraction module, for extracting skin condition information according to the position of the target area contour in the target area image;
a print module, for printing a dressing substrate according to the target area three-dimensional model, and printing a dressing on the dressing substrate according to the skin condition information.
8. The dressing printing apparatus according to claim 7, characterized in that the conversion module comprises:
a YUV conversion unit, for converting the target area image from the RGB color space to the YUV color space, the conversion formula from the RGB color space to the YUV color space being:
Y = 0.299R + 0.587G + 0.114B, U = -0.147R - 0.289G + 0.436B, V = 0.615R - 0.515G - 0.100B,
where Y indicates brightness, and U and V are two chrominance components indicating color difference;
a YIQ conversion unit, for converting the target area image from the RGB color space to the YIQ color space, the conversion formula from the RGB color space to the YIQ color space being:
Y = 0.299R + 0.587G + 0.114B, I = 0.596R - 0.274G - 0.322B, Q = 0.211R - 0.523G + 0.312B,
where Y indicates brightness, and I and Q are two chromatic components;
a skin-color extraction unit, for extracting skin-color information in the YUV color space according to an I-component value range set for the target area image in the YIQ color space;
a binarization unit, for converting the skin-color information into a skin-color binary map.
9. The dressing printing apparatus according to claim 7, characterized in that the print module comprises a substrate print unit, the substrate print unit being configured to print the outer contour of the dressing substrate according to the target area three-dimensional model, and to print support veins on the outer contour according to the positions of the bony structure features in the target area three-dimensional model.
10. A dressing printing system based on three-dimensional printing technology, characterized by comprising an image capture device and a three-dimensional printer, wherein the image capture device is used to acquire images of the human body and process the acquired images, and the three-dimensional printer is used to receive the print parameters output by the image capture device and print the dressing according to the print parameters.
CN201910477712.5A 2019-06-03 2019-06-03 Dressing Method of printing, apparatus and system based on three-dimensional printing technology Pending CN110293684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910477712.5A CN110293684A (en) 2019-06-03 2019-06-03 Dressing Method of printing, apparatus and system based on three-dimensional printing technology


Publications (1)

Publication Number Publication Date
CN110293684A true CN110293684A (en) 2019-10-01

Family

ID=68027512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910477712.5A Pending CN110293684A (en) 2019-06-03 2019-06-03 Dressing Method of printing, apparatus and system based on three-dimensional printing technology

Country Status (1)

Country Link
CN (1) CN110293684A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932847A (en) * 2006-10-12 2007-03-21 上海交通大学 Method for detecting colour image human face under complex background
CN101866497A (en) * 2010-06-18 2010-10-20 北京交通大学 Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system
CN104599314A (en) * 2014-06-12 2015-05-06 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
CN104997645A (en) * 2015-09-01 2015-10-28 河北润达环保科技有限公司 Method for making makeup mask by 3D (three dimensional) printing
CN106529429A (en) * 2016-10-27 2017-03-22 中国计量大学 Image recognition-based facial skin analysis system
CN108038438A (en) * 2017-12-06 2018-05-15 广东世纪晟科技有限公司 Multi-source face image joint feature extraction method based on singular value decomposition


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862080A (en) * 2020-07-31 2020-10-30 易思维(杭州)科技有限公司 Deep learning defect identification method based on multi-feature fusion
CN111862080B (en) * 2020-07-31 2021-05-18 易思维(杭州)科技有限公司 Deep learning defect identification method based on multi-feature fusion
CN117067796A (en) * 2023-10-17 2023-11-17 原子高科股份有限公司 Method and system for manufacturing radionuclide applicator, applicator and electronic equipment
CN117067796B (en) * 2023-10-17 2024-01-16 原子高科股份有限公司 Method and system for manufacturing radionuclide applicator, applicator and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191001