WO2023214748A1 - Method and device for measuring color in image, and computer-readable medium - Google Patents

Method and device for measuring color in image, and computer-readable medium Download PDF

Info

Publication number
WO2023214748A1
WO2023214748A1 (PCT/KR2023/005838)
Authority
WO
WIPO (PCT)
Prior art keywords
information
color
image
patch
light
Prior art date
Application number
PCT/KR2023/005838
Other languages
French (fr)
Korean (ko)
Inventor
김영민
이준호
장호준
양용진
박명삼
김은정
김정은
Original Assignee
서울대학교 산학협력단 (Seoul National University R&DB Foundation)
코스맥스 주식회사 (COSMAX Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 서울대학교 산학협력단 (Seoul National University R&DB Foundation) and 코스맥스 주식회사 (COSMAX Co., Ltd.)
Publication of WO2023214748A1 publication Critical patent/WO2023214748A1/en

Links

Images

Classifications

    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
                        • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis
                    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
                        • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
                    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
                        • A61B 5/1032: Determining colour for diagnostic purposes
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00: Computing arrangements based on biological models
                    • G06N 3/02: Neural networks
                        • G06N 3/08: Learning methods
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00: Image enhancement or restoration
                    • G06T 5/77: Retouching; Inpainting; Scratch removal
                • G06T 7/00: Image analysis
                    • G06T 7/90: Determination of colour characteristics
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/168: Feature extraction; Face representation

Definitions

  • The present invention relates to a method, device, and computer-readable medium for measuring skin color in an image. Shape information and light information are derived from a captured image, and reflectance information is derived by subtracting the shape information and light information from the image. A color parameter conversion means is then derived based on the difference between the photographed color information of the reference color areas in the patch's reflectance information and the ground-truth color information of those reference colors, and the conversion means is applied to the face's reflectance information to derive colorimetric color information: the intrinsic color of the face with the influence of lighting removed.
  • Personal color is a color that harmonizes with an individual's body coloring to make the person look lively and energetic. Makeup methods and clothing colors differ depending on personal color, and knowing one's personal color makes it possible to create an effective look accordingly.
  • Korean Patent Publication No. 10-2018-0082172 discloses a method and device for performing colorimetry on a user terminal.
  • Prior Patent 1 noted that conventional user-terminal-based skin analysis technology cannot accurately analyze the user's skin because it does not account for the lighting characteristics of the space in which the user's skin is photographed.
  • Although technology has been disclosed that performs colorimetry or skin analysis by analyzing the characteristics of the lighting, it considers only the lighting and does not account for the characteristics of the camera or the perceptual characteristics of human vision.
  • Accordingly, a color patch printed with colorimetric reference colors is used to minimize colorimetric errors caused by the external environment when performing colorimetry.
  • An embodiment of the present invention provides a colorimetry method performed in a computing system including one or more processors and one or more memories, comprising: an image derivation step of deriving, from an original image, a face image for the face area and a patch image for the patch area; a first subtraction application step of inputting the face image into an inference model including one or more artificial neural networks to derive face shape information and light information, including the intensity and direction of light in the face image, and subtracting the face shape information and the light information from the face image to derive reflectance information of the face that minimizes the effect of color change caused by the shape of the face onto which the light is irradiated; a second subtraction application step of inputting the patch image, which contains one or more reference color areas, into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtracting the patch shape information and the light information from the patch image to derive reflectance information of the patch that minimizes the effect of color change caused by the shape of the patch onto which the light is irradiated; and a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the patch's reflectance information and the ground-truth color information of the reference colors, and applying the conversion means to the color information of the measurement area in the face's reflectance information to derive colorimetric color information of the measurement area.
  • The shape information includes 3D information describing the actual shape of the object in the image, derived from the 2D information of the image.
  • The light information includes the intensity and direction of the light irradiated onto the object included in the image.
  • The reflectance information may include color information for each pixel from which the influence of shadows, which appear on the object's shape according to the intensity and direction of the irradiated light, has been removed.
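Under a simple Lambertian assumption (an illustrative model only; in the patent these quantities are estimated by trained artificial neural networks rather than computed analytically), the relationship between shape, light, and reflectance can be sketched as follows: the observed pixel color is reflectance multiplied by shading, so "subtracting" shape and light amounts to dividing out (equivalently, subtracting in log space) the shading term.

```python
import numpy as np

def shading(normals, light_dir, intensity=1.0):
    """Per-pixel Lambertian shading from unit surface normals and a unit light direction."""
    return intensity * np.clip(normals @ light_dir, 0.0, None)

rng = np.random.default_rng(0)
H, W = 4, 4
# Random unit normals (shape information) and a unit light direction (light information).
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])

reflectance = rng.uniform(0.2, 1.0, size=(H, W, 3))   # intrinsic per-pixel color
s = shading(normals, light_dir)[..., None]            # shading from shape + light
image = reflectance * s                               # observed image

# "Subtracting" shape and light: in this multiplicative model it is a division
# wherever the surface receives light.
lit = s[..., 0] > 1e-6
recovered = np.zeros_like(image)
recovered[lit] = image[lit] / s[lit]

assert np.allclose(recovered[lit], reflectance[lit])
```

In the lit region the intrinsic reflectance is recovered exactly; pixels in full shadow carry no reflectance information in this model, which is one reason the patent estimates reflectance with a learned model rather than by direct division.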
  • The patch includes a plurality of reference color areas; each reference color area has the same shape and area, and the areas may be arranged apart from one another.
  • The inference model includes a common model, a surface normal vector model, a light information inference model, and a reflectance model.
  • The common model includes one or more artificial neural networks and derives, from the input image, feature information that is commonly input to the surface normal vector model, the light information inference model, and the reflectance model.
  • The surface normal vector model includes one or more artificial neural networks and derives, from the feature information of the 2D image, the shape information, which is 3D structure information containing the actual shape.
  • The light information inference model includes one or more artificial neural networks and derives, from the feature information, the light information, which includes the intensity and direction of light in the image.
  • The reflectance model includes one or more artificial neural networks and can derive, from the feature information, the reflectance information, which is color information for each pixel.
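The division of labor among the four models can be sketched with a toy forward pass (random weights and arbitrary layer sizes, purely for illustration; the actual models are trained networks operating on whole images):

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(w, b, x):
    """One fully connected layer with ReLU, standing in for a trained network."""
    return np.maximum(w @ x + b, 0.0)

D_IN, D_FEAT = 8, 16   # arbitrary input / feature sizes for this sketch

# Common model: derives feature information shared by all three heads.
W_common = rng.normal(size=(D_FEAT, D_IN)); b_common = np.zeros(D_FEAT)
# Surface normal vector model: 3D shape information (a normal vector here).
W_shape = rng.normal(size=(3, D_FEAT)); b_shape = np.zeros(3)
# Light information inference model: light direction (3) plus intensity (1).
W_light = rng.normal(size=(4, D_FEAT)); b_light = np.zeros(4)
# Reflectance model: per-pixel color information (RGB).
W_refl = rng.normal(size=(3, D_FEAT)); b_refl = np.zeros(3)

x = rng.normal(size=D_IN)                    # stand-in for one pixel's input
features = layer(W_common, b_common, x)      # common feature information

shape_info = layer(W_shape, b_shape, features)
light_info = layer(W_light, b_light, features)
reflectance = layer(W_refl, b_refl, features)

assert features.shape == (D_FEAT,)
assert shape_info.shape == (3,)
assert light_info.shape == (4,)
assert reflectance.shape == (3,)
```

Sharing the common model means the feature extraction is computed once per image and reused by every head, which is the design choice the patent describes.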
  • The first subtraction application step includes: a facial feature information derivation step of deriving facial feature information by inputting the face image into a common model including one or more artificial neural networks; a face shape derivation step of deriving face shape information by inputting the facial feature information into a surface normal vector model including one or more artificial neural networks; a light information derivation step of inputting the facial feature information into a light information inference model including one or more artificial neural networks to derive light information including the intensity and direction of light in the face image; and a facial reflectance derivation step of subtracting the face shape information and the light information from the face image to derive reflectance information of the face reflecting only the color of the face.
  • The second subtraction application step includes corresponding detailed steps applied to the patch image.
  • The colorimetric method may further include a model learning step, in which a learning image is input into the common model and the resulting learning feature information of the common model is input into the surface normal vector model, the light information inference model, and the reflectance model.
  • The face color derivation step includes a conversion means derivation step, which includes: a color parameter extraction step of extracting first color parameter information of a preset color space for each of the plurality of reference color regions in the patch's reflectance information; a correction value determination step of determining the correction values so that the color difference value, derived by a first color difference algorithm between a second color parameter (obtained by applying a color parameter conversion means including a plurality of correction values to the plurality of first color parameter information) and a preset third color parameter corresponding to the ground truth of the plurality of reference colors, satisfies a preset standard; and a correction value changing step of changing some or all of the correction values in a direction that reduces the color difference value, derived by a second color difference algorithm between a fourth color parameter (obtained by applying the conversion means to the plurality of first color parameter information) and the preset third color parameter.
  • The color difference value by the first color difference algorithm is linear with respect to the color parameters, whereas the color difference value by the second color difference algorithm is non-linear with respect to the color parameters; the first color difference algorithm and the second color difference algorithm are different from each other.
  • The color parameter conversion means is implemented as a matrix including a plurality of correction values, and the number of columns or rows of the matrix equals the number of elements of the first color parameter.
  • The second color parameter may be derived by matrix-multiplying the matrix with the first color parameter, and the fourth color parameter may likewise be derived by matrix-multiplying the matrix with the first color parameter.
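Treating the conversion means as a 3x3 matrix acting on three-element color parameters, the correction values can be determined in closed form when the color difference is the linear, Euclidean CIE76-style distance. All numeric values below are made up for illustration:

```python
import numpy as np

# First color parameters: reference colors as measured in the patch's
# reflectance information (values made up for illustration).
first = np.array([[52.0, 10.0,  8.0],
                  [70.0, -5.0, 20.0],
                  [35.0, 25.0, -3.0],
                  [80.0,  2.0,  2.0]])
# Third color parameters: preset ground truth of the same reference colors.
third = np.array([[50.0, 12.0, 10.0],
                  [68.0, -3.0, 22.0],
                  [33.0, 27.0, -1.0],
                  [78.0,  4.0,  4.0]])

def delta_e76(a, b):
    """CIE76 color difference: plain Euclidean distance, linear-friendly."""
    return np.linalg.norm(a - b, axis=-1)

# Correction value determination: choose the 3x3 matrix minimizing the summed
# squared CIE76 difference between (first @ M) and third, in closed form.
M, *_ = np.linalg.lstsq(first, third, rcond=None)

# Second color parameters: the conversion means applied to the first.
second = first @ M

# The fitted matrix can never do worse (in summed squared CIE76) than no correction.
assert np.sum(delta_e76(second, third) ** 2) <= np.sum(delta_e76(first, third) ** 2)
```

The closed-form least-squares solution is why a linear first algorithm is convenient for the determination step; the non-linear second algorithm requires the iterative changing step instead.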
  • An embodiment of the present invention provides a colorimetric device implemented in a computing system including one or more processors and one or more memories.
  • The colorimetric device performs: an image derivation step of deriving, from an original image, a face image for the face area and a patch image for the patch area; a first subtraction application step of inputting the face image into an inference model including one or more artificial neural networks to derive face shape information and light information, including the intensity and direction of light in the face image, and subtracting the face shape information and the light information from the face image to derive reflectance information of the face that minimizes the effect of color change caused by the shape of the face onto which the light is irradiated; a second subtraction application step of inputting the patch image, which contains one or more reference color areas, into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtracting the patch shape information and the light information from the patch image to derive reflectance information of the patch that minimizes the effect of color change caused by the shape of the patch onto which the light is irradiated; and a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the patch's reflectance information and the ground-truth color information of the reference colors, and applying the conversion means to the color information of the measurement area in the face's reflectance information to derive colorimetric color information of the measurement area.
  • An embodiment of the present invention provides a computer program stored in a computer-readable medium and including a plurality of instructions executed by one or more processors.
  • The computer program performs: an image derivation step of deriving, from an original image, a face image for the face area and a patch image for the patch area; a first subtraction application step of inputting the face image into an inference model including one or more artificial neural networks to derive face shape information and light information, including the intensity and direction of light in the face image, and subtracting the face shape information and the light information from the face image to derive reflectance information of the face that minimizes the effect of color change caused by the shape of the face onto which the light is irradiated; a second subtraction application step of inputting the patch image, which contains one or more reference color areas, into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtracting the patch shape information and the light information from the patch image to derive reflectance information of the patch that minimizes the effect of color change caused by the shape of the patch onto which the light is irradiated; and a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the patch's reflectance information and the ground-truth color information of the reference colors, and applying the conversion means to the color information of the measurement area in the face's reflectance information to derive colorimetric color information of the measurement area.
  • the colorimetry method can be easily performed even with a personal mobile terminal such as a smartphone.
  • The color information of the face can be derived by comparing the patch and the face photographed in the same environment.
  • Since shape information is derived for each of the face and the patch, accurate reflectance information can be derived by compensating for the differences in their characteristics and states in the image.
  • Reflectance information that removes the influence of shadows created by the light irradiated onto the shape can be derived from the image.
  • Since the reflectance information is derived by subtracting from the image the shape information and light information that affect it, more accurate reflectance information can be derived than when the reflectance information is estimated directly from the image.
  • Since the patch includes a plurality of reference colors, the method can respond to various facial colors.
  • The color parameter conversion means can be derived, and color difference information obtained, based on the color information of each reference color in the image (i.e., in the patch's reflectance information) and the ground truth.
  • In the model learning step, a learning image is input, and the derived learning shape information, learning light information, and learning reflectance information are combined and compared with the learning image; because the target output (the learning image) is unambiguous, the detailed parameter values or filter information of the artificial neural networks can be learned.
  • Colorimetry reflects color difference information in a non-linear color space that is robust to external environments such as camera settings, camera performance, and lighting; methods, devices, and computer-readable media can thus be provided that more accurately reproduce the color difference perceived by the human eye.
  • Non-linearly defined color differences can be approximated and optimized through iterative calculations in the color space, achieving computational efficiency and colorimetric accuracy at the same time.
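A minimal sketch of such an iterative approximation, using a simplified CIE94-style weighted difference as the non-linear second color difference algorithm and finite-difference descent with backtracking as the repetitive calculation (both are illustrative assumptions, not the patent's exact procedure; the numeric colors are made up):

```python
import numpy as np

# Measured reference colors (first color parameters) and their ground truth
# (third color parameters).
first = np.array([[52.0, 10.0,  8.0],
                  [70.0, -5.0, 20.0],
                  [35.0, 25.0, -3.0]])
third = np.array([[50.0, 12.0, 10.0],
                  [68.0, -3.0, 22.0],
                  [33.0, 27.0, -1.0]])

def delta_e94_like(lab1, lab2):
    """Simplified CIE94-style difference: non-linear in the color parameters."""
    dL = lab1[:, 0] - lab2[:, 0]
    c1 = np.hypot(lab1[:, 1], lab1[:, 2])
    c2 = np.hypot(lab2[:, 1], lab2[:, 2])
    dC = c1 - c2
    da = lab1[:, 1] - lab2[:, 1]
    db = lab1[:, 2] - lab2[:, 2]
    dH2 = np.maximum(da ** 2 + db ** 2 - dC ** 2, 0.0)  # hue term, kept non-negative
    return np.sqrt(dL ** 2
                   + (dC / (1 + 0.045 * c1)) ** 2
                   + dH2 / (1 + 0.015 * c1) ** 2).sum()

def loss(m):
    """Total non-linear color difference after applying the correction matrix."""
    return delta_e94_like(first @ m.reshape(3, 3), third)

# Correction value changing step: repeatedly change the correction values in a
# direction that reduces the non-linear color difference (finite differences
# with backtracking on the step size).
M = np.eye(3).ravel()
step, eps = 1e-3, 1e-6
for _ in range(100):
    grad = np.array([(loss(M + eps * e) - loss(M - eps * e)) / (2 * eps)
                     for e in np.eye(9)])
    trial = M - step * grad
    if loss(trial) < loss(M):
        M = trial        # accept: the color difference decreased
    else:
        step *= 0.5      # otherwise retry with a smaller change

assert loss(M) < loss(np.eye(3).ravel())
```

Only steps that actually reduce the non-linear color difference are accepted, so the procedure trades a closed-form solution for a guaranteed monotone improvement, matching the "change values in a direction that reduces the color difference" description above.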
  • Figure 1 schematically shows a colorimetry method according to an embodiment of the present invention.
  • Figure 2 schematically shows the environment of a computing system in which a colorimetric method according to embodiments of the present invention is performed.
  • Figure 3 schematically shows shape information, light information, and reflectance information according to an embodiment of the present invention.
  • Figure 4 schematically shows a patch according to an embodiment of the present invention.
  • Figure 5 schematically shows a plurality of inference models according to an embodiment of the present invention.
  • Figure 6 schematically shows detailed steps of the first subtraction application step and the second subtraction application step according to an embodiment of the present invention.
  • Figure 7 schematically shows the model learning step according to an embodiment of the present invention.
  • Figure 8 schematically shows an example of a color difference region defined in the color space of CIE76.
  • Figure 9 schematically shows an example of a color difference region defined in the color spaces of CIE76, CIE94, and CIE2000.
  • Figure 10 schematically shows the process of deriving a color difference value that can be linearly defined.
  • Figure 11 schematically shows the detailed steps of the conversion means derivation step and the internal components of the colorimetric device according to embodiments of the present invention.
  • Figure 12 schematically shows the detailed process of the correction value determination step according to embodiments of the present invention.
  • Figure 13 schematically shows the overall processes of the conversion means derivation step according to embodiments of the present invention.
  • Figure 14 schematically shows the operation of the color parameter conversion means according to embodiments of the present invention.
  • Figure 15 schematically shows the detailed process in the correction value determination step according to embodiments of the present invention.
  • Figure 16 schematically shows the detailed process of the correction value changing step according to embodiments of the present invention.
  • Figure 17 schematically shows detailed steps of the correction value changing step according to embodiments of the present invention.
  • Figure 18 schematically shows the internal configuration of a computing device according to an embodiment of the present invention.
  • first, second, etc. may be used to describe various components, but the components are not limited by the terms. The above terms are used only for the purpose of distinguishing one component from another.
  • a first component may be named a second component, and similarly, the second component may also be named a first component without departing from the scope of the present invention.
  • the term and/or includes any of a plurality of related stated items or a combination of a plurality of related stated items.
  • Objects have unique colors.
  • In an image, colors may appear that differ from those seen when viewing the actual object with the eyes.
  • The color expressed in an image can change depending on the object's inherent reflectance, the light information of the surrounding lighting (direction, intensity, color of light, etc.), and the sensitivity of the camera sensor. An expensive colorimeter can therefore be used to remove the effects of the lighting and sensor and find the exact color unique to the object.
  • In contrast, the present invention discloses a method of finding the original color by photographing the object to be measured together with a patch having reference colors, which is relatively inexpensive and easy to use.
  • Figure 1 schematically shows a colorimetry method according to an embodiment of the present invention.
  • Figure 1 (A) corresponds to a drawing schematically showing the original image.
  • A patch, which is a specific physical object with reference colors, and the object to be measured are photographed together with a camera.
  • The camera may be a standalone camera or a camera built into a smartphone or the like.
  • An original image can be acquired with the camera, and the original image can include the object to be measured and the patch.
  • The object to be measured and the patch can be captured without overlapping.
  • The patch may include one or more reference colors, and the object to be measured may include faces, hands, animals, and other objects, but the description below focuses on faces.
  • the original image may include a face area including a face and a patch area including a patch, and the face area may include a measurement area for colorimetry. That is, the measurement area may correspond to part or the entire face area, and may be determined by the user.
  • the face image derived from the original image may be an image including the face area
  • the patch image derived from the original image may be an image containing the patch area
  • Figure 1(B) corresponds to a diagram schematically showing the steps of the colorimetric method.
  • As shown in Figure 1(B), the colorimetry method is performed in a computing system including one or more processors and one or more memories and includes: an image derivation step (S3000) of deriving a face image for the face area and a patch image for the patch area from the original image; a first subtraction application step (S3100) of inputting the face image into an inference model including one or more artificial neural networks to derive face shape information and light information, including the intensity and direction of light in the face image, and subtracting the face shape information and the light information from the face image to derive reflectance information of the face that minimizes the effect of color change caused by the shape of the face onto which the light is irradiated; and a second subtraction application step (S3200) of inputting the patch image, which contains one or more reference color areas, into an inference model including one or more artificial neural networks to derive the patch shape information, and subtracting the patch shape information and the light information from the patch image.
  • the colorimetric method may include an image derivation step (S3000) of deriving a face image for the face area and a patch image for the patch area from the original image.
  • the original image in Figure 1 (A) may include a face area for a face and a patch area for a patch, and the face area is an area where only the face is extracted from the original image and may be included in the face image,
  • the patch area is an area where only patches are extracted from the original image and may be included in the patch image.
  • The colorimetric method may include a first subtraction application step (S3100), in which the face image is input into an inference model (2000) including one or more artificial neural networks to derive face shape information and light information from the face image, and the face shape information and the light information are subtracted from the face image to derive reflectance information of the face.
  • S3100 first subtraction application step
  • the inference model 2000 in the first subtraction application step may include a common model 2100, a surface normal vector model 2200, and a light information inference model 2300, each of which It may contain one or more artificial neural networks.
  • the common model 2100 can derive feature information from an image, and the feature information can be input to the surface normal vector model 2200 and the light information inference model 2300 to derive shape information and light information.
  • facial feature information is derived by inputting the facial image into the common model 2100, and inputting the facial feature information into the surface normal vector model 2200 and the light information inference model 2300 to obtain facial shape information. and light information can be derived.
  • The shape information of the face may be the shape of the face area, where shape means 3D information including the actual form, and the light information may include the direction and intensity of the light contained in the original image or the face image. Details about the plurality of inference models, the shape information, and the light information described above will be described later.
  • reflectance information of the face can be derived from the face image based on the shape information of the face and the light information.
  • the face image includes the shape information of the face, the light information, and the reflectance information of the face.
  • Since the shape information and light information of the face are extracted from the face image, and the reflectance information of the face is derived by subtracting them from the face image, more accurate reflectance information can be obtained than when the reflectance information of the face is derived directly from the image.
  • The colorimetric method may include a second subtraction application step (S3200), in which the patch image is input into an inference model including one or more artificial neural networks to derive the shape information of the patch from the patch image, and the patch shape information and the light information are subtracted from the patch image to derive reflectance information of the patch.
  • S3200 second subtraction application step
  • the inference model in the second subtraction application step may include a common model 2100 and a surface normal vector model 2200, and each may include one or more artificial neural networks.
  • the common model 2100 can derive feature information from an image, and the feature information can be input to the surface normal vector model 2200 to derive shape information. That is, the patch image can be input into the common model 2100 to derive patch feature information, and the patch feature information can be input into the surface normal vector model 2200 to derive patch shape information.
  • the shape information of the patch may be the shape of the patch area, and the shape may mean 3D information including the actual shape.
  • reflectance information of the patch can be derived from the patch image based on the shape information of the patch and the light information (derived from the face image).
  • the patch image includes the shape information of the patch, the light information, and the reflectance information of the patch.
  • Since the shape information of the patch and the light information from the face image are extracted, and the reflectance information of the patch is derived by subtracting them from the patch image, more accurate reflectance information can be obtained than when the reflectance information of the patch is derived directly from the patch image.
  • the plurality of inference models (the common model 2100 and the surface normal vector model 2200) used in the first subtraction application step (S3100) and the second subtraction application step (S3200) may be the same inference models, or may be different inference models.
  • a color parameter conversion means is derived based on the reflectance information of the patch, the reflectance information of the face, and the preset ground truth of the patch derived in the above-described steps, and by applying the color parameter conversion means to the reflectance information of the face, colorimetric color information of the measurement area of the face can be derived. Details will be described later with reference to Figure 1(C).
  • Figure 1(C) corresponds to a diagram schematically showing the face color derivation step (S3300).
  • in (C) of Figure 1, the colorimetric method is performed in a computing system including one or more processors and one or more memories, and may include a face color derivation step (S3300) in which a color parameter conversion means is derived based on the difference between the color information of the reference color area in the reflectance information of the patch and the color information of the ground truth of the reference color, and the color parameter conversion means is applied to the color information of the measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
  • the patch includes one or more reference colors
  • the reflectance information of the patch includes color information for each of the one or more reference color areas.
  • the color information for each of one or more reference color areas of the reflectance information of the patch may be color information from the reflectance information of the patch derived from the patch area in the original image.
  • color information of a preset ground truth is stored in the computing system, and the ground truth may correspond to a color parameter that is the true value of the color.
  • a color parameter conversion means can be derived based on the difference between the color information of the reference color area and the color information of the ground truth of the reference color. The difference may be a color difference value derived by applying a color difference algorithm to the respective color parameters of the color information of the reference color area and of the color information of the ground truth of the reference color.
  • the color parameter conversion means can be applied to the color information of the measurement area of the face reflectance information to derive colorimetric color information of the measurement area.
  • the color parameter conversion means includes one or more correction values, and applying the color parameter conversion means may mean applying the correction values.
  • the correction value may correspond to the difference value between the color information of the reference color area and the color information of the ground truth of the reference color. The derivation and details of the color parameter conversion means will be described later.
  • the derived colorimetric color information of the measurement area is color information obtained by measuring the measurement area of the face in the present invention, and preferably corresponds to the true value (ground truth) of the actual facial area corresponding to the measurement area, not of the image, or to color information close to that true value.
  • the color information of the measurement part of the face reflectance information may mean the color information of the reflectance information on the face image for the measurement part (extracted for the corresponding measurement part from the reflectance information of the face).
  • the color parameter conversion means may be the difference value itself between the color information of the reference color area and the color information of the ground truth (i.e., the correction value of the color parameter conversion means may correspond to the difference value), and the difference value is applied to the color information of the measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
  • the color parameter conversion means may be implemented as a matrix including a plurality of correction values.
  • the face color derivation step (S3300) can be performed by deriving, among a plurality of reference colors, the one reference color that has the smallest color difference from the color information of the measurement area in the reflectance information of the face.
  • the correction value of the color parameter conversion means may include a difference value between the one reference color and the corresponding ground truth color information.
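A minimal sketch of the correction-value scheme above, assuming Lab color parameters, CIE76 as the "smallest color difference" criterion, and a simple additive offset as the color parameter conversion means (the function names and the additive form are illustrative assumptions; as noted above, the conversion means may equally be a matrix):

```python
import math

def delta_e76(c1, c2):
    # CIE76 color difference between two (L*, a*, b*) triples.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

def correct_measurement(measured_lab, refs_measured, refs_truth):
    # Pick the reference color whose measured value is closest to the
    # measured area, then apply that reference's (ground truth - measured)
    # difference as the correction value.
    i = min(range(len(refs_measured)),
            key=lambda k: delta_e76(measured_lab, refs_measured[k]))
    correction = [t - m for t, m in zip(refs_truth[i], refs_measured[i])]
    return [v + d for v, d in zip(measured_lab, correction)]
```
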
  • by deriving the color parameter conversion means and correcting the color information, the effect of performing colorimetry in consideration of various factors can be achieved.
  • Figure 2 schematically shows the environment of a computing system in which a colorimetric method according to embodiments of the present invention is performed.
  • an original image of the part and the patch that the user wants to measure is acquired simultaneously by the camera module built into the user terminal, and color information can be derived through accurate colorimetry inside the user terminal by the method described above and below.
  • an original image is obtained by simultaneously photographing the part and the patch that the user wants to measure using a camera module built into the user terminal, and after the obtained original image is transmitted to the server system, color information can be derived through accurate colorimetry within the server system by the method described above and below.
  • the colorimetric method according to an embodiment of the present invention may be implemented in various types of systems in addition to environments such as (A) and (B) of Figure 2; in common, any method that analyzes, in a computing system, an image of the colorimetric target and a specific physical object on which the reference color is displayed (for example, a patch) is applicable.
  • FIG. 1 shows a form in which the object to be measured (for example, human skin) and a physical object to which a reference color is assigned are captured as one image, but the present invention is not limited to this, and also includes an embodiment in which the object to be measured and the physical object to which a reference color is assigned are acquired as a plurality of separate images, and the colorimetric method described later is performed. When the plurality of images are acquired, light information corresponding to each image can be derived.
  • Figure 3 schematically shows shape information, light information, and reflectance information according to an embodiment of the present invention.
  • the shape information includes 3D information according to the actual shape, derived from the 2D information on the image about the object included in the image; the light information includes the intensity and direction of the light irradiated onto the object included in the image, and the shadow; and the reflectance information may include per-pixel color information from which the influence of the shadow formed on the shape according to the intensity and direction of the light has been removed.
  • the image includes the shape of the object included in the image, the color of the object, and the intensity and direction of light irradiating the object.
  • the color of the object may change due to the performance of the camera that acquires the image, the intensity and direction of the light, the shadow caused by the light irradiated to the shape, etc.
  • For example, a plaster cast for sketching is itself white, but its color in an image changes due to the shadow caused by the light irradiated onto it.
  • shape information and light information can be subtracted from the image.
  • the shape information is information excluding color and light from the image, may correspond to 3D information about the shape of an object included in the image, and may not include color information of the object.
  • the object that actually exists is 3D, but if the object is acquired as an image through a camera, the object may be included in the image as 2D information, and the shape of the object included in the image is 2D information.
  • shape information, which is 3D information corresponding to the shape of the object that actually exists, can be derived from the 2D shape of the object included in the image.
  • the shape information may correspond to the plaster cast itself unaffected by light, or to a 3D model in a 3D modeling program with no lighting applied, excluding the color (reflectance information) displayed on it.
  • the shape information may be information containing only the shape of the object in the image, excluding reflectance information and light information.
  • the light information may mean the intensity and direction of light irradiated to the object in the image
  • a shadow may be created on the shape of the object by the light information; the angle, shape, and area of the shadow may change depending on the direction of the light, and the lightness and darkness of the shadow may change depending on the intensity of the light. Accordingly, a shadow is formed according to the light information and the color of the object may appear to change.
  • the light information contained in the image can be derived by inputting the image into one or more inference models.
  • since the face image and the patch image are extracted from the same original image, even if the light information is derived from the face image, the effect of deriving the light information from the original image can be achieved. Therefore, applying the light information derived from the face image to the patch image can produce the same effect as applying light information derived from the patch image.
  • in an embodiment of the present invention, a face image and a patch image are derived from the original image, and the light information is derived from the face image.
  • the reflectance information may be color information that is not changed by the light information included in the image or by the shadow created on the shape to which the light is irradiated. By subtracting the shape information and the light information from the image, the reflectance information can be derived.
  • the image includes the light information, the shape information about the shape, and the reflectance information; when the shape information and the light information are combined, a shadow due to the light is created on the shape of the object, and when the shape information, the light information, and the reflectance information are combined, the image can be created with the color of the reflectance information changed by the shadow.
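The composition just described (shape plus light produces a shadow, which darkens the reflectance color) can be sketched with a toy Lambertian renderer; the model and all names are illustrative assumptions, not the patent's actual rendering:

```python
import numpy as np

def render(reflectance, normals, light_dir, ambient=0.2):
    # Shape information (per-pixel unit normals) combined with light
    # information (direction + ambient strength) yields shading;
    # multiplying by per-pixel reflectance yields the image, so shadowed
    # regions show the reflectance color darkened.
    s = np.clip(np.tensordot(normals, light_dir, axes=([2], [0])), 0.0, 1.0)
    shading = ambient + (1.0 - ambient) * s
    return reflectance * shading[..., None]
```

This reproduces the plaster-cast example: a uniformly white reflectance still renders darker wherever the shape faces away from the light.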
  • the reflectance information derived from the image may include color changes due to camera performance. Therefore, according to an embodiment of the present invention, the colorimetric color information of the measurement area is derived based on the color information of the preset ground truth of the patch and the reflectance information of the patch derived from the image, so that the reflectance information changed by the performance of the camera is corrected, which can be effective in deriving color information close to the true value.
  • Figure 4 schematically shows a patch according to an embodiment of the present invention.
  • the patch includes a plurality of reference color areas, and each reference color area has the same shape and area and may be arranged separately from each other.
  • the ground truth of each reference color area may be preset and stored in the computing system, and the location or arrangement of each reference color may be stored in the computing system.
  • Since each reference color area of the patch according to an embodiment of the present invention has the same shape and area and the areas are arranged separately from each other, the effect of easily deriving the patch area with the inference model and easily obtaining the color information of each reference color area can be achieved.
  • Figure 5 schematically shows a plurality of inference models according to an embodiment of the present invention.
  • the inference model includes a common model 2100, a surface normal vector model 2200, a light information inference model 2300, and a reflectance model 2400.
  • the common model 2100 includes one or more artificial neural networks and derives, from the input image, feature information that is commonly input to the surface normal vector model 2200, the light information inference model 2300, and the reflectance model 2400.
  • the surface normal vector model 2200 includes one or more artificial neural networks and derives, from the feature information, shape information, which is 3D information corresponding to the actual shape, from the image, which is 2D information.
  • the light information inference model 2300 includes one or more artificial neural networks and derives, from the feature information, light information including the intensity and direction of the light in the image; the reflectance model 2400 includes one or more artificial neural networks and can derive, from the feature information, reflectance information, which is per-pixel color information.
  • the inference model in the present invention includes the common model 2100, the surface normal vector model 2200, the light information inference model 2300, and the reflectance model 2400, each of which may be implemented in a form that includes one or more of artificial neural network models such as a convolutional neural network (CNN) or capsule network (CapsNet), or rule-based feature information extraction models, and preferably may include SFS NET and RI_render (3ddfa).
  • the common model 2100 derives feature information from an input image, and the feature information may be an output value from which the shape information, light information, and reflectance information can be derived by a plurality of inference models.
  • the image may include all images in the present invention, and preferably may include the original image, a face image and patch image derived from the original image, and a learning image.
  • the common model 2100 can derive facial feature information when the input image is a face image, derive patch feature information when the input image is a patch image, and derive learning feature information when it is a learning image.
  • the feature information may be commonly input to the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400).
  • the surface normal vector model 2200 can receive the feature information from the common model 2100 and derive shape information. If the feature information is facial feature information, face shape information can be derived, if the feature information is patch feature information, patch shape information can be derived, and if the feature information is learning feature information, learning shape information can be derived.
  • the light information inference model 2300 can receive the feature information from the common model 2100 and derive light information. If the feature information is facial feature information, light information can be derived, and if the feature information is learning feature information, learning light information can be derived.
  • the light information is derived by inputting the facial feature information into the light information inference model 2300, and may be used in both the first subtraction application step (S3100) and the second subtraction application step (S3200).
  • alternatively, facial light information may be derived by inputting the facial feature information into the light information inference model 2300, and patch light information may be derived by inputting the patch feature information into the light information inference model 2300.
  • the reflectance model 2400 can receive the feature information from the common model 2100 and derive reflectance information. If the feature information is learning feature information, learning reflectance information can be derived.
  • the reflectance model 2400 may be used in the model learning step to achieve the effect of training the common model 2100, the surface normal vector model 2200, and the light information inference model 2300.
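The shared-encoder, multi-head wiring of the four models described above can be sketched as follows; the toy linear layers and their dimensions are placeholders for illustration, not the actual SfSNet-style networks:

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyModel:
    # A single random linear layer standing in for "one or more
    # artificial neural networks"; the weights are untrained placeholders.
    def __init__(self, n_in, n_out):
        self.w = rng.standard_normal((n_out, n_in)) * 0.1

    def __call__(self, x):
        return np.tanh(self.w @ x)

# One common model (2100) feeds the same feature information to all
# three task heads (2200, 2300, 2400), as described in the text.
common_model = ToyModel(12, 8)         # image vector -> feature information
surface_normal_model = ToyModel(8, 3)  # features -> shape information
light_model = ToyModel(8, 4)           # features -> light information
reflectance_model = ToyModel(8, 3)     # features -> reflectance information

def decompose(image_vec):
    features = common_model(image_vec)
    return (surface_normal_model(features),
            light_model(features),
            reflectance_model(features))
```
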
  • the colorimetric method further includes a reflectance inference step, wherein the reflectance inference step includes inputting a face image into a reflectance model 2400 to derive reflectance information of the face; Inputting the patch image into the reflectance model 2400 to derive reflectance information of the patch; And based on the difference between the color information of the reference color area in the reflectance information of the patch and the color information of the ground truth of the reference color, a color parameter conversion means is derived, and the color information of the measurement area in the reflectance information of the face It may include the step of deriving colorimetric color information of the measurement area by applying the color parameter conversion means.
  • the common model 2100, the surface normal vector model 2200, and the light information inference model 2300 may be used in the first subtraction application step; the common model 2100 and the surface normal vector model 2200 may be used in the second subtraction application step; and the common model 2100, the surface normal vector model 2200, the light information inference model 2300, and the reflectance model 2400 may be used in the model learning step, which will be described later.
  • the model used in each step may be the same model, or it may be a different model optimized for each step.
  • the common model 2100 derives facial feature information in the first subtraction application step, derives patch feature information in the second subtraction application step, and derives learning feature information in the model learning step.
  • the common model 2100 used in the first subtraction application step and the common model 2100 used in the second subtraction application step may be the same or different models. Therefore, in the case of different models, each model can be trained in the model learning step.
  • Figure 6 schematically shows the detailed steps of the first subtraction application step (S3100) and the second subtraction application step (S3200) according to an embodiment of the present invention.
  • Figure 6 (A) corresponds to a diagram showing detailed steps of the first subtraction application step (S3100).
  • the first subtraction application step (S3100) may include a facial feature information derivation step (S3110) in which facial feature information is derived by inputting, into the common model 2100, a face image obtained by cutting out only the face area from the original image in (A) of FIG. 1 and resizing it.
  • the facial feature information is in the form of an output value of the common model 2100, and the face shape information and the light information can be derived from it.
  • the first subtraction application step (S3100) may include a face shape derivation step (S3120) of inputting the facial feature information into the surface normal vector model (2200) to derive the shape information of the face.
  • the shape information of the face is shape information about the face, and may be information derived by excluding parts that do not include skin, such as hair, from the face image.
  • the first subtraction application step (S3100) may include a light information derivation step (S3130) in which light information is derived by inputting the facial feature information into the light information inference model (2300).
  • the light information is not only used to derive the reflectance information of the face from the face image, but can also be used in the second subtraction application step (S3200) to derive the reflectance information of the patch.
  • the first subtraction application step (S3100) applies the shape information and light information of the face derived from the face shape derivation step (S3120) and the light information derivation step (S3130) to the face image to obtain reflectance information of the face. It may include a facial reflectance derivation step (S3140). Specifically, by subtracting and applying the shape information of the face and the light information from the face image, reflectance information of the face that reflects only the color of the face can be derived.
  • Figure 6(B) corresponds to a diagram showing detailed steps of the second subtraction application step (S3200).
  • the second subtraction application step (S3200) may include a step of inputting the patch image into the common model 2100 including one or more artificial neural networks to derive patch feature information, and a patch reflectance derivation step (S3230) of deriving reflectance information of the patch, in which only the color of the patch is reflected, by applying the shape information of the patch and the light information to the patch image.
  • a patch feature information derivation step may be included, in which patch feature information is derived by inputting, into the common model 2100, a patch image obtained by cutting out only the patch area from the original image in (A) of FIG. 1 and resizing it.
  • the patch feature information is in the form of an output value of the common model 2100, and the shape information of the patch can be derived from it.
  • the second subtraction application step (S3200) may include a patch shape derivation step (S3220) of deriving the shape information of the patch by inputting the patch feature information into the surface normal vector model (2200).
  • the shape information of the patch is shape information about the patch and may include shape information about a plurality of reference color areas.
  • the second subtraction application step (S3200) may include a patch reflectance derivation step (S3230) of deriving the reflectance information of the patch by applying, to the patch image, the shape information of the patch derived in the patch shape derivation step (S3220) and the light information derived in the light information derivation step (S3130) included in the first subtraction application step (S3100). Specifically, the reflectance information of the patch, reflecting only the color of the patch, can be derived by subtracting the shape information of the patch and the light information from the patch image.
  • Figure 7 schematically shows the model learning step according to an embodiment of the present invention.
  • the colorimetric method further includes a model learning step
  • the model learning step may include a learning shape information derivation step in which learning shape information is derived by inputting a learning image into the common model 2100 and inputting the learning feature information output by the common model 2100 into the surface normal vector model 2200.
  • the model learning step may include a predicted learning image derivation step of deriving a predicted learning image by combining the learning shape information, the learning light information, and the learning reflectance information, whereby detailed parameter values or filter information of the common model 2100, the surface normal vector model 2200, the light information inference model 2300, and the reflectance model 2400 can be learned.
  • the model learning step may aim to train the plurality of inference models included in the present invention so that the predicted learning image derived based on the input learning image is as identical as possible to the learning image.
  • the plurality of inference models may include a common model (2100), a surface normal vector model (2200), a light information inference model (2300), and a reflectance model (2400).
  • the plurality of inference models may be the same as the models included in the present invention; specifically, the common model 2100 is an inference model for extracting the facial feature information and the patch feature information in the present invention, the surface normal vector model 2200 may be an inference model for extracting the face shape information and the patch shape information in the present invention, and the light information inference model 2300 may be an inference model for extracting the light information in the present invention.
  • the common model 2100 may be an inference model that can derive, from an image, feature information from which shape information, light information, and reflectance information can be derived.
  • learning feature information can be derived as an output value of the common model 2100.
  • the learning feature information may include the learning shape information, the learning light information, and the learning reflectance information of the learning image, and the form of the learning feature information may be determined according to the type of the common model 2100, the artificial neural networks included in the common model 2100, etc.
  • the learning feature information may be input to the plurality of inference models. That is, the learning feature information can be input to the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400).
  • the colorimetric method in the present invention may include a learning shape information derivation step of deriving learning shape information by inputting the learning feature information, derived by inputting a learning image into the common model 2100, into the surface normal vector model 2200.
  • the surface normal vector model 2200 may be an inference model that can derive shape information from the feature information.
  • the shape information is shape information about the object in the image input to the common model 2100, and may be 3D information that is the actual shape of the object.
  • the shape information may be information about the actual shape that is not affected by light and therefore does not include a shadow or color (reflectance). For example, it may correspond to uncolored 3D modeling (shape) in which no shadow is formed even when viewed from all directions because information about light is not entered into the program.
  • the colorimetric method in the present invention may include a learning light information derivation step of deriving learning light information by inputting the learning feature information, derived by inputting a learning image into the common model 2100, into the light information inference model 2300.
  • the light information inference model 2300 may be an inference model that can derive light information from the feature information.
  • the light information is the light information irradiated onto the shape included in the image input to the common model 2100, and may include the light intensity and the light direction. A shadow may be created on a shape in the image by the light information in the image.
  • the colorimetric method in the present invention may include a learning reflectance information derivation step of deriving learning reflectance information by inputting the learning feature information into the reflectance model 2400.
  • the reflectance model 2400 may be an inference model that can derive reflectance information from the feature information.
  • the reflectance information is color information about the corresponding shape in the image, and may be unchanged color information that is not affected by light information and/or the shadow.
  • the colorimetric method in the present invention may include a predicted learning image derivation step of deriving a predicted learning image by combining the learning shape information, the learning light information, and the learning reflectance information derived by each inference model.
  • the model learning step repeats the above-described steps so as to reduce or minimize the difference between the predicted learning image and the learning image, and thereby can have the effect of learning detailed parameter values or filter information of the common model 2100, the surface normal vector model 2200, the light information inference model 2300, and the reflectance model 2400.
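The learning objective described above can be sketched as a reconstruction loss; the `combine` callable stands in for whatever compositor recreates the image from shape, light, and reflectance (an assumption for illustration; the L1 difference is likewise just one plausible choice of distance):

```python
import numpy as np

def reconstruction_loss(learning_image, shape_info, light_info,
                        reflectance_info, combine):
    # Combine the predicted shape, light, and reflectance into a
    # predicted learning image and measure how far it is from the real
    # learning image; training drives this difference toward zero.
    predicted = combine(shape_info, light_info, reflectance_info)
    return float(np.mean(np.abs(predicted - learning_image)))
```
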
  • Figure 8 schematically shows an example of a color difference region defined in the color space of CIE76.
  • CIE76 corresponds to a formula that determines color difference using the CIELAB coordinate set.
  • the color difference by CIE76 can be expressed by the following equation:
  ΔE*ab = √((L*2 − L*1)² + (a*2 − a*1)² + (b*2 − b*1)²)
  • here, the first color has color parameters (L*1, a*1, b*1) in the Lab color system, and the second color has color parameters (L*2, a*2, b*2). The color difference between the first color and the second color can be expressed as ΔE*ab above, and the color difference has a scalar value.
  • the color difference defined in the color space of CIE76 has a linear relationship with detailed color parameters.
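The CIE76 formula above is straightforward to compute; a minimal sketch:

```python
import math

def cie76(lab1, lab2):
    # Euclidean distance in CIELAB: every term is a plain
    # (component - component), which is what makes CIE76 linear-friendly.
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL * dL + da * da + db * db)
```
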
  • Figure 9 schematically shows an example of a color difference region defined in the color spaces of CIE76, CIE94, and CIE2000.
  • Since CIE2000 reflects human cognitive characteristics more accurately, it is difficult to express it with detailed color parameters in a single formula as in CIE76; the concept of boundaries exists, so the calculation of the color difference varies for each region, and the formula itself also has non-linear elements.
  • Color difference calculation according to a color space reflecting this point may be non-linear, as in CIE2000, and the colorimetric method according to embodiments of the present invention, described later, performs colorimetry in consideration of such non-linear color difference calculation, so that colorimetry more accurately matching a person's actual perception can be performed.
  • Figure 10 schematically shows the process of deriving a color difference value that can be linearly defined.
  • as described above, the color difference value in CIE76 can be expressed by the following equation, and can be expressed as a distance in three-dimensional or multi-dimensional space:
  ΔE*ab = √((L*2 − L*1)² + (a*2 − a*1)² + (b*2 − b*1)²)
  • here, the first color has color parameters (L*1, a*1, b*1) in the Lab color system, and the second color has color parameters (L*2, a*2, b*2). The color difference between the first color and the second color can be expressed as ΔE*ab above, and the color difference has a scalar value.
  • Figure 10 corresponds to an example of the color difference for color parameters (corresponding to the X, Y, and Z coordinate systems in Figure 10) in the case where such color parameters and color difference values have a linear relationship.
  • the meaning of “linear” in the present invention should be interpreted in a broad sense, including cases where the color difference itself or other indirect numerical information such as a transformation matrix related to the color difference can be solved by linear algebraic methods.
  • For example, a color difference consisting of a calculated value of (a − b) may be included.
  • Figure 11 schematically shows the detailed steps of the conversion means derivation step and the internal components of the colorimetric device according to embodiments of the present invention.
  • the colorimetric method according to embodiments of the present invention is performed in a computing system including one or more processors and one or more memories, and extracts first color parameter information in a preset color space for each of a plurality of reference color areas in the reflectance information of the patch.
  • the color difference value obtained by the first color difference algorithm is linear with respect to the color parameters; this is used to determine the initial correction values.
  • linear color difference value for the color parameter can be expressed by the following equations.
  • the color difference value by the second color difference algorithm is derived in a different form from the color difference value by the first color difference algorithm.
  • the color difference value obtained by the second color difference algorithm is non-linear with respect to the color parameter.
  • the color difference value in CIEDE94 can be defined as follows.
  • the first color has color parameters of (L*1, a*1, b*1) in the Lab color system
  • the second color has color parameters of (L*2, a*2, b*2).
  • the color difference between the first color and the second color can be expressed as ΔE*94 as above, and the color difference has a scalar value.
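The CIEDE94 equation itself appears to have been lost in extraction; the standard definition, in which the chroma C*1 of the first (reference) color and the quantities ΔC*ab and ΔH*ab are derived from the Lab parameters above, is:

```latex
\Delta E^{*}_{94} = \sqrt{\left(\frac{\Delta L^{*}}{k_L S_L}\right)^{2} + \left(\frac{\Delta C^{*}_{ab}}{k_C S_C}\right)^{2} + \left(\frac{\Delta H^{*}_{ab}}{k_H S_H}\right)^{2}},\qquad S_L = 1,\; S_C = 1 + K_1 C^{*}_{1},\; S_H = 1 + K_2 C^{*}_{1}
```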
  • kL, K1, and K2 correspond to constant values that depend on the application field of the corresponding color difference; as an example, they may follow Table 1 below.
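A runnable sketch of the CIEDE94 definition (pure Python; kC = kH = 1 as in the standard, and the constants below are the values conventionally tabulated for the graphic arts and textiles fields, which we assume Table 1 lists):

```python
import math

# Standard CIEDE94 weighting constants by application field.
CONSTANTS = {
    "graphic_arts": {"kL": 1.0, "K1": 0.045, "K2": 0.015},
    "textiles":     {"kL": 2.0, "K1": 0.048, "K2": 0.014},
}

def delta_e94(lab1, lab2, field="graphic_arts"):
    """CIEDE94 color difference between two Lab colors."""
    kL, K1, K2 = (CONSTANTS[field][k] for k in ("kL", "K1", "K2"))
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # dH^2 follows from da, db, dC; clamp tiny negatives from rounding.
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    sL, sC, sH = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return math.sqrt((dL / (kL * sL)) ** 2
                     + (dC / sC) ** 2
                     + dH2 / (sH * sH))
```

Note that CIEDE94 is asymmetric: the weights sC and sH depend on the chroma C*1 of the reference color.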
  • the color difference value by the first color difference algorithm is determined by an equation including at least one term of the form (nth element of the first color parameter − nth element of the second color parameter).
  • in this case, the color difference algorithm can be said to be linear.
  • the color difference value according to the second color difference algorithm includes a color difference value according to the CIEDE2000 standard.
  • the color difference equation based on the second color difference algorithm may correspond to a color difference value that can be calculated non-linearly.
  • the color difference value defined by CIEDE2000 can be defined by the following equation.
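The full CIEDE2000 equation is lengthy and is not reproduced in this text; its top-level form, in which the primed quantities, the weighting functions S_L, S_C, S_H, and the rotation term R_T all follow the CIEDE2000 definition, is:

```latex
\Delta E_{00} = \sqrt{\left(\frac{\Delta L'}{k_L S_L}\right)^{2} + \left(\frac{\Delta C'}{k_C S_C}\right)^{2} + \left(\frac{\Delta H'}{k_H S_H}\right)^{2} + R_T\,\frac{\Delta C'}{k_C S_C}\,\frac{\Delta H'}{k_H S_H}}
```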
  • in the correction value changing step (S300), the step of reducing the sum of the color difference values derived by the second color difference algorithm between the fourth color parameters, which are derived by applying the color parameter conversion means including a plurality of correction values to the plurality of first color parameter information, and the preset third color parameters corresponding to the ground truth for the plurality of reference colors, is repeatedly performed so that the sum converges according to the preset standard.
  • the color parameter conversion means is determined in step S400, and when colorimetry is performed, accurate colorimetric results (color information of the measurement area) can be derived by applying the color parameter conversion means to the color information of the measurement area in the reflectance information of the face.
  • the color parameter extraction unit 1100, correction value determination unit 1200, correction value change unit 1300, color parameter conversion means determination unit 1400, and colorimetry unit 1500 of FIG. 11(B) perform the above-described steps S100, S200, S300, S400, and S500, respectively. Redundant explanations regarding these are omitted.
  • Figure 12 schematically shows the detailed process of the correction value determination step according to embodiments of the present invention.
  • in the correction value determination step (S200), the correction values are determined so that the color difference value derived by the first color difference algorithm between the second color parameters, which are derived by applying the color parameter conversion means including a plurality of correction values to the plurality of first color parameter information, and the preset third color parameters corresponding to the ground truth for the plurality of reference colors, satisfies a preset standard.
  • a plurality of reference colors are captured in the patch image of FIG. 12.
  • the reference color may correspond to the reference color area displayed on the patch described with reference to FIG. 4.
  • the first color parameter is extracted for each reference color captured in the patch image.
  • a first color parameter is extracted for each of the plurality of reference colors.
  • the description is based on an example defined by three detailed color parameters based on the Lab color coordinate system.
  • the second color parameter can be derived using a color parameter conversion means including a plurality of correction values.
  • the color parameter conversion means corresponds to a means of correcting colors that are photographed differently due to external conditions such as lighting, camera settings, camera performance, and the surrounding environment. In embodiments of the present invention, it may be implemented by a plurality of correction values, and if there is only one reference color, it may be the color difference value itself between the color information of the ground truth and the second color parameter.
  • for example, addition or subtraction values for L, a, and b may correspond to the correction values, or L, a, and b may be treated as a vector and the internal elements of a matrix that performs matrix operations on that vector may correspond to the correction values.
  • the computing system stores color parameters corresponding to the ground truth of each reference color.
  • the color parameter conversion means can be primarily determined by using a patch containing a preset reference color and setting the color parameter conversion means in a direction to minimize the color difference between the photographed value of the reference color and the true value.
  • the correction value may be determined to minimize the sum of color difference values derived by the first color difference algorithm for a plurality of reference colors.
  • the first color difference algorithm corresponds to an algorithm for which a correction value minimizing the sum of color difference values can be found by a linear algebraic method; typically, CIE76, in which the variables appear only in terms of the form (nth element of the first color parameter − nth element of the second color parameter), may correspond to this.
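As an illustrative sketch of such a linear algebraic solution (all names are ours, not the patent's), the 3×3 matrix T minimizing the sum of squared CIE76-style distances between the converted measurements T·p_i and the ground-truth colors g_i can be obtained row by row from the normal equations:

```python
def solve3(A, b):
    """Solve the 3x3 linear system A x = b by Gauss-Jordan elimination
    with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_initial_t(measured, truth):
    """Least-squares 3x3 correction matrix T minimizing
    sum_i ||T p_i - g_i||^2 over the reference colors (the linear,
    first-color-difference-algorithm step described in the text)."""
    # Normal equations: (sum_i p_i p_i^T) t_r = sum_i g_i[r] p_i, per row r.
    A = [[sum(p[i] * p[j] for p in measured) for j in range(3)]
         for i in range(3)]
    return [solve3(A, [sum(g[r] * p[i] for p, g in zip(measured, truth))
                       for i in range(3)])
            for r in range(3)]
```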
  • Figure 13 schematically shows the overall processes of the conversion means derivation step according to embodiments of the present invention.
  • a color parameter conversion means for color correction is determined using information about a reference color.
  • colors can be expressed in three-dimensional space, and the color difference corresponds to the distance between two points that refer to two colors in three-dimensional space.
  • the original color ( ) may correspond to the third color parameter, i.e., the color parameter corresponding to the ground truth for the reference color.
  • T may correspond to a color parameter conversion means.
  • This method can be solved when the color space and distance are well defined, for example, linearly.
  • for colors in color spaces such as sRGB or Lab (with the CIE76 distance), colors can be expressed as three-dimensional vectors in Euclidean space, and in this case the optimization problem for T can be solved using general linear algebra. In this case, additional processes such as tone mapping or gamut mapping may be added.
  • in step S1000, information on T corresponding to the color parameter conversion means, that is, the correction values, is determined using the first color difference algorithm.
  • in contrast, in a space such as CIEDE2000, the distance function corresponding to the color difference value cannot be expressed as a simple point-to-point distance in space, and is instead obtained with a complex function representing the distance between two points. In cases like this, it is difficult to solve using existing linear algebra methods.
  • in step S1100, after the T value is determined by the first color difference algorithm as described above, the T value is changed so that the error value decreases. This process is preferably performed based on partial differential information at the T value determined in the correction value determination step, which determines the direction of change in T that can reduce the error.
  • in step S1000, the first and third color parameters are mapped to a color space in which T can be obtained linearly, such as Lab (based on CIE76), and the initial value of T is generated by solving a linear equation.
  • in step S1100, as the value of T (each of the plurality of correction values constituting T) changes in the Lab space, change amount information of T that decreases the error in CIEDE2000 is found, and T is updated (T + ΔT).
  • Figure 14 schematically shows the operation of the color parameter conversion means according to embodiments of the present invention.
  • the color parameter conversion means is implemented as a matrix including a plurality of correction values, and the number of columns or rows of the matrix corresponds to the number of elements of the first color parameter.
  • T11 to T33 may correspond to correction values, and a matrix composed of these may correspond to a color parameter conversion means.
  • the second color parameter is derived by matrix multiplying the matrix with respect to the first color parameter
  • the fourth color parameter is derived by matrix multiplying the matrix with respect to the first color parameter.
  • the color parameter conversion means may be multiplied on the right side of the first color parameter as shown in FIG. 14, but may also be defined as multiplied on the left side of the first color parameter as described above.
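A minimal sketch of this matrix operation (here multiplying T on the left of the color parameter vector, one of the two conventions mentioned; names are illustrative):

```python
def apply_t(T, lab):
    """Derive the second (or fourth) color parameter by applying the
    3x3 conversion matrix T to a first color parameter (L, a, b)."""
    return tuple(sum(T[i][j] * lab[j] for j in range(3)) for i in range(3))

# The identity matrix leaves the color parameter unchanged.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(apply_t(identity, (50.0, 10.0, 20.0)))  # (50.0, 10.0, 20.0)
```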
  • FIG. 15 schematically shows the detailed process in the correction value determination step according to embodiments of the present invention.
  • the first color parameter defined as L, a, and b can be converted to L', a', and b' by a color parameter conversion means (e.g., matrix T).
  • the third color parameter corresponding to the ground truth for the reference color may be expressed as LGT, aGT, and bGT, and may be expressed as a point in space as shown in FIG. 10.
  • the spatial distance between the point corresponding to (L', a', b') and the point corresponding to (LGT, aGT, bGT) may correspond to an example of a color difference value derived by the first color difference algorithm, and the T value is found so that the sum of these distances over the plurality of reference colors is minimized.
  • the correction value determination step can operate in the same manner as above.
  • in the correction value change stage, a color difference algorithm closer to human perception, for example CIEDE2000, is used, so the color difference between two points in the L, a, b coordinate system no longer appears as a simple straight-line distance. Since the space of the coordinate system effectively changes, the T value is changed iteratively: the direction of change of T that reduces the sum of the color differences between the photographed values converted by T for the plurality of reference colors and the ground truth is found, and T is changed accordingly to find the optimal T.
  • Figure 16 schematically shows the detailed process of the correction value changing step according to embodiments of the present invention.
  • the first color parameter is derived for the reference color area captured from the patch image.
  • the distance between the color defined by the fourth color parameter and the color defined by the third color parameter is calculated.
  • the correction value of the color parameter conversion means is changed so that the sum of color difference values by the second color difference algorithm, which is closer to human perception than the first color difference algorithm, is reduced.
  • the correction value changing step defines the sum of the color difference values derived by the second color difference algorithm between the fourth color parameters and the third color parameters for the plurality of reference colors as an error function, and some or all of the correction values are changed based on change information including the first partial differential value of the error function with respect to the correction values.
  • the first partial differential value may correspond to a vector consisting of 9 values: the differential values of the error function with respect to T11, T12, T13, T21, T22, T23, T31, T32, and T33.
  • for example, if the error function can be reduced by increasing the T11 value, this increase in T11 may correspond to the change information.
  • similarly, the correction value changing step defines the sum of color difference values derived by the second color difference algorithm between the fourth color parameters and the third color parameters for the plurality of reference colors as an error function, and some or all of the correction values are changed based on change information including the second partial differential value of the error function with respect to the correction values.
  • the second partial differential value may correspond to a Hessian matrix; as described above, when the color parameter conversion means corresponds to a 3×3 matrix, the first partial differential value can be expressed as a 1×9 vector and the second partial differential value as a 9×9 matrix.
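The first partial differential value described here can be sketched with central finite differences. In the sketch below, the error function uses a plain Euclidean distance as a runnable stand-in for CIEDE2000 (the actual second color difference algorithm); all names are illustrative:

```python
import math

EPS = 1e-6

def error_fn(T, measured, truth):
    """Error function: sum of color differences between the T-converted
    measurements and the ground truth. A plain Euclidean distance stands
    in here for the non-linear CIEDE2000 distance of the text."""
    total = 0.0
    for p, g in zip(measured, truth):
        q = [sum(T[i][j] * p[j] for j in range(3)) for i in range(3)]
        total += math.dist(q, g)
    return total

def first_partial(T, measured, truth):
    """The 1x9 first partial differential value (dE/dT11, ..., dE/dT33),
    estimated by central finite differences."""
    grad = []
    for i in range(3):
        for j in range(3):
            T[i][j] += EPS
            hi = error_fn(T, measured, truth)
            T[i][j] -= 2 * EPS
            lo = error_fn(T, measured, truth)
            T[i][j] += EPS  # restore the original entry
            grad.append((hi - lo) / (2 * EPS))
    return grad
```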
  • FIG 17 schematically shows detailed steps of the correction value changing step according to embodiments of the present invention.
  • the correction value changing step may define the sum of the color difference values derived by the second color difference algorithm between the fourth color parameters and the third color parameters for the plurality of reference colors as an error function.
  • a change vector for the change direction of the entire set of correction values is determined based on the first partial differential value of the error function with respect to the correction values.
  • the first partial differential value determines the direction of change of T; for example, if the color parameter conversion means is defined in the form shown in FIG. 9 and (1,1,0,0,0,0,0,0,0) corresponds to the first partial differential value, the direction of change for the 9 correction values of T is determined such that only T11 and T12 are changed by the same amount and the rest are kept as-is.
  • a change vector can also be expressed as a matrix with the same dimension as T.
  • the second-order partial derivative determines the size of the change vector.
  • the matrix T can be treated as a point in a 9-dimensional coordinate system, and the error function can be calculated at each T. Since the first partial derivative value gives the direction that can reduce the error, and the second partial derivative value gives curvature information of the error function, the optimal moving distance along the change vector can be derived.
  • a scalar value a can be derived from the second partial derivative value, and a·ΔT1 corresponds to the final change amount information of T.
  • T + a·ΔT1 can become the new color parameter conversion means, and in this state the correction value change step (S300) can be repeated. As the repetition progresses, the change in the error function may decrease or fall below a certain value; at this time, S400 is performed and the color parameter conversion means is determined.
  • the first partial differential value and the second partial differential value can be calculated numerically by approximating the nth order Taylor series.
  • the error function may be defined as f(x), where x corresponds to a matrix or vector such as T described above, and f(x) corresponds to a differentiable function that outputs a scalar value.
  • the initial value x0 may be determined in the correction value determination step (S200) described above, and by updating x while deriving the first and second partial differential information starting from x0, the x that minimizes f(x) can be found.
  • the matrix used in place of the Hessian corresponding to the second partial derivative may correspond to an approximation of the Hessian that is updated at each step, and the gradient term may correspond to the first partial differential value (gradient value) at xk.
  • a line search in the pk direction can be used to find xk+1, corresponding to the next x value; this can be implemented by finding the value of r that minimizes f(xk + r·pk).
  • the optimal T can be found by performing the process of deriving the change value of T several times in the direction of reducing the value of the error function at the current T.
  • various known numerical methods, such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, can be used.
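A self-contained sketch of this iterative scheme, using steepest descent with a backtracking line search over the step size r (a simplification of BFGS, which would additionally maintain a Hessian approximation) and a Euclidean stand-in for the CIEDE2000 error; all names are illustrative:

```python
import math

def error_fn(T, measured, truth):
    """Sum of color differences between T-converted measurements and
    ground truth (Euclidean stand-in for the non-linear CIEDE2000)."""
    tot = 0.0
    for p, g in zip(measured, truth):
        q = [sum(T[i][j] * p[j] for j in range(3)) for i in range(3)]
        tot += math.dist(q, g)
    return tot

def grad(T, measured, truth, eps=1e-6):
    """Central finite-difference gradient of the error w.r.t. each T_ij."""
    g = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            T[i][j] += eps
            hi = error_fn(T, measured, truth)
            T[i][j] -= 2 * eps
            lo = error_fn(T, measured, truth)
            T[i][j] += eps
            g[i][j] = (hi - lo) / (2 * eps)
    return g

def optimize_t(T, measured, truth, iters=200):
    """Repeatedly move T in the descent direction p_k = -grad, choosing
    the step size r by backtracking (cf. minimizing f(x_k + r * p_k))."""
    for _ in range(iters):
        g = grad(T, measured, truth)
        base = error_fn(T, measured, truth)
        r = 1.0
        while r > 1e-12:
            cand = [[T[i][j] - r * g[i][j] for j in range(3)]
                    for i in range(3)]
            if error_fn(cand, measured, truth) < base:
                T = cand
                break
            r *= 0.5
        else:
            break  # no improving step found; treat as converged
    return T
```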
  • Figure 18 schematically shows the internal configuration of a computing device according to an embodiment of the present invention.
  • the computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600. At this time, the computing device 11000 may correspond to the computing device 1000 shown in FIG. 1.
  • the memory 11200 may include, for example, high-speed random access memory, magnetic disk, SRAM, DRAM, ROM, flash memory, or other non-volatile memory.
  • the memory 11200 may include software modules, instruction sets, or other various data necessary for the operation of the computing device 11000.
  • access to the memory 11200 from other components such as the processor 11100 or the peripheral device interface 11300 may be controlled by the processor 11100.
  • the peripheral interface 11300 may couple input and/or output peripherals of the computing device 11000 to the processor 11100 and the memory 11200.
  • the processor 11100 may execute a software module or set of instructions stored in the memory 11200 to perform various functions for the computing device 11000 and process data.
  • the input/output subsystem can couple various input/output peripherals to the peripheral interface 11300.
  • the input/output subsystem may include a controller for coupling peripheral devices such as a monitor, keyboard, mouse, printer, or, if necessary, a touch screen or sensor to the peripheral device interface 11300.
  • input/output peripherals may be coupled to the peripheral interface 11300 without going through the input/output subsystem.
  • Power circuit 11500 may supply power to all or some of the terminal's components.
  • the power circuit 11500 may include a power management system, one or more power sources such as batteries or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for power generation, management, and distribution.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • the communication circuit 11600 may include an RF circuit to transmit and receive RF signals, also known as electromagnetic signals, to enable communication with other computing devices.
  • FIG. 18 is only an example of the computing device 11000; the computing device 11000 may omit some components shown in FIG. 18, further include additional components not shown in FIG. 18, or have a configuration or arrangement that combines two or more components.
  • for example, a computing device for a communication terminal in a mobile environment may further include a touch screen or a sensor in addition to the components shown in FIG. 18, and the communication circuit 11600 may include circuits for various communication methods (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.), including RF communication.
  • Components that can be included in the computing device 11000 may be implemented as hardware, software, or a combination of both hardware and software, including an integrated circuit specialized for one or more signal processing or applications.
  • Methods according to embodiments of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded on a computer-readable medium.
  • the program according to this embodiment may be composed of a PC-based program or a mobile terminal-specific application.
  • the application to which the present invention is applied can be installed on the computing device 11000 through a file provided by a file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file according to a request from the computing device 11000.
  • devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications running on the operating system. Additionally, the processing device may access, store, manipulate, process, and generate data in response to the execution of software.
  • for convenience of explanation, a single processing device may be described as being used; however, those skilled in the art will understand that a processing device may include multiple processing elements and/or multiple types of processing elements.
  • a processing device may include a plurality of processors or one processor and one controller. Additionally, other processing configurations, such as parallel processors, are possible.
  • software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or may command the processing device independently or collectively.
  • software and/or data may be permanently or temporarily embodied in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, in order to be interpreted by, or to provide instructions or data to, a processing device.
  • Software may be distributed over networked computing devices and stored or executed in a distributed manner.
  • Software and data may be stored on one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc., singly or in combination.
  • Program instructions recorded on the medium may be specially designed and configured for the embodiment or may be known and available to those skilled in the art of computer software.
  • examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; and magneto-optical media such as floptical disks.
  • program instructions include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter, etc.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
  • the colorimetry method can be easily performed even with a personal mobile terminal such as a smartphone.
  • the effect of accurately deriving the color information of the face can be achieved by comparing the patch and the face in the same environment.
  • since shape information is derived for each of the face and the patch, it is possible to derive accurate reflectance information by compensating for differences in their characteristics and states in the image.
  • reflectance information that removes the influence of a shadow created by light irradiated on the shape can be derived from the image.
  • the reflectance information is derived by subtracting the shape information and light information that affect the reflectance information from the image, thereby achieving the effect of deriving more accurate reflectance information than when the reflectance information is derived directly from the image.
  • since the patch includes a plurality of reference colors, it can exhibit the effect of responding to various facial colors.
  • since the color parameter conversion means is derived based on the color information of each reference color (the reference color in the reflectance information of the patch) in the image and the ground truth, the effect of accurately deriving color difference information can be achieved.
  • in the model learning step, a learning image is input, and the derived learning shape information, learning light information, and learning reflectance information are combined and compared with the learning image, using the learning image itself as the target output value; this can have the effect of learning the detailed parameter values or filter information of the artificial neural network.
  • colorimetric methods, devices, and computer-readable media may be provided that reflect color difference information in a non-linear color space, are robust to external environments such as camera settings, performance, and lighting, and can more accurately implement the color difference perceived by the human eye.
  • non-linearly defined color differences can be approximated and optimized through repetitive calculations in the color space, thereby achieving the effect of simultaneously promoting computational efficiency and colorimetric accuracy.


Abstract

The present invention relates to a method and a device for measuring the color of skin in an image, and a computer-readable medium, and to a method and a device for measuring color in an image, and a computer-readable medium, the method and the device deriving shape information and light information from a captured image, subtracting the shape information and the light information from the image so as to derive reflectivity information, deriving a color parameter transform means on the basis of the difference between captured color information about a reference color range in patch reflectivity information about a patch and color information about the ground truth of a reference color, and applying the color parameter transform means to facial reflectivity information about the face so as to derive information about a colorimetry color that is an inherent color of the face, from which the influence of illumination is excluded.

Description

Colorimetric method, apparatus, and computer-readable medium in images
The present invention relates to a method, apparatus, and computer-readable medium for skin colorimetry in an image, in which shape information and light information are derived from a captured image, reflectance information is derived by subtracting the shape information and light information from the image, a color parameter conversion means is derived based on the difference between the photographed color information of a reference color area in the reflectance information of a patch and the ground-truth color information of the reference color, and the color parameter conversion means is applied to the reflectance information of a face to derive colorimetric color information, the unique color of the face excluding the influence of lighting.
As interest in beauty gradually increases, consumers continue to make efforts to obtain accurate information about their skin condition. An example of this is the trend toward personal color. Personal color is a color that harmonizes with an individual's body color to make the individual look lively and energetic. Suitable makeup methods and clothing colors differ depending on the personal color, so people who know their personal color can style themselves effectively according to color.
To this end, consumers visit skin medical facilities, skin care shops, personal color diagnosis companies, and the like, and check their skin condition with the help of professional skin analysis devices. Accordingly, if accurate color could be measured with a device that is easily used in daily life, its usability would be very high.
Meanwhile, user terminals equipped with cameras, such as smartphones, are becoming widely available, and the number of media that utilize them to convey color or visual information is increasing. Therefore, if the user's skin could be analyzed based on an image taken through the camera of a user terminal, without expensive professional equipment, users would be able to check the condition of their skin more easily. However, the colors captured by a camera are the result of lighting, the camera sensor, display correction, and the like, so it is difficult to say that they convey accurate colors.
한국공개특허 10-2018-0082172은 사용자단말기에서 측색을 수행할 수 있는 방법 및 장치에 대해서 개시하고 있다. 선행특허 1은 종래의 사용자 단말 기반 피부 분석 기술은 사용자의 피부가 촬영되는 공간에서의 조명 특성을 전혀 고려하지 못하기 때문에, 사용자의 피부를 정확히 분석할 수 없는 한계를 갖는다는 점에 주목하여 별도의 조명의 특성을 분석하여 측색, 혹은 피부분석을 수행하는 기술을 개시하고 있지만, 조명에 대한 특성만을 고려할 뿐, 카메라의 특성, 사람의 시각의 인지적 특성에 대해서는 고려하지 못한다는 문제점이 있다. Korean Patent Publication No. 10-2018-0082172 discloses a method and device for performing colorimetry on a user terminal. Noting that conventional user-terminal-based skin analysis technology cannot accurately analyze a user's skin because it takes no account of the lighting characteristics of the space in which the user's skin is photographed, this prior patent discloses a technique that performs colorimetry or skin analysis by separately analyzing the characteristics of the lighting; however, it considers only the characteristics of the lighting and fails to take into account the characteristics of the camera and the cognitive characteristics of human vision.
이에 따라, 카메라설정, 성능, 조명 등의 외부환경에 의하여 촬영한 이미지 정보가 변경될 수 있기 때문에, 측색을 수행함에 있어서 외부환경에 기인한 측색 오차를 최소화하기 위하여 측색기준색상이 인쇄된 컬러 패치 등을 대상과 함께 촬영을 하고, 촬영 대상과 측색기준색상(이에 대한 정확한 색상값인 그라운드투루스를 알고 있음)의 색차에 기반하여 측색을 수행하는 방법에 대한 개발이 필요한 상황이다. Accordingly, since captured image information can change with external conditions such as camera settings, camera performance, and lighting, there is a need to develop a method that, in order to minimize colorimetric errors caused by the external environment, photographs the target together with a color patch on which colorimetric reference colors are printed, and performs colorimetry based on the color difference between the photographed target and the reference colors, whose exact color values (the ground truth) are known.
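The patch-based correction idea described above can be sketched in a few lines. The following is a minimal illustration only, not the claimed method: it assumes linear RGB values, purely illustrative numbers, and a simple 3×3 least-squares fit (the matrix form anticipates the color parameter conversion means described later).

```python
import numpy as np

# Hypothetical measured patch colours (rows = reference colour areas) and
# their known ground-truth values; all numbers are illustrative only.
measured = np.array([[0.82, 0.61, 0.48],
                     [0.35, 0.30, 0.28],
                     [0.70, 0.70, 0.70],
                     [0.20, 0.45, 0.60]])
ground_truth = np.array([[0.85, 0.58, 0.45],
                         [0.33, 0.32, 0.30],
                         [0.72, 0.72, 0.72],
                         [0.18, 0.47, 0.63]])

# Least-squares 3x3 correction matrix M with measured @ M ~= ground_truth.
M, *_ = np.linalg.lstsq(measured, ground_truth, rcond=None)

# The same correction is then applied to the colour of the target that was
# photographed alongside the patch (here, a hypothetical skin colour).
skin_measured = np.array([0.75, 0.55, 0.44])
skin_corrected = skin_measured @ M
```

Because the identity matrix is one admissible choice of M, the fitted matrix can never map the patches farther from their ground truth (in the least-squares sense) than the uncorrected measurement does.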
본 발명은 이미지에서의 피부 측색 방법, 장치, 및 컴퓨터-판독가능 매체 에 관한 것으로서, 촬영된 이미지로부터 형상정보 및 빛정보를 도출하여 상기 이미지에서 형상정보 및 빛정보를 감산적용하여 반사율정보를 도출하고, 패치에 대한 상기 패치의 반사율정보에서의 기준색상 영역의 촬영된 색상정보와 기준색상의 그라운드트루스의 색상정보 사이의 차이에 기초하여, 색상파라미터변환수단을 도출하고, 상기 색상파라미터변환수단을 얼굴에 대한 얼굴의 반사율정보에 적용하여 조명에 의한 영향을 배제한 얼굴 고유의 색상인 측색색상정보를 도출하는, 이미지에서의 측색 방법, 장치 및 컴퓨터-판독가능 매체에 관한 것이다. The present invention relates to a method, device, and computer-readable medium for skin colorimetry in an image, in which shape information and light information are derived from a captured image and subtractively applied to the image to derive reflectance information; a color parameter conversion means is derived based on the difference between the photographed color information of the reference color areas in the reflectance information of a patch and the ground-truth color information of those reference colors; and the color parameter conversion means is applied to the reflectance information of a face to derive colorimetric color information, i.e., the intrinsic color of the face excluding the influence of lighting.
상기와 같은 과제를 해결하기 위하여, 본 발명의 일 실시예는, 1 이상의 프로세서 및 1 이상의 메모리를 포함하는 컴퓨팅시스템에서 수행되는 측색방법으로서, 원본이미지로부터 얼굴영역에 대한 얼굴이미지 및 패치영역에 대한 패치이미지를 도출하는 이미지도출단계; 상기 얼굴이미지로부터 1 이상의 인공신경망을 포함하는 추론모델에 입력하여, 얼굴의 형상정보, 및 얼굴이미지에서의 빛의 세기 및 방향을 포함하는 빛정보를 도출하고, 얼굴이미지에 대해 상기 얼굴의 형상정보 및 상기 빛정보를 감산적용하여, 빛이 조사된 얼굴의 형상에 의하여 색상이 변화되는 영향을 최소화한 얼굴의 반사율정보를 도출하는 제1감산적용단계; 1 이상의 기준색상 영역을 포함하는 패치이미지로부터 1 이상의 인공신경망을 포함하는 추론모델에 입력하여, 패치의 형상정보를 도출하고, 패치이미지에 대해 상기 패치의 형상정보 및 상기 빛정보를 감산적용하여, 빛이 조사된 패치의 형상에 의하여 색상이 변화되는 영향을 최소화한 패치의 반사율정보를 도출하는 제2감산적용단계; 및 상기 패치의 반사율정보에서의 기준색상 영역의 색상정보와 기준색상의 그라운드트루스의 색상정보 사이의 차이에 기초하여, 색상파라미터변환수단을 도출하고, 상기 얼굴의 반사율정보에서의 측정부위의 색상정보에 상기 색상파라미터변환수단을 적용하여, 상기 측정부위의 측색색상정보를 도출하는 얼굴색상도출단계;를 포함하는, 측색 방법을 제공한다. In order to solve the above problems, one embodiment of the present invention provides a colorimetry method performed in a computing system comprising one or more processors and one or more memories, the method comprising: an image derivation step of deriving, from an original image, a face image for a face area and a patch image for a patch area; a first subtraction application step of inputting the face image into an inference model comprising one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtractively applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which the color-altering influence of the shape of the illuminated face is minimized; a second subtraction application step of inputting the patch image, which includes one or more reference color areas, into an inference model comprising one or more artificial neural networks to derive shape information of the patch, and subtractively applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which the color-altering influence of the shape of the illuminated patch is minimized; and a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the ground-truth color information of the reference colors, and applying the color parameter conversion means to the color information of a measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
본 발명의 일 실시예에서는, 상기 형상정보는, 이미지가 포함하고 있는 대상에 대한 이미지 상의 2D 정보에서 도출한 실제 형상에 따른 3D정보를 포함하고, 상기 빛정보는, 이미지가 포함하고 있는 대상에 조사되는 빛의 세기 및 방향을 포함하고, 상기 반사율정보는, 이미지가 포함하고 있는 대상의 형상에 빛이 조사되어 상기 빛의 세기 및 방향에 따라 형상에 생기는 그림자의 영향을 제거한 픽셀별 색상정보를 포함할 수 있다. In one embodiment of the present invention, the shape information includes 3D information on the actual shape of the object contained in the image, derived from the 2D information in the image; the light information includes the intensity and direction of the light illuminating the object contained in the image; and the reflectance information may include per-pixel color information from which the influence of the shadows, cast on the object's shape according to the intensity and direction of the illuminating light, has been removed.
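The relationship among the three kinds of information can be illustrated with a toy Lambertian forward model: shading computed from the shape (surface normals) and the light darkens the intrinsic color, and dividing the shading back out recovers the reflectance. The array shapes, values, and the Lambertian assumption below are illustrative only, not part of the claimed method.

```python
import numpy as np

# Toy 2x2 "image": per-pixel surface normals (unit vectors) and a single
# directional light; names and shapes are illustrative assumptions.
normals = np.array([[[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]],
                    [[0.0, 0.6, 0.8], [0.0, 0.0, 1.0]]])
light_dir = np.array([0.0, 0.0, 1.0])          # light direction/intensity
albedo = np.full((2, 2, 3), [0.8, 0.6, 0.5])   # true per-pixel colour

# Forward model: Lambertian shading darkens pixels tilted away from the light.
shading = np.clip(normals @ light_dir, 0, None)[..., None]
image = albedo * shading

# "Subtractive application": dividing the estimated shading back out of the
# image recovers the reflectance, i.e. the shape/light-independent colour.
reflectance = image / np.clip(shading, 1e-6, None)
```

In the tilted pixels the captured image is darker than the true color, yet the recovered reflectance matches the albedo everywhere the shading is non-zero.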
본 발명의 일 실시예에서는, 상기 패치는, 복수의 기준색상 영역을 포함하고, 각각의 기준색상 영역은 같은 형태 및 넓이를 가지고, 서로 구분되게 배치될 수 있다. In one embodiment of the present invention, the patch may include a plurality of reference color areas, each having the same shape and area and arranged so as to be distinguishable from one another.
본 발명의 일 실시예에서는, 상기 추론모델은 공통모델, 표면법선벡터모델, 빛정보추론모델 및 반사율모델을 포함하고, 상기 공통모델은, 1 이상의 인공신경망을 포함하고, 입력된 이미지로부터 상기 표면법선벡터모델, 상기 빛정보추론모델, 및 상기 반사율모델에 공통적으로 입력되는 특징정보를 도출하고, 상기 표면법선벡터모델은, 1 이상의 인공신경망을 포함하고, 상기 특징정보로부터 2D정보인 상기 이미지에서 실제 형상을 담은 3D구조 정보인 상기 형상정보를 도출하고, 상기 빛정보추론모델은, 1 이상의 인공신경망을 포함하고, 상기 특징정보로부터 상기 이미지에서 빛의 세기 및 빛의 방향을 포함하는 상기 빛정보를 도출하고, 상기 반사율모델은, 1 이상의 인공신경망을 포함하고, 상기 특징정보로부터 픽셀별 색상정보인 반사율정보를 도출할 수 있다. In one embodiment of the present invention, the inference model includes a common model, a surface normal vector model, a light information inference model, and a reflectance model. The common model includes one or more artificial neural networks and derives, from the input image, feature information that is commonly input to the surface normal vector model, the light information inference model, and the reflectance model. The surface normal vector model includes one or more artificial neural networks and derives from the feature information the shape information, i.e., 3D structural information on the actual shape underlying the 2D image. The light information inference model includes one or more artificial neural networks and derives from the feature information the light information, including the intensity and direction of light in the image. The reflectance model includes one or more artificial neural networks and can derive from the feature information the reflectance information, i.e., per-pixel color information.
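The four-model arrangement can be sketched as a shared backbone feeding three heads. The toy dense layers below merely stand in for real neural networks, and every layer size is an illustrative assumption, not a parameter of the invention.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """A toy fully connected ReLU layer standing in for a real network."""
    W = rng.normal(size=(n_in, n_out)) * 0.1
    return lambda x: np.maximum(x @ W, 0)

# Common model: shared feature extractor whose output ("feature
# information") is fed to all three downstream models.
common = dense(12, 16)

# Heads: surface normal vector model, light information inference model
# (intensity + 3D direction), and reflectance model. Sizes are illustrative.
normal_head = dense(16, 3)
light_head = dense(16, 4)
reflectance_head = dense(16, 3)

x = rng.normal(size=(1, 12))          # stand-in for an input image
features = common(x)                  # common feature information
shape_info = normal_head(features)
light_info = light_head(features)
reflectance_info = reflectance_head(features)
```

The design choice the text describes — one shared encoder, three task-specific decoders — lets all three quantities be inferred from a single forward pass over the common features.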
본 발명의 일 실시예에서는, 상기 제1감산적용단계는, 얼굴이미지를 1 이상의 인공신경망을 포함하는 공통모델에 입력하여 얼굴특징정보를 도출하는 얼굴특징정보도출단계; 상기 얼굴특징정보를 1 이상의 인공신경망을 포함하는 표면법선벡터모델에 입력하여 얼굴의 형상정보를 도출하는 얼굴형상도출단계; 상기 얼굴특징정보를 1 이상의 인공신경망을 포함하는 빛정보추론모델에 입력하여 얼굴이미지에서의 빛의 세기 및 방향을 포함하는 빛정보를 도출하는 빛정보도출단계; 및 상기 얼굴이미지에 대하여 상기 얼굴의 형상정보 및 빛정보를 적용하여 얼굴에 대한 색상만이 반영된 얼굴의 반사율정보를 도출하는 얼굴반사율도출단계;를 포함하고, 상기 제2감산적용단계는, 패치이미지를 1 이상의 인공신경망을 포함하는 공통모델에 입력하여 패치특징정보를 도출하는 패치특징정보도출단계; 상기 패치특징정보를 1 이상의 인공신경망을 포함하는 표면법선벡터모델에 입력하여 패치의 형상정보를 도출하는 패치형상도출단계; 상기 패치이미지에 대하여 상기 패치의 형상정보 및 상기 빛정보를 적용하여 패치에 대한 색상만이 반영된 패치의 반사율정보를 도출하는 패치반사율도출단계;를 포함할 수 있다. In one embodiment of the present invention, the first subtraction application step includes: a facial feature information derivation step of inputting the face image into a common model including one or more artificial neural networks to derive facial feature information; a face shape derivation step of inputting the facial feature information into a surface normal vector model including one or more artificial neural networks to derive shape information of the face; a light information derivation step of inputting the facial feature information into a light information inference model including one or more artificial neural networks to derive light information including the intensity and direction of light in the face image; and a face reflectance derivation step of applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which only the color of the face is reflected. The second subtraction application step may include: a patch feature information derivation step of inputting the patch image into a common model including one or more artificial neural networks to derive patch feature information; a patch shape derivation step of inputting the patch feature information into a surface normal vector model including one or more artificial neural networks to derive shape information of the patch; and a patch reflectance derivation step of applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which only the color of the patch is reflected.
본 발명의 일 실시예에서는, 상기 측색 방법은 모델학습단계를 더 포함하고, 상기 모델학습단계는, 학습이미지를 상기 공통모델에 입력하고, 공통모델의 학습특징정보를 상기 표면법선벡터모델에 입력하여 학습형상정보를 도출하는 학습형상정보도출단계; 상기 학습특징정보를 상기 빛정보추론모델에 입력하여 학습빛정보를 도출하는 학습빛정보도출단계; 상기 학습특징정보부터 반사율정보를 도출하는 반사율모델에 입력하여 학습반사율정보를 도출하는 학습반사율정보도출단계; 및 상기 학습형상정보, 학습빛정보, 및 학습반사율정보를 결합하여 예측학습이미지를 도출하는 예측학습이미지도출단계;를 수행하되, 상기 예측학습이미지와 상기 학습이미지의 차이가 감소 혹은 최소화하도록 상기 공통모델, 상기 표면법선벡터모델, 상기 빛정보추론모델, 및 상기 반사율모델의 세부파라미터값 혹은 필터정보를 학습할 수 있다. In one embodiment of the present invention, the colorimetry method further includes a model learning step, in which the following are performed: a learning shape information derivation step of inputting a learning image into the common model and inputting the learning feature information from the common model into the surface normal vector model to derive learning shape information; a learning light information derivation step of inputting the learning feature information into the light information inference model to derive learning light information; a learning reflectance information derivation step of inputting the learning feature information into the reflectance model, which derives reflectance information, to derive learning reflectance information; and a predicted learning image derivation step of combining the learning shape information, the learning light information, and the learning reflectance information to derive a predicted learning image. The detailed parameter values or filter information of the common model, the surface normal vector model, the light information inference model, and the reflectance model can be learned such that the difference between the predicted learning image and the learning image is reduced or minimized.
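The self-supervised objective of the model learning step — recombine the three predictions into a predicted learning image and compare it with the learning image — can be sketched as follows. A Lambertian recombination and single-pixel arrays are assumed here purely for illustration; a real implementation would backpropagate this loss through the four models.

```python
import numpy as np

def reconstruct(reflectance, normals, light_dir):
    """Recombine shape, light, and reflectance into a predicted image."""
    shading = np.clip(normals @ light_dir, 0, None)[..., None]
    return reflectance * shading

# Toy predictions versus the learning image they should explain.
normals = np.array([[[0.0, 0.0, 1.0]]])        # learning shape information
light = np.array([0.0, 0.0, 1.0])              # learning light information
reflectance = np.array([[[0.5, 0.4, 0.3]]])    # learning reflectance information
target = np.array([[[0.6, 0.4, 0.3]]])         # the learning image itself

# Loss: squared difference between the predicted and actual learning image;
# minimising it trains all models without any hand-labelled ground truth.
pred = reconstruct(reflectance, normals, light)
loss = float(np.mean((pred - target) ** 2))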
본 발명의 일 실시예에서는, 상기 얼굴색상도출단계는 변환수단도출단계를 포함하고, 상기 변환수단도출단계는, 패치의 반사율정보에서의 복수의 기준색상 영역에 각각에 대한 기설정된 색공간에서의 각각의 제1색상파라미터 정보를 추출하는 색상파라미터추출단계; 복수의 상기 제1색상파라미터 정보에 대하여 복수의 보정수치를 포함하는 색상파라미터변환수단을 적용하여 도출되는 제2색상파라미터와 상기 복수의 기준색상에 대한 그라운드트루스에 해당하는 기설정된 제3색상파라미터 사이의 제1색차이 알고리즘에 의하여 도출되는 색차이 값이 기설정된 기준을 만족하도록 상기 보정수치를 결정하는 보정수치결정단계; 및 복수의 상기 제1색상파라미터 정보에 대하여 복수의 보정수치를 포함하는 색상파라미터변환수단을 적용하여 도출되는 제4색상파라미터와 상기 복수의 기준색상에 대한 그라운드트루스에 해당하는 기설정된 제3색상파라미터 사이의 제2색차이 알고리즘에 의하여 도출되는 색차이 값이 감소하는 방향으로 상기 보정수치 중 일부 혹은 전체를 변경하는 보정수치변경단계;를 포함할 수 있다. In one embodiment of the present invention, the face color derivation step includes a conversion means derivation step, which may include: a color parameter extraction step of extracting, for each of the plurality of reference color areas in the reflectance information of the patch, respective first color parameter information in a preset color space; a correction value determination step of determining the correction values such that the color difference value, derived by a first color difference algorithm between second color parameters, obtained by applying a color parameter conversion means including a plurality of correction values to the plurality of first color parameter information, and preset third color parameters corresponding to the ground truth of the plurality of reference colors, satisfies a preset criterion; and a correction value changing step of changing some or all of the correction values in a direction that reduces the color difference value derived by a second color difference algorithm between fourth color parameters, obtained by applying the color parameter conversion means including the plurality of correction values to the plurality of first color parameter information, and the preset third color parameters corresponding to the ground truth of the plurality of reference colors.
본 발명의 일 실시예에서는, 상기 제1색차이 알고리즘에 의한 색차이 값은 색상파라미터에 대하여 선형적이고, 상기 제2색차이 알고리즘에 의한 색차이 값은 색상파라미터에 대하여 비선형적으로 상기 제1색차이 알고리즘과 상기 제2색차이 알고리즘은 서로 상이하고, 상기 색상파라미터변환수단은 복수의 보정수치를 포함하는 행렬로 구현이 되고, 상기 행렬의 열 혹은 행의 수는 상기 제1색상파라미터의 요소의 수에 상응하고, 상기 제2색상파라미터는 상기 제1색상파라미터에 대하여 상기 행렬을 행렬곱하여 도출되고, 상기 제4색상파라미터는 상기 제1색상파라미터에 대하여 상기 행렬을 행렬곱을 하여 도출될 수 있다. In one embodiment of the present invention, the color difference value produced by the first color difference algorithm is linear with respect to the color parameters, while the color difference value produced by the second color difference algorithm is non-linear with respect to the color parameters, so that the first and second color difference algorithms differ from each other. The color parameter conversion means is implemented as a matrix including a plurality of correction values; the number of columns or rows of the matrix corresponds to the number of elements of the first color parameter; the second color parameter may be derived by matrix-multiplying the first color parameter by the matrix, and the fourth color parameter may likewise be derived by matrix-multiplying the first color parameter by the matrix.
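The two-stage derivation just described — a closed-form fit under a linear (CIE76-style, Euclidean) color difference, followed by iterative refinement under a non-linear difference — can be sketched as below. CIE94 with kL = kC = kH = 1 is used as the non-linear metric purely for brevity (the patent contemplates metrics such as CIEDE2000), the refinement is a naive coordinate descent, and all color values are illustrative Lab-like numbers.

```python
import numpy as np

# Illustrative patch colours in a Lab-like colour space (rows = patches).
measured = np.array([[62.0, 12.0, 14.0],
                     [48.0, 22.0, 9.0],
                     [71.0, 6.0, 19.0],
                     [54.0, 16.0, 24.0]])
truth = np.array([[60.0, 10.0, 12.0],
                  [47.0, 20.0, 8.0],
                  [70.0, 5.0, 18.0],
                  [55.0, 15.0, 25.0]])

def de94(a, b):
    """CIE94-style non-linear colour difference (kL = kC = kH = 1)."""
    dL = a[:, 0] - b[:, 0]
    c1, c2 = np.hypot(a[:, 1], a[:, 2]), np.hypot(b[:, 1], b[:, 2])
    dC = c1 - c2
    dab2 = (a[:, 1] - b[:, 1]) ** 2 + (a[:, 2] - b[:, 2]) ** 2
    dH2 = np.clip(dab2 - dC ** 2, 0.0, None)
    return np.sqrt(dL ** 2
                   + (dC / (1 + 0.045 * c1)) ** 2
                   + dH2 / (1 + 0.015 * c1) ** 2)

# Stage 1 (linear algorithm): least squares is exact for a Euclidean
# (CIE76-style) difference, so the correction matrix has a closed form.
M, *_ = np.linalg.lstsq(measured, truth, rcond=None)
M_init = M.copy()

# Stage 2 (non-linear algorithm): coordinate descent that keeps any
# per-entry nudge which lowers the total CIE94-style difference.
step = 1e-3
for _ in range(50):
    for i in range(3):
        for j in range(3):
            base = de94(measured @ M, truth).sum()
            for delta in (step, -step):
                M[i, j] += delta
                trial = de94(measured @ M, truth).sum()
                if trial < base:
                    base = trial
                else:
                    M[i, j] -= delta
```

Since every accepted move strictly reduces the non-linear difference, the refined matrix is never worse under that metric than the linear initialization it started from.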
상기와 같은 과제를 해결하기 위하여 본 발명의 일 실시예는, 1 이상의 프로세서 및 1 이상의 메모리를 포함하는 컴퓨팅시스템에서 구현되는 측색 장치로서, 상기 측색 장치는, 원본이미지로부터 얼굴영역에 대한 얼굴이미지 및 패치영역에 대한 패치이미지를 도출하는 이미지도출단계; 상기 얼굴이미지로부터 1 이상의 인공신경망을 포함하는 추론모델에 입력하여, 얼굴의 형상정보, 및 얼굴이미지에서의 빛의 세기 및 방향을 포함하는 빛정보를 도출하고, 얼굴이미지에 대해 상기 얼굴의 형상정보 및 상기 빛정보를 감산적용하여, 빛이 조사된 얼굴의 형상에 의하여 색상이 변화되는 영향을 최소화한 얼굴의 반사율정보를 도출하는 제1감산적용단계; 1 이상의 기준색상 영역을 포함하는 패치이미지로부터 1 이상의 인공신경망을 포함하는 추론모델에 입력하여, 패치의 형상정보를 도출하고, 패치이미지에 대해 상기 패치의 형상정보 및 상기 빛정보를 감산적용하여, 빛이 조사된 패치의 형상에 의하여 색상이 변화되는 영향을 최소화한 패치의 반사율정보를 도출하는 제2감산적용단계; 및 상기 패치의 반사율정보에서의 기준색상 영역의 색상정보와 기준색상의 그라운드트루스의 색상정보 사이의 차이에 기초하여, 색상파라미터변환수단을 도출하고, 상기 얼굴의 반사율정보에서의 측정부위의 색상정보에 상기 색상파라미터변환수단을 적용하여, 상기 측정부위의 측색색상정보를 도출하는 얼굴색상도출단계;를 수행하는, 측색 장치를 제공한다. In order to solve the above problems, one embodiment of the present invention provides a colorimetry device implemented in a computing system comprising one or more processors and one or more memories, the device performing: an image derivation step of deriving, from an original image, a face image for a face area and a patch image for a patch area; a first subtraction application step of inputting the face image into an inference model comprising one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtractively applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which the color-altering influence of the shape of the illuminated face is minimized; a second subtraction application step of inputting the patch image, which includes one or more reference color areas, into an inference model comprising one or more artificial neural networks to derive shape information of the patch, and subtractively applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which the color-altering influence of the shape of the illuminated patch is minimized; and a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the ground-truth color information of the reference colors, and applying the color parameter conversion means to the color information of a measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
상기와 같은 과제를 해결하기 위하여 본 발명의 일 실시예는, 하나 이상의 프로세서에 의해 실행되는 복수의 명령들을 포함하는, 컴퓨터-판독가능 매체에 저장된 컴퓨터 프로그램으로서, 상기 컴퓨터 프로그램은, 원본이미지로부터 얼굴영역에 대한 얼굴이미지 및 패치영역에 대한 패치이미지를 도출하는 이미지도출단계; 상기 얼굴이미지로부터 1 이상의 인공신경망을 포함하는 추론모델에 입력하여, 얼굴의 형상정보, 및 얼굴이미지에서의 빛의 세기 및 방향을 포함하는 빛정보를 도출하고, 얼굴이미지에 대해 상기 얼굴의 형상정보 및 상기 빛정보를 감산적용하여, 빛이 조사된 얼굴의 형상에 의하여 색상이 변화되는 영향을 최소화한 얼굴의 반사율정보를 도출하는 제1감산적용단계; 1 이상의 기준색상 영역을 포함하는 패치이미지로부터 1 이상의 인공신경망을 포함하는 추론모델에 입력하여, 패치의 형상정보를 도출하고, 패치이미지에 대해 상기 패치의 형상정보 및 상기 빛정보를 감산적용하여, 빛이 조사된 패치의 형상에 의하여 색상이 변화되는 영향을 최소화한 패치의 반사율정보를 도출하는 제2감산적용단계; 및 상기 패치의 반사율정보에서의 기준색상 영역의 색상정보와 기준색상의 그라운드트루스의 색상정보 사이의 차이에 기초하여, 색상파라미터변환수단을 도출하고, 상기 얼굴의 반사율정보에서의 측정부위의 색상정보에 상기 색상파라미터변환수단을 적용하여, 상기 측정부위의 측색색상정보를 도출하는 얼굴색상도출단계;를 포함하는, 컴퓨터 프로그램을 제공한다. In order to solve the above problems, one embodiment of the present invention provides a computer program stored on a computer-readable medium and comprising a plurality of instructions executed by one or more processors, the computer program comprising: an image derivation step of deriving, from an original image, a face image for a face area and a patch image for a patch area; a first subtraction application step of inputting the face image into an inference model comprising one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtractively applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which the color-altering influence of the shape of the illuminated face is minimized; a second subtraction application step of inputting the patch image, which includes one or more reference color areas, into an inference model comprising one or more artificial neural networks to derive shape information of the patch, and subtractively applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which the color-altering influence of the shape of the illuminated patch is minimized; and a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the ground-truth color information of the reference colors, and applying the color parameter conversion means to the color information of a measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
본 발명의 일 실시예에 따르면, 측색 방법은 스마트폰 등의 개인 휴대단말로도 용이하게 수행할 수 있는 효과를 발휘할 수 있다.According to one embodiment of the present invention, the colorimetry method can be easily performed even with a personal mobile terminal such as a smartphone.
본 발명의 일 실시예에 따르면, 패치에 포함되어 있는 기준색상과 얼굴을 함께 촬영한 이미지에 대한 측색을 수행함으로서, 패치 및 얼굴을 동일한 환경에서 비교하여 얼굴의 색상정보를 도출할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, by performing colorimetry on an image in which the face is photographed together with the reference colors included in the patch, the patch and the face can be compared under identical conditions, so that the color information of the face can be derived.
본 발명의 일 실시예에 따르면, 얼굴 및 패치 각각에 대해 형상정보를 도출하므로, 각각의 특성 및 이미지상의 상태의 차이를 보완하여 정확한 반사율정보를 도출할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, since shape information is derived separately for the face and the patch, differences in their characteristics and in their states in the image can be compensated for, allowing accurate reflectance information to be derived.
본 발명의 일 실시예에 따르면, 획득한 이미지에서 형상정보 및 빛정보를 도출함으로서, 상기 형상에 조사된 빛에 의해 만들어진 그림자의 영향을 제거한 반사율정보를 상기 이미지에서 도출할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, by deriving shape information and light information from an acquired image, reflectance information from which the influence of the shadows cast by the light illuminating the shape has been removed can be derived from the image.
본 발명의 일 실시예에 따르면, 이미지에서 반사율정보에 영향을 주는 형상정보 및 빛정보를 감산하여 반사율정보를 도출하므로, 이미지에서 반사율정보를 바로 도출하였을 때보다 더 정확한 반사율정보를 도출하는 효과를 발휘할 수 있다. According to one embodiment of the present invention, since the reflectance information is derived by subtracting from the image the shape information and light information that affect it, more accurate reflectance information can be derived than when the reflectance information is obtained directly from the image.
본 발명의 일 실시예에 따르면, 패치는 복수의 기준색상을 포함하므로, 다양한 얼굴 색상에 대응할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, since the patch includes a plurality of reference colors, a wide variety of facial colors can be accommodated.
본 발명의 일 실시예에 따르면, 패치의 각각의 기준색상의 그라운드트루스를 알고 있으므로, 이미지상의 각각의 기준색상(패치의 반사율정보에서의 기준색상)의 색정보와 상기 그라운드트루스를 기초로 색상파라미터변환수단을 도출하고, 색차이정보 도출할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, since the ground truth of each reference color of the patch is known, the color parameter conversion means can be derived, and color difference information can be obtained, based on the ground truth and the color information of each reference color in the image (i.e., the reference colors in the reflectance information of the patch).
본 발명의 일 실시예에 따르면, 모델학습단계에서 학습이미지를 입력하고, 도출된 학습형상정보, 학습빛정보 및 학습반사율정보를 결합하여 상기 학습이미지와 비교하므로, 명확한 출력값(학습이미지)을 도출할 수 있도록 인공신경망의 세부파라미터값 혹은 필터정보를 학습할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, in the model learning step a learning image is input, and the derived learning shape information, learning light information, and learning reflectance information are combined and compared with the learning image, so that the detailed parameter values or filter information of the artificial neural networks can be trained to reproduce a well-defined output (the learning image).
본 발명의 실시예들에 따르면, 카메라설정, 성능, 조명 등의 외부환경에 강인한 특성을 가지고, 인간의 눈이 인지하는 색차를 더욱 정확하게 구현할 수 있는 비선형적 색공간에서의 색차정보가 반영되는 측색 방법, 장치, 및 컴퓨터-판독가능 매체를 제공할 수 있는 효과를 발휘할 수 있다. According to embodiments of the present invention, a colorimetry method, device, and computer-readable medium can be provided that are robust to external conditions such as camera settings, performance, and lighting, and that reflect color difference information in a non-linear color space capable of reproducing more accurately the color differences perceived by the human eye.
본 발명의 일 실시예들에 따르면, CIEDE2000 등의 사람의 눈의 실제적 인지에 더욱 근접하는 비선형적 색차정보를 반영하여 측색을 수행할 수 있는 효과를 발휘할 수 있다. According to embodiments of the present invention, colorimetry can be performed by reflecting non-linear color difference information, such as CIEDE2000, that more closely matches the actual perception of the human eye.
본 발명의 일 실시예들에 따르면, 스마트폰 등의 개인 휴대단말로도 용이하게 측색을 수행할 수 있는 효과를 발휘할 수 있다. According to one embodiment of the present invention, it is possible to easily perform colorimetry using a personal mobile terminal such as a smartphone.
본 발명의 일 실시예들에 따르면, 비선형적으로 정의된 색차를 색공간에서 반복적인 계산을 통해 근사하고 최적화가 가능하게 되어, 연산의 효율성 및 측색의 정확도를 동시에 도모할 수 있는 효과를 발휘할 수 있다. According to embodiments of the present invention, non-linearly defined color differences can be approximated and optimized through iterative calculations in the color space, simultaneously achieving computational efficiency and colorimetric accuracy.
도 1은 본 발명의 일 실시예에 따른 측색 방법을 개략적으로 도시한다.Figure 1 schematically shows a colorimetry method according to an embodiment of the present invention.
도 2는 본 발명의 실시예들에 따른 측색 방법이 수행되는 컴퓨팅시스템의 환경을 개략적으로 도시한다.Figure 2 schematically shows the environment of a computing system in which a colorimetric method according to embodiments of the present invention is performed.
도 3는 본 발명의 일 실시예에 따른 형상정보, 빛정보 및 반사율정보를 개략적으로 도시한다.Figure 3 schematically shows shape information, light information, and reflectance information according to an embodiment of the present invention.
도 4은 본 발명의 일 실시예에 따른 패치를 개략적으로 도시한다.Figure 4 schematically shows a patch according to an embodiment of the present invention.
도 5는 본 발명의 일 실시예에 따른 복수의 추론모델을 개략적으로 도시한다.Figure 5 schematically shows a plurality of inference models according to an embodiment of the present invention.
도 6는 본 발명의 일 실시예에 따른 제1감산적용단계 및 제2감산적용단계의 세부단계를 개략적으로 도시한다.Figure 6 schematically shows detailed steps of the first subtraction application step and the second subtraction application step according to an embodiment of the present invention.
도 7는 본 발명의 일 실시예에 따른 모델학습단계를 개략적으로 도시한다.Figure 7 schematically shows the model learning step according to an embodiment of the present invention.
도 8은 CIE76의 색공간에서 정의되는 색차영역에 대한 일 예를 개략적으로 도시한다.Figure 8 schematically shows an example of a color difference region defined in the color space of CIE76.
도 9는 CIE76, CIE94, 및 CIE2000의 색공간에서 정의되는 색차영역에 대한 일 예를 개략적으로 도시한다.Figure 9 schematically shows an example of a color difference region defined in the color spaces of CIE76, CIE94, and CIE2000.
도 10은 선형적으로 정의될 수 있는 색차이값의 도출과정을 개략적으로 도시한다.Figure 10 schematically shows the process of deriving a color difference value that can be linearly defined.
도 11은 본 발명의 실시예들에 따른 변환수단도출단계의 세부 단계들 및 측색장치의 내부 구성요소들을 개략적으로 도시한다.Figure 11 schematically shows the detailed steps of the conversion means derivation step and the internal components of the colorimetric device according to embodiments of the present invention.
도 12는 본 발명의 실시예들에 따른 보정수치결정단계의 세부 과정을 개략적으로 도시한다.Figure 12 schematically shows the detailed process of the correction value determination step according to embodiments of the present invention.
도 13은 본 발명의 실시예들에 따른 변환수단도출단계의 전체적인 과정들에 대해서 개략적으로 도시한다.Figure 13 schematically shows the overall processes of the conversion means derivation step according to embodiments of the present invention.
도 14는 본 발명의 실시예들에 따른 색상파라미터변환수단의 연산을 개략적으로 도시한다.Figure 14 schematically shows the operation of the color parameter conversion means according to embodiments of the present invention.
도 15는 본 발명의 실시예들에 따른 보정수치결정단계에서의 세부 과정을 개략적으로 도시한다.Figure 15 schematically shows the detailed process in the correction value determination step according to embodiments of the present invention.
도 16은 본 발명의 실시예들에 따른 보정수치변경단계의 세부 과정을 개략적으로 도시한다.Figure 16 schematically shows the detailed process of the correction value changing step according to embodiments of the present invention.
도 17은 본 발명의 실시예들에 따른 보정수치변경단계의 세부 단계들을 개략적으로 도시한다.Figure 17 schematically shows detailed steps of the correction value changing step according to embodiments of the present invention.
도 18는 본 발명의 일 실시예에 따른 컴퓨팅장치의 내부 구성을 개략적으로 도시한다.Figure 18 schematically shows the internal configuration of a computing device according to an embodiment of the present invention.
이하에서는, 다양한 실시예들 및/또는 양상들이 이제 도면들을 참조하여 개시된다. 하기 설명에서는 설명을 목적으로, 하나 이상의 양상들의 전반적 이해를 돕기 위해 다수의 구체적인 세부사항들이 개시된다. 그러나, 이러한 양상(들)은 이러한 구체적인 세부사항들 없이도 실행될 수 있다는 점 또한 본 발명의 기술 분야에서 통상의 지식을 가진 자에게 인식될 수 있을 것이다. 이후의 기재 및 첨부된 도면들은 하나 이상의 양상들의 특정한 예시적인 양상들을 상세하게 기술한다. 하지만, 이러한 양상들은 예시적인 것이고 다양한 양상들의 원리들에서의 다양한 방법들 중 일부가 이용될 수 있으며, 기술되는 설명들은 그러한 양상들 및 그들의 균등물들을 모두 포함하고자 하는 의도이다. Hereinafter, various embodiments and/or aspects are disclosed with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth to aid an overall understanding of one or more aspects. However, it will also be apparent to those of ordinary skill in the art that such aspect(s) may be practiced without these specific details. The following description and the accompanying drawings describe certain illustrative aspects in detail. These aspects are, however, merely exemplary; some of the various ways of employing the principles of the various aspects may be used, and the description is intended to encompass all such aspects and their equivalents.
또한, 다양한 양상들 및 특징들이 다수의 디바이스들, 컴포넌트들 및/또는 모듈들 등을 포함할 수 있는 시스템에 의하여 제시될 것이다. 다양한 시스템들이, 추가적인 장치들, 컴포넌트들 및/또는 모듈들 등을 포함할 수 있다는 점 그리고/또는 도면들과 관련하여 논의된 장치들, 컴포넌트들, 모듈들 등 전부를 포함하지 않을 수도 있다는 점 또한 이해되고 인식되어야 한다. Additionally, various aspects and features will be presented by a system that may include multiple devices, components, and/or modules, etc. It should also be understood and appreciated that the various systems may include additional devices, components, and/or modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the drawings.
본 명세서에서 사용되는 "실시예", "예", "양상", "예시" 등은 기술되는 임의의 양상 또는 설계가 다른 양상 또는 설계들보다 양호하다거나, 이점이 있는 것으로 해석되지 않을 수도 있다. 아래에서 사용되는 용어들 '~부', '컴포넌트', '모듈', '시스템', '인터페이스' 등은 일반적으로 컴퓨터 관련 엔티티(computer-related entity)를 의미하며, 예를 들어, 하드웨어, 하드웨어와 소프트웨어의 조합, 소프트웨어를 의미할 수 있다. As used herein, the words "embodiment", "example", "aspect", "illustration", etc. should not be construed to mean that any aspect or design described is better than, or advantageous over, other aspects or designs. The terms "~unit", "component", "module", "system", "interface", etc. used below generally refer to a computer-related entity and may mean, for example, hardware, a combination of hardware and software, or software.
또한, "포함한다" 및/또는 "포함하는"이라는 용어는, 해당 특징 및/또는 구성요소가 존재함을 의미하지만, 하나 이상의 다른 특징, 구성요소 및/또는 이들의 그룹의 존재 또는 추가를 배제하지 않는 것으로 이해되어야 한다. Additionally, the terms "comprises" and/or "comprising" mean that the features and/or elements in question are present, but should be understood as not excluding the presence or addition of one or more other features, elements, and/or groups thereof.
또한, 제1, 제2 등과 같이 서수를 포함하는 용어는 다양한 구성요소들을 설명하는데 사용될 수 있지만, 상기 구성요소들은 상기 용어들에 의해 한정되지는 않는다. 상기 용어들은 하나의 구성요소를 다른 구성요소로부터 구별하는 목적으로만 사용된다. 예를 들어, 본 발명의 권리 범위를 벗어나지 않으면서 제1 구성요소는 제2 구성요소로 명명될 수 있고, 유사하게 제2 구성요소도 제1 구성요소로 명명될 수 있다. 및/또는 이라는 용어는 복수의 관련된 기재된 항목들의 조합 또는 복수의 관련된 기재된 항목들 중의 어느 항목을 포함한다.Additionally, terms including ordinal numbers, such as first, second, etc., may be used to describe various components, but the components are not limited by the terms. The above terms are used only for the purpose of distinguishing one component from another. For example, a first component may be named a second component, and similarly, the second component may also be named a first component without departing from the scope of the present invention. The term and/or includes any of a plurality of related stated items or a combination of a plurality of related stated items.
또한, 본 발명의 실시예들에서, 별도로 다르게 정의되지 않는 한, 기술적이거나 과학적인 용어를 포함해서 여기서 사용되는 모든 용어들은 본 발명이 속하는 기술 분야에서 통상의 지식을 가진 자에 의해 일반적으로 이해되는 것과 동일한 의미를 가지고 있다. 일반적으로 사용되는 사전에 정의되어 있는 것과 같은 용어들은 관련 기술의 문맥 상 가지는 의미와 일치하는 의미를 가지는 것으로 해석되어야 하며, 본 발명의 실시예에서 명백하게 정의하지 않는 한, 이상적이거나 과도하게 형식적인 의미로 해석되지 않는다.In addition, in the embodiments of the present invention, unless otherwise defined, all terms used herein, including technical or scientific terms, are generally understood by those skilled in the art to which the present invention pertains. It has the same meaning as Terms defined in commonly used dictionaries should be interpreted as having a meaning consistent with the meaning in the context of the related technology, and unless clearly defined in the embodiments of the present invention, have an ideal or excessively formal meaning. It is not interpreted as
1. Colorimetric Method in an Image
Every object has an inherent color. However, when an image of the object is acquired through a camera, the color that appears in the image may differ from the color seen when viewing the actual object with the naked eye. The color that appears in an image depends not only on the object's inherent reflectance but also on the lighting conditions (the direction, intensity, and color of the surrounding light) and on the sensitivity of the camera sensor. An expensive colorimeter could be used to remove the effects of the lighting and the sensor and determine the object's true color; instead, embodiments of the present invention disclose a method of recovering the original color by photographing the object to be measured together with a patch, a relatively inexpensive and readily available object bearing reference colors.
Figure 1 schematically shows a colorimetric method according to an embodiment of the present invention.

Figure 1(A) schematically shows the original image.

In the present invention, a patch, a specific physical object bearing reference colors, is photographed together with the object whose color is to be measured. The camera may be a standalone camera or a camera built into a smartphone or similar device. The original image acquired by the camera may include both the measurement target and the patch, preferably captured without the two overlapping. The patch may include one or more reference colors, and the measurement target may be any of various objects, such as a face, a hand, an animal, or an article; the description below focuses on the face.

The original image may include a face area containing a face and a patch area containing a patch, and the face area may include the measurement region whose color is to be measured. That is, the measurement region may correspond to part or all of the face area and may be designated by the user.

Accordingly, the face image derived from the original image may be an image containing the face area, and the patch image derived from the original image may be an image containing the patch area.

Figure 1(B) schematically shows the steps of the colorimetric method.
As shown in Figure 1(B), the colorimetric method is performed in a computing system including one or more processors and one or more memories, and may include: an image derivation step (S3000) of deriving a face image for the face area and a patch image for the patch area from the original image; a first subtraction application step (S3100) of inputting the face image into an inference model including one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtracting the shape information of the face and the light information from the face image to derive reflectance information of the face in which the color changes caused by light striking the shape of the face are minimized; a second subtraction application step (S3200) of inputting the patch image, which includes one or more reference color areas, into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtracting the shape information of the patch and the light information from the patch image to derive reflectance information of the patch in which the color changes caused by light striking the shape of the patch are minimized; and a face color derivation step (S3300) of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the ground-truth color information of the reference colors, and applying the color parameter conversion means to the color information of the measurement region in the reflectance information of the face to derive colorimetric color information of the measurement region.

The colorimetric method may include an image derivation step (S3000) of deriving a face image for the face area and a patch image for the patch area from the original image. Specifically, the original image of Figure 1(A) may include a face area for the face and a patch area for the patch; the face area is the region of the original image containing only the extracted face and may be included in the face image, and the patch area is the region of the original image containing only the extracted patch and may be included in the patch image.
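The image derivation step can be sketched as a crop of two bounding boxes out of the original image. This is a minimal illustration that assumes the boxes are already known; in practice they would come from a face/patch detector, and `crop_region` and the box coordinates below are hypothetical names and values, not part of the disclosed method.

```python
import numpy as np

def crop_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop a (top, left, bottom, right) box out of an H x W x 3 image."""
    top, left, bottom, right = box
    return image[top:bottom, left:right]

# Dummy 100x100 RGB "original image"; real boxes would come from a detector.
original = np.zeros((100, 100, 3), dtype=np.uint8)
face_image = crop_region(original, (10, 10, 60, 60))    # face area -> face image
patch_image = crop_region(original, (70, 70, 95, 95))   # patch area -> patch image
```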
The colorimetric method may include a first subtraction application step (S3100) of inputting the face image into an inference model (2000) including one or more artificial neural networks to derive shape information of the face and light information from the face image, and subtracting the shape information of the face and the light information from the face image to derive reflectance information of the face.

Specifically, the inference model (2000) in the first subtraction application step (S3100) may include a common model (2100), a surface normal vector model (2200), and a light information inference model (2300), each of which may include one or more artificial neural networks. The common model (2100) can derive feature information from an image, and the feature information can be input into the surface normal vector model (2200) and the light information inference model (2300) to derive shape information and light information. That is, the face image is input into the common model (2100) to derive facial feature information, and the facial feature information is input into the surface normal vector model (2200) and the light information inference model (2300) to derive the shape information of the face and the light information. The shape information of the face may be the shape of the face area, where the shape may mean 3D information describing the actual form, and the light information may include the direction and intensity of the light present in the original image or the face image. Details of these inference models, the shape information, and the light information are described later.

Subsequently, the reflectance information of the face can be derived from the face image based on the shape information of the face and the light information. The face image contains the shape information of the face, the light information, and the reflectance information of the face, so subtracting the shape information of the face and the light information from the face image yields the reflectance information of the face.
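Under a simple Lambertian assumption (a sketch, not the claimed implementation), the observed image factors as reflectance multiplied by shading, where the shading is computed from the shape (surface normals) and the light direction; "subtracting" the shape and light information then amounts to a subtraction in log space, i.e. a per-pixel division. The neural models that actually estimate the normals and light are outside the scope of this illustration.

```python
import numpy as np

def lambertian_shading(normals: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Per-pixel shading s = max(n . l, 0) for unit normals (H x W x 3)."""
    return np.clip(normals @ light_dir, 0.0, None)

def recover_reflectance(image, normals, light_dir, eps=1e-6):
    """log I = log R + log s, so removing shape + light info means R = I / s."""
    shading = lambertian_shading(normals, light_dir)
    return image / np.maximum(shading, eps)[..., None]

# Toy example: a flat surface facing the camera, lit head-on, gives shading 1
# everywhere, so the recovered reflectance equals the observed image.
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0
light = np.array([0.0, 0.0, 1.0])
image = np.full((4, 4, 3), 0.5)
reflectance = recover_reflectance(image, normals, light)
```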
According to an embodiment of the present invention, the shape information and light information of the face are extracted from the face image and then subtracted from it to derive the reflectance information of the face; this yields more accurate reflectance information than reflectance information read directly from the face image.
The colorimetric method may also include a second subtraction application step (S3200) of inputting the patch image into an inference model including one or more artificial neural networks to derive shape information of the patch from the patch image, and subtracting the shape information of the patch and the light information from the patch image to derive reflectance information of the patch.

Specifically, the inference model in the second subtraction application step (S3200) may include a common model (2100) and a surface normal vector model (2200), each of which may include one or more artificial neural networks. The common model (2100) can derive feature information from an image, and the feature information can be input into the surface normal vector model (2200) to derive shape information. That is, the patch image is input into the common model (2100) to derive patch feature information, and the patch feature information is input into the surface normal vector model (2200) to derive the shape information of the patch. The shape information of the patch may be the shape of the patch area, where the shape may mean 3D information describing the actual form.

Subsequently, the reflectance information of the patch can be derived from the patch image based on the shape information of the patch and the light information (derived from the face image). The patch image contains the shape information of the patch, the light information, and the reflectance information of the patch, so subtracting the shape information of the patch and the light information from the patch image yields the reflectance information of the patch.

According to an embodiment of the present invention, the shape information of the patch is extracted from the patch image and the light information from the face image, and both are subtracted from the patch image to derive the reflectance information of the patch; this yields more accurate reflectance information than reflectance information read directly from the patch image.
According to one embodiment of the present invention, the inference models (the common model (2100) and the surface normal vector model (2200)) used in the first subtraction application step (S3100) and the second subtraction application step (S3200) may be the same models.

According to another embodiment of the present invention, the inference models (the common model (2100) and the surface normal vector model (2200)) used in the first subtraction application step (S3100) and the second subtraction application step (S3200) may be different models.
Thereafter, a color parameter conversion means is derived based on the reflectance information of the patch, the reflectance information of the face, and the preset ground truth of the patch obtained in the steps described above, and the color parameter conversion means is applied to the reflectance information of the face to derive colorimetric color information of the measurement region of the face. Details are described with reference to Figure 1(C) below.
Figure 1(C) schematically shows the face color derivation step (S3300).

As shown in Figure 1(C), the colorimetric method performed in a computing system including one or more processors and one or more memories may include a face color derivation step (S3300) of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the ground-truth color information of the reference colors, and applying the color parameter conversion means to the color information of the measurement region in the reflectance information of the face to derive colorimetric color information of the measurement region.

Specifically, since the patch includes one or more reference colors, the reflectance information of the patch includes color information for each of the one or more reference color areas. That is, the color information for each of the one or more reference color areas may be the color information, within the reflectance information of the patch, derived from the patch area of the original image. For each of the one or more reference colors included in the patch, preset ground-truth color information is stored in the computing system, and the ground truth may correspond to the color parameters representing the true values of the colors. The color parameter conversion means can be derived based on the difference between the color information of the reference color areas and the ground-truth color information of the reference colors. The difference may be a color-difference value computed by applying a suitable algorithm to the color parameters of the measured reference color area and those of the corresponding ground truth.
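One common way to realize such a color parameter conversion means, shown here as an illustrative assumption rather than the claimed implementation, is a linear color-correction matrix fitted by least squares so that the measured reference colors map onto their stored ground-truth values; the matrix is then applied to the measurement-region color. All numeric values below are invented for the example.

```python
import numpy as np

def fit_color_correction(measured: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Fit a 3x3 matrix M minimizing ||measured @ M - truth||^2."""
    M, *_ = np.linalg.lstsq(measured, truth, rcond=None)
    return M

# Hypothetical RGB values of three reference color areas as read from the
# patch reflectance information, and their stored ground-truth values.
measured = np.array([[0.90, 0.10, 0.10],
                     [0.10, 0.80, 0.10],
                     [0.10, 0.10, 0.85]])
truth = np.eye(3)

M = fit_color_correction(measured, truth)
face_color = np.array([[0.60, 0.45, 0.40]])   # measurement-region color
corrected = face_color @ M                    # colorimetric color information
```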
The color parameter conversion means can be applied to the color information of the measurement region in the reflectance information of the face to derive colorimetric color information of the measurement region. The color parameter conversion means includes one or more correction values, and applying the color parameter conversion means may mean applying those correction values; as an example, a correction value may be the difference between the color information of a reference color area and the ground-truth color information of that reference color. The derivation and details of the color parameter conversion means are described later.

The derived colorimetric color information of the measurement region is the color information obtained by measuring the measurement region of the face area in the present invention; it preferably corresponds to, or is close to, the true value (ground truth) of the actual facial region corresponding to the measurement region, rather than to the color as it appears in the image.

The color information of the measurement region in the reflectance information of the face may mean the color information, within the reflectance information of the face image, extracted for the corresponding measurement region.
In the face color derivation step (S3300), if the patch contains only one reference color, the color parameter conversion means may be the difference value itself between the color information of the reference color area and the ground-truth color information (that is, the correction value of the color parameter conversion means may correspond to the difference value), and the difference value is applied to the color information of the measurement region in the reflectance information of the face to derive the colorimetric color information of the measurement region.

Also, if the patch contains only one reference color but the correction values of the color parameter conversion means include more than just the difference value, the conversion means may be implemented as a matrix containing a plurality of correction values.

In the face color derivation step (S3300), if the patch contains a plurality of reference colors, the color parameter conversion means may be implemented as a matrix containing a plurality of correction values.
According to an embodiment of the present invention, the face color derivation step (S3300) may be performed by selecting, from among the plurality of reference colors, the one reference color whose color difference from the color information of the measurement region in the reflectance information of the face is smallest.

Accordingly, if the patch contains a plurality of reference colors but the face color derivation step (S3300) selects the single reference color with the smallest color difference from the color information of the measurement region in the reflectance information of the face, the correction value of the color parameter conversion means may include the difference between that reference color and its corresponding ground-truth color information.
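Selecting the closest reference color and applying its offset can be sketched as below. Plain Euclidean distance in the working color space stands in for whatever color-difference formula (for example, a CIE ΔE variant) an embodiment actually uses, and all numeric values are invented for the example.

```python
import numpy as np

def correct_with_nearest_reference(target, measured_refs, truth_refs):
    """Shift the target color by the measured-to-truth offset of the
    reference color closest to it (smallest Euclidean color difference)."""
    idx = int(np.argmin(np.linalg.norm(measured_refs - target, axis=1)))
    return target + (truth_refs[idx] - measured_refs[idx])

measured_refs = np.array([[0.62, 0.52, 0.42],   # skin-tone-like reference
                          [0.10, 0.10, 0.10]])  # dark reference
truth_refs    = np.array([[0.65, 0.55, 0.45],
                          [0.12, 0.12, 0.12]])
target = np.array([0.60, 0.50, 0.40])           # measurement-region color
corrected = correct_with_nearest_reference(target, measured_refs, truth_refs)
```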
According to one embodiment of the present invention, since color information is corrected by deriving the color parameter conversion means, color measurement that takes various factors into account can be achieved.
Figure 2 schematically shows environments of computing systems in which colorimetric methods according to embodiments of the present invention are performed.

In the embodiment shown in Figure 2(A), a camera module built into a user terminal acquires an original image in which the region the user wishes to measure and the patch are photographed together, and color information based on accurate colorimetry is derived inside the user terminal by the methods described above and below.

In the embodiment shown in Figure 2(B), a camera module built into a user terminal acquires an original image in which the region the user wishes to measure and the patch are photographed together, the acquired original image is transmitted to a server system, and color information based on accurate colorimetry is then derived inside the server system by the methods described above and below.

The colorimetric method according to embodiments of the present invention may be implemented in various types of systems beyond the environments of Figures 2(A) and 2(B); any system in which a computing system analyzes an image of the measurement target together with a specific physical object (for example, a patch) bearing reference colors falls within its scope.

Figure 1 described above shows the measurement target (for example, a person's skin) and the physical object bearing the reference colors captured in a single image, but the present invention is not limited thereto; it also includes embodiments in which the measurement target (for example, a person's skin) and the physical object bearing the reference colors are acquired as separate images and the colorimetric method described below is then performed. When a plurality of images are acquired, light information corresponding to each image can be derived separately.
Figure 3 schematically shows shape information, light information, and reflectance information according to an embodiment of the present invention.

As shown in Figure 3, the shape information includes 3D information describing the actual form of the object, derived from the 2D information of the object in the image; the light information includes the intensity and direction of the light illuminating the object in the image, as well as the resulting shadows; and the reflectance information includes per-pixel color information from which the influence of the shadows, which arise on the shape according to the intensity and direction of the light striking it, has been removed.

Specifically, an image contains the shape of the object it depicts, the color of the object, and the intensity and direction of the light illuminating the object.

The color of the object may be altered by the performance of the camera acquiring the image, by the intensity and direction of the light, and by the shadows the light casts on the shape. For example, a plaster cast used for sketching is itself white, but the shadows cast by the light illuminating it produce altered colors. In the present invention, to exclude the influence of such color changes during colorimetry, the shape information and the light information can be subtracted from the image.

The shape information is the image information excluding color and light; it may correspond to 3D information about the shape of the object in the image and may not include the color information of the object.

The real object is three-dimensional, but when it is captured as an image by a camera it is included in the image as 2D information, so the shape of the object contained in the image is 2D information.

Accordingly, by inputting the image into one or more inference models, shape information, i.e. 3D information describing the real shape of the object, can be derived from the 2D shape of the object in the image. For example, the shape information may correspond to the plaster cast itself, unaffected by light, or to a 3D model, unaffected by light, excluding the colors (reflectance information) displayed by a 3D modeling program.

That is, the shape information may be information containing purely the shape of the object, with the reflectance information and light information excluded from the image.

The light information may mean the intensity and direction of the light illuminating the object in the image. The light information may cause shadows to form on the shape of the object: the angle, form, and extent of a shadow may change with the direction of the light, and its darkness may change with the intensity of the light. Shadows formed according to the light information can therefore alter the colors the object displays.
The light information contained in an image can be derived by inputting the image into one or more inference models.

According to an embodiment of the present invention, since the face image and the patch image are both extracted from the same original image, deriving the light information from the face image has the same effect as deriving it from the original image. Therefore, applying the light information derived from the face image to the patch image has the same effect as applying light information derived from the patch image itself.

The reflectance information may be color information unaltered by the light information contained in the image and by the shadows produced when the light strikes the shape. Subtracting the shape information and the light information from the image yields the reflectance information.

That is, the image contains the light information, the shape information describing the form, and the reflectance information. When the shape information and the light information are combined, light casts shadows on the shape of the object; when the shape information, the light information, and the reflectance information are combined, the result is the image in which the colors of the reflectance information have been altered by the shadows.
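The composition described above, an observed image being (approximately) reflectance combined with the shading produced by shape and light, can be illustrated with a toy forward model; the flat surface, the tilted pixel, and the light direction are all invented for the example.

```python
import numpy as np

normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                  # surface facing the camera
normals[0, 0] = [0.0, 1.0, 0.0]        # one pixel tilted fully away from the light
light = np.array([0.0, 0.0, 1.0])      # head-on light of unit intensity

shading = np.clip(normals @ light, 0.0, None)   # shape info + light info -> shadows
reflectance = np.full((2, 2, 3), 0.8)           # the object's own color
image = reflectance * shading[..., None]        # shadows alter the observed color
```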
Meanwhile, the reflectance information derived from the image may include color shifts caused by the characteristics of the camera. Therefore, according to an embodiment of the present invention, the colorimetric color information of the measurement region is derived based on the preset ground-truth color information of the patch and the reflectance information of the patch derived from the image, so that reflectance information altered by the camera can be corrected and color information close to the true value can be obtained.
Figure 4 schematically shows a patch according to an embodiment of the present invention.
As shown in Figure 4, the patch may include a plurality of reference color areas, each having the same shape and area and arranged so as to be distinguishable from one another.
The ground truth of each reference color area may be preset and stored in the computing system, and the position or arrangement of each reference color may also be stored in the computing system.
Because each reference color area of the patch according to an embodiment of the present invention has the same shape and area and is arranged distinguishably, the inference model can readily locate the patch region and derive the color information of each reference color area.
Although Figure 4 shows reference colors #1 through #N, the number of reference colors on a patch is not limited thereto.
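As an illustration, if the arrangement stored in the computing system is assumed to be a uniform grid of equally sized reference color areas (a hypothetical layout for this sketch; the patent does not prescribe one), the color information of each area could be read out as follows:

```python
import numpy as np

def sample_reference_colors(patch_image, rows, cols):
    """Read out the color of each reference color area, assuming the preset
    layout is a uniform rows x cols grid of equally sized regions."""
    h, w, _ = patch_image.shape
    ch, cw = h // rows, w // cols
    colors = []
    for r in range(rows):
        for c in range(cols):
            # average the central part of each cell to avoid area borders
            cell = patch_image[r*ch + ch//4:(r+1)*ch - ch//4,
                               c*cw + cw//4:(c+1)*cw - cw//4]
            colors.append(cell.mean(axis=(0, 1)))
    return np.array(colors)  # (rows*cols, 3)
```

The returned array lines up with the stored ground-truth order, which is what the color-correction step later compares against.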
Figure 5 schematically shows a plurality of inference models according to an embodiment of the present invention.
As shown in Figure 5, the inference models include a common model (2100), a surface normal vector model (2200), a light information inference model (2300), and a reflectance model (2400). The common model (2100) includes one or more artificial neural networks and derives, from an input image, feature information that is commonly fed to the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400). The surface normal vector model (2200) includes one or more artificial neural networks and derives, from the feature information, shape information: 3D information describing the actual shape underlying the 2D image. The light information inference model (2300) includes one or more artificial neural networks and derives, from the feature information, light information including the intensity and direction of light in the image. The reflectance model (2400) includes one or more artificial neural networks and derives, from the feature information, reflectance information, i.e. per-pixel color information.
Specifically, the inference models of the present invention include the common model (2100), the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400). Each may be implemented using one or more of artificial neural network models such as convolutional neural networks (CNN) and capsule networks (CapsNet), and rule-based feature extraction models, and preferably may include SFS NET and RI_render (3ddfa).
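A minimal sketch of this shared-encoder, three-head layout (the layer sizes and head designs are illustrative assumptions, not the actual SFS NET architecture):

```python
import torch
import torch.nn as nn

class ColorimetryNet(nn.Module):
    """Sketch of the four-model layout of Figure 5: a common model (2100)
    whose feature information feeds a surface normal vector model (2200),
    a light information inference model (2300), and a reflectance model
    (2400)."""
    def __init__(self):
        super().__init__()
        # common model (2100): shared feature extractor
        self.common = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # surface normal vector model (2200): per-pixel 3D shape information
        self.normals = nn.Conv2d(32, 3, 1)
        # reflectance model (2400): per-pixel color unaffected by lighting
        self.reflectance = nn.Conv2d(32, 3, 1)
        # light information inference model (2300): global direction + intensity
        self.light = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4))

    def forward(self, image):
        feats = self.common(image)  # feature information, shared by all heads
        return self.normals(feats), self.light(feats), self.reflectance(feats)
```

The single `common` trunk is what lets one forward pass serve all three derivations, matching the description that the feature information is commonly input to the three models.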
The common model (2100) derives feature information from the input image; the feature information may be an output from which the shape information, the light information, and the reflectance information can be derived by the plurality of inference models. The image may be any image in the present invention, preferably including the original image, the face image and patch image derived from the original image, and the learning image. The common model (2100) derives facial feature information when the input image is a face image, patch feature information when it is a patch image, and learning feature information when it is a learning image. The feature information may be commonly input to the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400).
The surface normal vector model (2200) may receive the feature information from the common model (2100) and derive shape information. If the feature information is facial feature information, it derives the shape information of the face; if patch feature information, the shape information of the patch; and if learning feature information, the learning shape information.
The light information inference model (2300) may receive the feature information from the common model (2100) and derive light information. If the feature information is facial feature information, it derives light information; if learning feature information, it derives learning light information.
According to an embodiment of the present invention, the light information is derived by inputting the facial feature information into the light information inference model (2300), and may be used in both the first subtraction application step (S3100) and the second subtraction application step.
According to another embodiment of the present invention, facial light information may be derived by inputting the facial feature information into the light information inference model (2300), and patch light information may be derived by inputting the patch feature information into the light information inference model (2300), each being used in its corresponding step.
The reflectance model (2400) may receive the feature information from the common model (2100) and derive reflectance information. If the feature information is learning feature information, learning reflectance information may be derived.
In an embodiment of the present invention, the reflectance model (2400) is used in the model learning step, thereby serving to train the common model (2100), the surface normal vector model (2200), and the light information inference model (2300).
In another embodiment of the present invention, the colorimetric method further includes a reflectance inference step, which includes: inputting the face image into the reflectance model (2400) to derive reflectance information of the face; inputting the patch image into the reflectance model (2400) to derive reflectance information of the patch; and deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the ground-truth color information of the reference colors, and applying the color parameter conversion means to the color information of the measurement region in the reflectance information of the face to derive the colorimetric color information of the measurement region.
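One plausible realization of the color parameter conversion means is an affine matrix fitted by least squares over the reference color areas (an illustrative choice; the patent does not specify the mathematical form of the conversion means):

```python
import numpy as np

def fit_color_transform(measured, ground_truth):
    """Derive a color parameter conversion means as an affine matrix mapping
    the measured patch colors to their ground-truth colors, fitted by
    least squares over the N reference color areas."""
    measured = np.asarray(measured, float)          # (N, 3) from the image
    ground_truth = np.asarray(ground_truth, float)  # (N, 3) preset values
    X = np.hstack([measured, np.ones((len(measured), 1))])  # affine term
    M, *_ = np.linalg.lstsq(X, ground_truth, rcond=None)
    return M                                        # (4, 3)

def apply_color_transform(M, color):
    """Apply the fitted conversion means to a measured color, e.g. the
    color of the measurement region in the face reflectance information."""
    c = np.append(np.asarray(color, float), 1.0)
    return c @ M
```

Fitting on the patch and applying to the face is exactly the step above: the patch supplies known input/output color pairs, and the face region receives the correction.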
Meanwhile, the common model (2100), the surface normal vector model (2200), and the light information inference model (2300) may be used in the first subtraction application step; the common model (2100) and the surface normal vector model (2200) may be used in the second subtraction application step; and the common model (2100), the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400) may be used in the model learning step described later.
The model used in each step may be the same model, or a different model optimized for that step. For example, the common model (2100) derives facial feature information in the first subtraction application step, patch feature information in the second subtraction application step, and learning feature information in the model learning step. The common model (2100) used in the first subtraction application step and the one used in the second subtraction application step may be the same model or different models; if they are different, each may be trained separately in the model learning step.
Figure 6 schematically shows the detailed sub-steps of the first subtraction application step (S3100) and the second subtraction application step (S3200) according to an embodiment of the present invention.
Figure 6(A) shows the detailed sub-steps of the first subtraction application step (S3100).
As shown in Figure 6(A), the first subtraction application step may include: a facial feature information derivation step (S3110) of inputting the face image into a common model (2100) including one or more artificial neural networks to derive facial feature information; a face shape derivation step (S3120) of inputting the facial feature information into a surface normal vector model (2200) including one or more artificial neural networks to derive the shape information of the face; a light information derivation step (S3130) of inputting the facial feature information into a light information inference model (2300) including one or more artificial neural networks to derive light information including the intensity and direction of light in the face image; and a facial reflectance derivation step (S3140) of applying the shape information of the face and the light information to the face image to derive reflectance information of the face reflecting only the color of the face.
Specifically, the first subtraction application step (S3100) may include a facial feature information extraction step (S3110) of cropping only the face region from the original image of Figure 1(A), resizing it, and inputting the resulting face image into the common model (2100) to derive facial feature information. The facial feature information takes the form of an output of the common model (2100), from which the shape information of the face and the light information can be derived.
The first subtraction application step (S3100) may include a face shape derivation step (S3120) of inputting the facial feature information into the surface normal vector model (2200) to derive the shape information of the face. The shape information of the face is shape information about the face, and may be derived from the face image after excluding parts that are not skin, such as hair.
The first subtraction application step (S3100) may include a light information derivation step (S3130) of inputting the facial feature information into the light information inference model (2300) to derive light information. The light information is used not only to derive the reflectance information of the face, but also in the second subtraction application step (S3200), which derives the reflectance information of the patch.
The first subtraction application step (S3100) may include a facial reflectance derivation step (S3140) of applying the shape information of the face and the light information, derived in the face shape derivation step (S3120) and the light information derivation step (S3130), to the face image to derive the reflectance information of the face. Specifically, by subtractively applying the shape information of the face and the light information to the face image, reflectance information of the face reflecting only the color of the face can be derived.
Figure 6(B) shows the detailed sub-steps of the second subtraction application step (S3200).
As shown in Figure 6(B), the second subtraction application step (S3200) may include: a patch feature information derivation step (S3210) of inputting the patch image into a common model (2100) including one or more artificial neural networks to derive patch feature information; a patch shape derivation step (S3220) of inputting the patch feature information into a surface normal vector model (2200) including one or more artificial neural networks to derive the shape information of the patch; and a patch reflectance derivation step (S3230) of applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch reflecting only the color of the patch.
Specifically, the second subtraction application step (S3200) may include a patch feature information extraction step of cropping only the patch region from the original image of Figure 1(A), resizing it, and inputting the resulting patch image into the common model (2100) to derive patch feature information. The patch feature information takes the form of an output of the common model (2100), from which the shape information of the patch can be derived.
The second subtraction application step (S3200) may include a patch shape derivation step (S3220) of inputting the patch feature information into the surface normal vector model (2200) to derive the shape information of the patch. The shape information of the patch is shape information about the patch and may include shape information for the plurality of reference color areas.
The second subtraction application step (S3200) may include a patch reflectance derivation step (S3230) of applying, to the patch image, the shape information of the patch derived in the patch shape derivation step (S3220) and the light information derived in the light information derivation step (S3130) of the first subtraction application step (S3100), to derive the reflectance information of the patch. Specifically, by subtractively applying the shape information of the patch and the light information to the patch image, reflectance information of the patch reflecting only the color of the patch can be derived.
Figure 7 schematically shows the model learning step according to an embodiment of the present invention.
As shown in Figure 7, the colorimetric method further includes a model learning step.
In the model learning step, the following are performed: a learning shape information derivation step of inputting a learning image into the common model (2100) and inputting the learning feature information of the common model (2100) into the surface normal vector model (2200) to derive learning shape information; a learning light information derivation step of inputting the learning feature information into the light information inference model (2300) to derive learning light information; a learning reflectance information derivation step of inputting the learning feature information into the reflectance model (2400), which derives reflectance information, to derive learning reflectance information; and a predicted learning image derivation step of combining the learning shape information, the learning light information, and the learning reflectance information to derive a predicted learning image. The detailed parameter values or filter information of the common model (2100), the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400) are learned so that the difference between the predicted learning image and the learning image is reduced or minimized.
Specifically, the model learning step may aim to train the plurality of inference models included in the present invention so that the predicted learning image derived from an input learning image is as close to the learning image as possible. The plurality of inference models may include the common model (2100), the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400). These may be the same models used elsewhere in the present invention: specifically, the common model (2100) is the inference model that extracts the facial feature information and the patch feature information, the surface normal vector model (2200) is the inference model that extracts the shape information of the face and of the patch, and the light information inference model (2300) is the inference model that extracts the light information.
The common model (2100) may be an inference model that derives, from an image, feature information encompassing shape information, light information, and reflectance information. When the learning image is input to the common model (2100), learning feature information is derived as its output. The learning feature information may include the learning shape information, learning light information, and learning reflectance information of the learning image, and its form may depend on the type of the common model (2100) and the artificial neural networks it includes. The learning feature information may be input to the plurality of inference models, that is, to the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400).
The colorimetric method of the present invention may include a learning shape information derivation step of inputting the learning feature information, derived by inputting the learning image into the common model (2100), into the surface normal vector model (2200) to derive learning shape information.
The surface normal vector model (2200) may be an inference model that derives shape information from the feature information. The shape information is shape information about the object in the image input to the common model (2100), and may be 3D information describing the object's actual shape. The shape information is unaffected by light and therefore contains neither shadows nor color (reflectance): for example, it corresponds to an uncolored 3D model into which no lighting information has been entered, so that no shadow is formed from any viewing direction.
The colorimetric method of the present invention may include a learning light information derivation step of inputting the learning feature information, derived by inputting the learning image into the common model (2100), into the light information inference model (2300) to derive learning light information.
The light information inference model (2300) may be an inference model that derives light information from the feature information. The light information describes the light illuminating the shape contained in the image input to the common model (2100), and may include the intensity and direction of the light. The light information is what creates shadows on the shapes in the image.
The colorimetric method of the present invention may include a learning reflectance information derivation step of inputting the image into the reflectance model (2400) to derive learning reflectance information.
The reflectance model (2400) may be an inference model that derives reflectance information from the feature information. The reflectance information is color information about the corresponding shape in the image, and may be color information left unchanged by the light information and/or the shadows.
The colorimetric method of the present invention may include a predicted learning image derivation step of combining the learning shape information, the learning light information, and the learning reflectance information derived by the respective inference models to derive a predicted learning image.
The detailed parameter values or filter information of the common model (2100), the surface normal vector model (2200), the light information inference model (2300), and the reflectance model (2400) may be trained so that the derived predicted learning image differs as little as possible from the input learning image. Accordingly, in an embodiment of the present invention, the model learning step repeats the steps described above, training the detailed parameter values or filter information of these models so that the difference between the predicted learning image and the learning image is reduced or minimized.
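The self-supervised objective described above might be sketched as follows, assuming a Lambertian recombination of the three outputs (the patent only states that they are combined, so the exact rendering is an assumption):

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(pred_normals, pred_light, pred_reflectance, image):
    """Recombine predicted shape, light, and reflectance information into a
    predicted learning image and penalize its difference from the learning
    image. pred_light packs a light direction (3) and intensity (1) per
    sample; the Lambertian recombination is an illustrative assumption."""
    l_dir = F.normalize(pred_light[:, :3], dim=1)
    intensity = pred_light[:, 3:].clamp(min=0.0)
    # shading produced by combining shape information with light information
    shading = (pred_normals * l_dir[:, :, None, None]).sum(1, keepdim=True)
    shading = shading.clamp(min=1e-3) * intensity[:, :, None, None]
    predicted_image = pred_reflectance * shading
    return torch.mean((predicted_image - image) ** 2)
```

Backpropagating this loss updates all four models jointly, since the three predictions all flow from the common model's feature information.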
2. Method of deriving the color parameter conversion means
Figure 8 schematically shows an example of the color difference regions defined in the CIE76 color space.
CIE76 is a formula that determines color difference using CIELAB coordinates. The color difference according to CIE76 can be expressed by the following equation.
$$\Delta E^{*}_{ab} = \sqrt{(L^{*}_{2}-L^{*}_{1})^{2} + (a^{*}_{2}-a^{*}_{1})^{2} + (b^{*}_{2}-b^{*}_{1})^{2}}$$
(Here, where the first color has color parameters (L*₁, a*₁, b*₁) in the Lab color system and the second color has color parameters (L*₂, a*₂, b*₂), the color difference between the first and second colors can be expressed as ΔE*ab above, and the color difference is a scalar value.)
Such a color difference, defined in the CIE76 color space, consequently has a linear relationship with the detailed color parameters.
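The CIE76 formula above is straightforward to compute, since it is simply the Euclidean distance between the two CIELAB coordinate sets:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab colors."""
    return math.sqrt(sum((c2 - c1) ** 2 for c1, c2 in zip(lab1, lab2)))
```

For example, two colors differing only by (Δa, Δb) = (3, 4) have ΔE*ab = 5.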
As shown in Figure 8, when colors in the RGBY color system are displayed as circles and a certain range of the color difference defined in the CIE76 color space is displayed as circles, a regularity can be observed, as in Figure 3.
Figure 9 schematically shows an example of the color difference regions defined in the CIE76, CIE94, and CIE2000 color spaces.
Figure 9 shows, for colors displayed with the a color parameter on the x-axis and the b color parameter on the y-axis of the Lab color system, a certain range of the color difference defined in each of the CIE76, CIE94, and CIE2000 color spaces, drawn as closed curves.
For the color difference defined by the CIE76 color space, which has a linear relationship with the color parameters, the equal-color-difference contour is perfectly linear (a perfect circle). For CIE94, however, the contour appears elliptical, and CIE2000 partially follows CIE94 but shows completely different behavior, for example, at b values of -0.5 or below.
This is because, as described later, CIE2000 more accurately reflects human perceptual characteristics: it is difficult to express as a single formula in the detailed color parameters as in CIE76, the existence of boundaries means the color difference calculation can differ by region, and the formula itself contains nonlinear elements.
However, the color differences humans actually perceive are unlikely to follow a perfectly linear relationship as in CIE76, and sensitivity to differences may vary with color. Color difference calculations in color spaces reflecting this, such as CIE2000, may be nonlinear, and the colorimetric methods according to embodiments of the present invention described later perform colorimetry taking such nonlinear color difference calculation into account, enabling colorimetry that is more faithful to actual human perception.
도 10은 선형적으로 정의될 수 있는 색차이값의 도출과정을 개략적으로 도시한다.Figure 10 schematically shows the process of deriving a color difference value that can be linearly defined.
예를 들어, CIE76에서의 색차이값은 전술한 바와 같이 하기의 식으로 표현될 수 있고, 이는 3차원 혹은 다차원 공간에서의 거리로 나타낼 수 있다.For example, the color difference value in CIE76 can be expressed as the following equation, as described above, and can be expressed as a distance in three-dimensional or multi-dimensional space.
$\Delta E^*_{ab} = \sqrt{(L^*_2 - L^*_1)^2 + (a^*_2 - a^*_1)^2 + (b^*_2 - b^*_1)^2}$
(Here, when the first color has the color parameters (L*1, a*1, b*1) in the Lab color system and the second color has the color parameters (L*2, a*2, b*2), the color difference between the first color and the second color can be expressed as the ΔE*ab above, and the color difference is a scalar value.)
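This CIE76 distance can be sketched in a few lines (a hypothetical helper; the function name is ours, not the patent's):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab triples."""
    return math.sqrt(sum((c2 - c1) ** 2 for c1, c2 in zip(lab1, lab2)))

# Identical colors have zero difference; a 3-4 offset in (a, b) gives distance 5.
print(delta_e_76((50.0, 2.0, 3.0), (50.0, 2.0, 3.0)))  # → 0.0
print(delta_e_76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0)))  # → 5.0
```

Because the expression is a plain Euclidean norm, it is exactly the kind of distance that can be handled with linear-algebra methods in the later optimization steps.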
Figure 10 corresponds to an example in which, for the case where the color parameters (corresponding to the X, Y, and Z axes in Figure 10) and the color-difference value have such a linear relationship, the color difference over the color parameters is drawn as a straight line. The term "linear" in the present invention should be interpreted broadly, to include any case in which the color difference itself, or other indirect numerical information such as a transformation matrix related to the color difference, can be solved for by linear-algebra methods.

For example, suppose there are three colors, each given by three color parameters, and an arbitrary color whose color difference from each of the three colors is known. With CIE76, which can be defined by the linear color difference above, the arbitrary color can be determined algebraically; with CIE2000, it cannot.

Alternatively, an algorithm defined by a linear color difference may include a color difference composed of computed values of the form (a - b).

Figure 11 schematically shows the detailed steps of the conversion-means derivation step and the internal components of the colorimetric device according to embodiments of the present invention.

The colorimetric method according to embodiments of the present invention is performed on a computing system including one or more processors and one or more memories, and includes: a color-parameter extraction step (S100) of extracting, for each of a plurality of reference color areas in the reflectance information of the patch, first color parameter information in a preset color space; a correction-value determination step (S200) of determining correction values such that the color-difference value, derived by a first color-difference algorithm between second color parameters, obtained by applying a color-parameter conversion means containing a plurality of correction values to the plurality of pieces of first color parameter information, and preset third color parameters corresponding to the ground truth of the plurality of reference colors, satisfies a preset criterion; a correction-value changing step (S300) of changing some or all of the correction values in a direction that reduces the color-difference value, derived by a second color-difference algorithm between fourth color parameters, obtained by applying the color-parameter conversion means containing the plurality of correction values to the plurality of pieces of first color parameter information, and the preset third color parameters corresponding to the ground truth of the plurality of reference colors; a color-parameter-conversion-means determination step (S400) of determining the color-parameter conversion means with the finally updated correction values when the correction-value changing step (S300) has been performed one or more times and a preset criterion is met; and a colorimetric step (S500) of applying the color-parameter conversion means to the color information of the measurement area in the reflectance information of the face to derive measured color information of the measurement area corresponding to its actual color.

Preferably, the color-difference value produced by the first color-difference algorithm is linear in the color parameters. This serves to determine the initial correction values.

For example, a color-difference value that is linear in the color parameters can be expressed by equations such as the following.
Figure PCTKR2023005838-appb-img-000003
Figure PCTKR2023005838-appb-img-000004
The color-difference value produced by the second color-difference algorithm is derived in a different form from the color-difference value produced by the first color-difference algorithm. Preferably, the color-difference value of the second color-difference algorithm is nonlinear in the color parameters.

For example, the color-difference value in CIEDE94 can be defined as follows.
$\Delta E^*_{94} = \sqrt{\left(\dfrac{\Delta L^*}{k_L S_L}\right)^{2} + \left(\dfrac{\Delta C^*_{ab}}{k_C S_C}\right)^{2} + \left(\dfrac{\Delta H^*_{ab}}{k_H S_H}\right)^{2}}$
Here, when the first color has the color parameters (L*1, a*1, b*1) in the Lab color system and the second color has the color parameters (L*2, a*2, b*2), the color difference between the first color and the second color can be expressed as the ΔE*94 above, and the color difference is a scalar value.

Here, each variable is computed according to the following equations.
$\Delta L^* = L^*_1 - L^*_2,\qquad \Delta a^* = a^*_1 - a^*_2,\qquad \Delta b^* = b^*_1 - b^*_2$

$C^*_1 = \sqrt{a^{*2}_1 + b^{*2}_1},\qquad C^*_2 = \sqrt{a^{*2}_2 + b^{*2}_2},\qquad \Delta C^*_{ab} = C^*_1 - C^*_2$

$\Delta H^*_{ab} = \sqrt{\Delta E^{*2}_{ab} - \Delta L^{*2} - \Delta C^{*2}_{ab}} = \sqrt{\Delta a^{*2} + \Delta b^{*2} - \Delta C^{*2}_{ab}}$

$S_L = 1,\qquad S_C = 1 + K_1 C^*_1,\qquad S_H = 1 + K_2 C^*_1,\qquad k_C = k_H = 1$
kL, K1, and K2 are constants that depend on the application field of the color difference in question and may take the following values by field. These values may vary with the intended use of the color difference; as one example, they may follow Table 1 below.
[Table 1]

| Application field | kL | K1    | K2    |
| Graphic arts      | 1  | 0.045 | 0.015 |
| Textiles          | 2  | 0.048 | 0.014 |
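Combining the formula, auxiliary equations, and constants above, a minimal sketch of the CIE94 computation might look as follows (the function name and defaults are our assumptions; the graphic-arts constants are used, with kC and kH taken as 1, and the first color is treated as the reference):

```python
import math

def delta_e_94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE94 color difference, with lab1 taken as the reference color."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)            # chroma of the reference color
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # dH^2 = da^2 + db^2 - dC^2, clamped against tiny negative round-off
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    SL, SC, SH = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return math.sqrt((dL / (kL * SL)) ** 2 + (dC / SC) ** 2 + dH2 / (SH * SH))

# A pure lightness difference of 2 stays 2, since S_L = 1.
print(delta_e_94((50.0, 0.0, 0.0), (52.0, 0.0, 0.0)))  # → 2.0
```

Note how the chroma-dependent divisors SC and SH make the result nonlinear in the color parameters, unlike CIE76.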
Preferably, the color-difference value produced by the first color-difference algorithm is determined by an expression containing at least one value of the form (n-th element of the first color parameter - n-th element of the second color parameter). When the color-difference value is determined by an expression containing such computed values, the color-difference algorithm can be said to be linear.

Preferably, the color-difference value produced by the second color-difference algorithm includes a color-difference value according to the CIEDE2000 standard. In other embodiments of the present invention, the color-difference expression of the second color-difference algorithm may correspond to another color-difference value that can be computed nonlinearly.

The color-difference value defined by CIEDE2000 can be defined by the following equation.
$\Delta E^*_{00} = \sqrt{\left(\dfrac{\Delta L'}{k_L S_L}\right)^{2} + \left(\dfrac{\Delta C'}{k_C S_C}\right)^{2} + \left(\dfrac{\Delta H'}{k_H S_H}\right)^{2} + R_T\,\dfrac{\Delta C'}{k_C S_C}\,\dfrac{\Delta H'}{k_H S_H}}$
Here, variables referred to by the same names are the same as the variables in CIEDE76 and CIEDE94 described above.

Here, each variable is computed according to the following equations:
$\Delta L' = L^*_2 - L^*_1,\qquad \bar{L}' = \dfrac{L^*_1 + L^*_2}{2},\qquad \bar{C} = \dfrac{C^*_1 + C^*_2}{2}$

$a'_i = a^*_i\,(1 + G),\qquad G = \dfrac{1}{2}\left(1 - \sqrt{\dfrac{\bar{C}^7}{\bar{C}^7 + 25^7}}\right)$

$C'_i = \sqrt{a'^{2}_i + b^{*2}_i},\qquad \bar{C}' = \dfrac{C'_1 + C'_2}{2},\qquad \Delta C' = C'_2 - C'_1$

$h'_i = \operatorname{atan2}(b^*_i, a'_i) \bmod 360^\circ$

$\Delta h' = h'_2 - h'_1 \ (\text{shifted by } \pm 360^\circ \text{ when } |h'_2 - h'_1| > 180^\circ),\qquad \Delta H' = 2\sqrt{C'_1 C'_2}\,\sin\!\left(\dfrac{\Delta h'}{2}\right)$

$\bar{H}' = \dfrac{h'_1 + h'_2}{2} \ (\text{shifted by } \pm 180^\circ \text{ when } |h'_1 - h'_2| > 180^\circ)$

$T = 1 - 0.17\cos(\bar{H}' - 30^\circ) + 0.24\cos(2\bar{H}') + 0.32\cos(3\bar{H}' + 6^\circ) - 0.20\cos(4\bar{H}' - 63^\circ)$

$\Delta\theta = 30\exp\!\left[-\left(\dfrac{\bar{H}' - 275^\circ}{25}\right)^{2}\right],\qquad R_C = 2\sqrt{\dfrac{\bar{C}'^7}{\bar{C}'^7 + 25^7}}$

$S_L = 1 + \dfrac{0.015\,(\bar{L}' - 50)^2}{\sqrt{20 + (\bar{L}' - 50)^2}},\qquad S_C = 1 + 0.045\,\bar{C}',\qquad S_H = 1 + 0.015\,\bar{C}'\,T,\qquad R_T = -\sin(2\Delta\theta)\,R_C$
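The main formula and auxiliary quantities above can be combined into the following sketch of the CIEDE2000 computation (the function name is ours, and the parametric factors kL, kC, kH default to 1):

```python
import math

def delta_e_2000(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIEDE2000 color difference between two Lab triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2.0
    G = 0.5 * (1.0 - math.sqrt(Cbar ** 7 / (Cbar ** 7 + 25.0 ** 7)))
    a1p, a2p = (1.0 + G) * a1, (1.0 + G) * a2          # rescaled a axis
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360.0    # hue angles in degrees
    h2p = math.degrees(math.atan2(b2, a2p)) % 360.0
    dLp, dCp = L2 - L1, C2p - C1p
    if C1p * C2p == 0.0:                               # achromatic: no hue diff
        dhp = 0.0
    else:
        dhp = h2p - h1p
        if dhp > 180.0:
            dhp -= 360.0
        elif dhp < -180.0:
            dhp += 360.0
    dHp = 2.0 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2.0)
    Lbp, Cbp = (L1 + L2) / 2.0, (C1p + C2p) / 2.0
    if C1p * C2p == 0.0:                               # mean hue with boundary handling
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180.0:
        hbp = (h1p + h2p) / 2.0
    elif h1p + h2p < 360.0:
        hbp = (h1p + h2p) / 2.0 + 180.0
    else:
        hbp = (h1p + h2p) / 2.0 - 180.0
    T = (1.0 - 0.17 * math.cos(math.radians(hbp - 30.0))
         + 0.24 * math.cos(math.radians(2.0 * hbp))
         + 0.32 * math.cos(math.radians(3.0 * hbp + 6.0))
         - 0.20 * math.cos(math.radians(4.0 * hbp - 63.0)))
    dtheta = 30.0 * math.exp(-(((hbp - 275.0) / 25.0) ** 2))
    RC = 2.0 * math.sqrt(Cbp ** 7 / (Cbp ** 7 + 25.0 ** 7))
    SL = 1.0 + 0.015 * (Lbp - 50.0) ** 2 / math.sqrt(20.0 + (Lbp - 50.0) ** 2)
    SC = 1.0 + 0.045 * Cbp
    SH = 1.0 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2.0 * dtheta)) * RC
    tL, tC, tH = dLp / (kL * SL), dCp / (kC * SC), dHp / (kH * SH)
    return math.sqrt(tL * tL + tC * tC + tH * tH + RT * tC * tH)

# On the gray axis the chroma and hue terms vanish, leaving dL' / S_L.
print(round(delta_e_2000((50.0, 0.0, 0.0), (60.0, 0.0, 0.0)), 4))  # → 9.4706
```

The branching around the hue angle is exactly the "boundary" behavior discussed above, and it is why this distance cannot be minimized with a single linear-algebra solve.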
Preferably, the correction-value changing step (S300) is performed repeatedly so that the sum of the color-difference values, derived by the second color-difference algorithm between the fourth color parameters, obtained by applying the color-parameter conversion means containing the plurality of correction values to the plurality of pieces of first color parameter information, and the preset third color parameters corresponding to the ground truth of the plurality of reference colors, converges according to a preset criterion.

After the sum of the color-difference values has converged according to the preset criterion, the color-parameter conversion means is determined according to step S400. Thereafter, when colorimetry is performed, an accurate colorimetric result (the measured color information of the measurement area) can be derived by applying the color-parameter conversion means to the color parameters in the captured image (the color information of the measurement area in the reflectance information of the face).

The color-parameter extraction unit 1100, the correction-value determination unit 1200, the correction-value changing unit 1300, the color-parameter-conversion-means determination unit 1400, and the colorimetry unit 1500 of Figure 11(B) perform the steps S100, S200, S300, S400, and S500 described above, respectively. Redundant description of them is omitted.

Figure 12 schematically shows the detailed process of the correction-value determination step according to embodiments of the present invention.

The correction-value determination step (S200) determines the correction values so that the color-difference value, derived by the first color-difference algorithm between the second color parameters, obtained by applying the color-parameter conversion means containing the plurality of correction values to the plurality of pieces of first color parameter information, and the preset third color parameters corresponding to the ground truth of the plurality of reference colors, satisfies a preset criterion.

Specifically, a plurality of reference colors are captured in the patch image of Figure 12. The reference colors may correspond to the reference color areas displayed on the patch described with reference to Figure 4.

Next, a first color parameter is extracted for each reference color captured in the patch image. In this case, the position or arrangement of each reference color is preferably recorded in the computing system, and a preprocessing procedure for this is performed: for example, only the patch area is cropped out and resized, the pixel information at each reference color position is extracted, and that information is converted into the first color parameter (or the pixel information itself may be the first color parameter).
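The crop-extract sequence described above might be sketched as follows. The patch coordinates, grid layout, and names here are illustrative assumptions rather than the patent's actual layout, and conversion of the mean pixel values into the first color parameters (e.g. RGB to Lab) would follow as a separate step:

```python
import numpy as np

def extract_reference_colors(image, patch_box, grid=(2, 3)):
    """Crop the patch region, then average the pixels of each reference-color
    cell laid out on a rows x cols grid inside the patch."""
    top, left, bottom, right = patch_box
    patch = image[top:bottom, left:right]          # keep only the patch area
    rows, cols = grid
    h, w = patch.shape[0] // rows, patch.shape[1] // cols
    colors = []
    for r in range(rows):
        for c in range(cols):
            cell = patch[r * h:(r + 1) * h, c * w:(c + 1) * w]
            colors.append(cell.reshape(-1, cell.shape[-1]).mean(axis=0))
    return np.array(colors)                        # one mean color per cell

# Synthetic example: six solid 10x10 cells embedded in a larger image.
cells = np.array([[10.0, 0, 0], [20, 0, 0], [30, 0, 0],
                  [40, 0, 0], [50, 0, 0], [60, 0, 0]])
img = np.zeros((40, 50, 3))
for i, col in enumerate(cells):
    r, c = divmod(i, 3)
    img[5 + r * 10:5 + (r + 1) * 10, 5 + c * 10:5 + (c + 1) * 10] = col
print(extract_reference_colors(img, (5, 5, 25, 35)))  # recovers the six cell colors
```

Averaging over each cell rather than sampling a single pixel makes the extracted first color parameter robust to sensor noise.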
A first color parameter is extracted for each of the plurality of reference colors. For convenience, Figure 12 is described using an example defined by three detailed color parameters based on the Lab color coordinate system.

Then, from the first color parameter, a second color parameter can be derived by the color-parameter conversion means containing a plurality of correction values. The color-parameter conversion means is a means of correcting colors that are captured differently due to external conditions such as lighting, camera settings, camera performance, and the surrounding environment; in embodiments of the present invention it may be implemented by a plurality of correction values, and when there is a single reference color it may be the color-difference value itself between the ground-truth color information and the second color parameter.

As the simplest example of correction values, given L, a, and b, they may be additive adjustments to L, to a, and to b, respectively; alternatively, the internal elements of a matrix that treats L, a, and b as a vector and performs a matrix operation on it may correspond to the correction values.

Meanwhile, for each reference color, the computing system stores the color parameters corresponding to the ground truth of that reference color. By using a patch or the like containing such preset reference colors and setting the color-parameter conversion means in the direction that minimizes the color difference between the captured values of the reference colors and their true values, the color-parameter conversion means can be determined in a first pass.

Preferably, in the correction-value determination step, the correction values may be determined so that, over the plurality of reference colors, the sum of the color-difference values derived by the first color-difference algorithm is minimized.

Preferably, the first color-difference algorithm is an algorithm for which the correction values minimizing the sum of the color-difference values can be found by linear-algebra methods; a representative example is CIEDE76, in which the variables appear only inside expressions of the form (n-th element of the first color parameter - n-th element of the second color parameter).

Figure 13 schematically shows the overall processes of the conversion-means derivation step according to embodiments of the present invention.

In embodiments of the present invention, the color-parameter conversion means that corrects color is determined using information about the reference colors. When the Lab color coordinate system is used, a color can be expressed in three-dimensional space, and a color difference corresponds to the distance between the two points denoting two colors in that space. Below, a method of deriving the correction values of the color-parameter conversion means is described with reference to the Lab color coordinate system.
In the equations below, the vector (L_photo, a_photo, b_photo) corresponds to the color information in the captured image, i.e., the first color parameter. To correct values altered by external factors such as lighting and the camera environment, it is multiplied by a 3×3 matrix T so as to recover the original color (L_orig, a_orig, b_orig). The original color may correspond to the third color parameter, namely the color parameter corresponding to the ground truth of the reference color, and T may correspond to the color-parameter conversion means.
$\begin{bmatrix} L_{\text{orig}} \\ a_{\text{orig}} \\ b_{\text{orig}} \end{bmatrix} = T \begin{bmatrix} L_{\text{photo}} \\ a_{\text{photo}} \\ b_{\text{photo}} \end{bmatrix},\qquad T = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix}$
When the ground-truth color parameter values of the captured reference colors are known in this way, finding T amounts to solving the optimization problem below.
$T^{*} = \underset{T}{\arg\min}\;\sum_{i}\,\left\|\,T\,x_{\text{photo},i} - x_{\text{orig},i}\,\right\|^{2}$

(where $x_{\text{photo},i}$ is the captured color parameter of the $i$-th reference color and $x_{\text{orig},i}$ is its ground-truth color parameter)
This problem can be solved when the color space and its distance are well defined, for example defined linearly.

As an example of the first color-difference algorithm, in color spaces such as sRGB and CIE76 a color can be expressed as a three-dimensional vector in Euclidean space; in this case the optimization problem for T can be solved using ordinary linear algebra. Processes such as tone mapping or gamut mapping may additionally be applied in this case.
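Under such a Euclidean (CIE76-style) distance, the optimization over T reduces to ordinary linear least squares. A sketch with synthetic data (the distortion matrix and reference colors below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: captured Lab rows are the ground truth distorted by a known T.
T_true = np.array([[1.1, 0.02, 0.0],
                   [0.0, 0.95, 0.05],
                   [0.03, 0.0, 1.05]])
lab_gt = rng.uniform([20, -40, -40], [90, 40, 40], size=(24, 3))  # reference colors
lab_photo = lab_gt @ np.linalg.inv(T_true).T   # simulate the distorted capture

# Solve  min_T  sum_i || T @ photo_i - gt_i ||^2  (column-vector convention):
# stacking samples as rows, fit photo @ T^T ≈ gt by least squares.
T_fit = np.linalg.lstsq(lab_photo, lab_gt, rcond=None)[0].T
print(np.allclose(T_fit, T_true))  # → True
```

With noiseless data the recovery is exact; with real captures the least-squares T is the best linear fit, which is then refined by the nonlinear steps described next.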
In step S1000, the information of T corresponding to the color-parameter conversion means under such a first color-difference algorithm, i.e., the correction values, is determined.
Meanwhile, in the case of the CIEDE 2000 method, which is known to be perceptually superior, the distance function corresponding to the color-difference value cannot be expressed as a point-to-point distance in the space; it is obtained from a complex function $\Delta E_{00}(x_1, x_2)$ of the two points. In such a case, the problem is difficult to solve with existing linear-algebra methods.

$T^{*} = \underset{T}{\arg\min}\;\sum_{i}\,\Delta E_{00}\!\left(T\,x_{\text{photo},i},\; x_{\text{orig},i}\right)$
In embodiments of the present invention, after the value of T has been determined by the first color-difference algorithm as described above, the value of T is varied so that the error

$E_{00}(T) = \sum_{i}\,\Delta E_{00}\!\left(T\,x_{\text{photo},i},\; x_{\text{orig},i}\right)$

decreases. Step S1100 performs this process; preferably, the direction of change of T that can reduce $E_{00}(T)$ is determined based on the partial-derivative information at the value of T determined in the correction-value determination step.
That is, in step S1000, the first color parameters and the third color parameters are mapped into a color space in which T can be obtained linearly, such as Lab (CIE76-based), and the initial value of T is generated by solving a linear system.
Then, in step S1100, the change-amount information of T for which the CIEDE2000 error changes, or decreases, as the value of T (each of the plurality of correction values constituting T) varies in Lab space is found, and T is updated (T + ΔT).
More specifically, in S1100 and S1200 what must be found is the change in relative values given as distances, not absolute positions in the space, so the local shape of the function defined on the distance values is expressed by projecting it into Lab space. Optimization proceeds on this basis, but since another such error function is defined for each new set of colors, it is updated iteratively.
Figure 14 schematically shows the operation of the color-parameter conversion means according to embodiments of the present invention.

The color-parameter conversion means is implemented as a matrix containing a plurality of correction values, and the number of columns or rows of the matrix corresponds to the number of elements of the first color parameter.

In Figure 14, T11 through T33 may correspond to the correction values, and the matrix composed of them may correspond to the color-parameter conversion means.

Meanwhile, the second color parameter is derived by matrix-multiplying the first color parameter by the matrix, and the fourth color parameter is likewise derived by matrix-multiplying the first color parameter by the matrix.

In embodiments of the present invention the color-parameter conversion means may be multiplied on the right side of the first color parameter as in Figure 14, but, as described above, it may also be defined so as to be multiplied on the left side of the first color parameter.
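The two conventions are related by a transpose and yield the same result; a quick check with illustrative values:

```python
import numpy as np

T = np.array([[1.1, 0.02, 0.0],
              [0.0, 0.95, 0.05],
              [0.03, 0.0, 1.05]])   # correction values T11..T33 (illustrative)
lab = np.array([52.0, 8.0, -14.0])  # a first color parameter (L, a, b)

left = T @ lab          # column-vector convention: T multiplied on the left
right = lab @ T.T       # row-vector convention: multiplied on the right by T^T
print(np.allclose(left, right))  # → True: the same second color parameter
```

Either convention may be used, as long as it is applied consistently when fitting and when applying the conversion means.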
Figure 15 schematically shows the detailed process within the correction-value determination step according to embodiments of the present invention.

As shown in Figure 15, the first color parameter defined by L, a, and b can be converted into L', a', and b' by the color-parameter conversion means (for example, the matrix T).

Meanwhile, the third color parameter corresponding to the ground truth of the reference color in question may be written LGT, aGT, bGT, and may be expressed as a point in space as in Figure 10. In this case, the spatial distance between the point (L', a', b') and (LGT, aGT, bGT) may correspond to an example of the color-difference value derived by the first color-difference algorithm, and the value of T is found so that the sum of these distances over the plurality of reference colors is minimized.

Basically, the correction-value determination step can operate in the manner described above.

Meanwhile, because the correction-value changing step uses a color-difference algorithm close to human perception, for example CIEDE2000, the color difference between two points in the L, a, b coordinate system does not behave as a perfectly linear straight line; the space of the coordinate system itself is deformed. Therefore, the value of T is varied: the behavior of the change in T is sought such that the sum of the color differences, between the ground truth and the captured values of the plurality of reference colors transformed by T, is minimized, T is changed in the direction that reduces this sum, and the optimal T is found.
Figure 16 schematically shows the detailed process of the correction-value changing step according to embodiments of the present invention.

In Figure 16, similarly to Figure 12, first color parameters are derived for the reference color areas captured from the patch image.

Then, likewise, after the color-parameter conversion means is applied to the first color parameters to derive fourth color parameters, the correction values of the color-parameter conversion means are changed so that the sum of the color-difference values, produced by the second color-difference algorithm, which is closer to human perception than the first, between the colors defined by the fourth color parameters and the colors defined by the third color parameters, decreases.

Specifically, the correction-value changing step defines, as an error function, the sum of the color-difference values derived by the second color-difference algorithm between the fourth color parameters for the plurality of reference colors and the third color parameters, and changes some or all of the correction values based on change-amount information that includes the first partial derivatives of the error function with respect to the correction values.
For example, when the color-parameter conversion means is defined as a matrix as in Figure 14, the first partial derivatives may correspond to a vector of nine values: (the derivative of the error function with respect to T11, the derivative with respect to T12, the derivative with respect to T13, the derivative with respect to T21, the derivative with respect to T22, the derivative with respect to T23, the derivative with respect to T31, the derivative with respect to T32, and the derivative with respect to T33).

As the simplest example, when the vector of partial derivatives at the current T is (-10, 0, 0, 0, 0, 0, 0, 0, 0), the error function can be seen to decrease by increasing the value of T11, and such an increase of T11 may correspond to the change-amount information.
Preferably, the correction-value changing step defines, as an error function, the sum of the color-difference values derived by the second color-difference algorithm between the fourth color parameters for the plurality of reference colors and the third color parameters, and changes some or all of the correction values based on change-amount information that includes the second partial derivatives of the error function with respect to the correction values.

The second partial derivatives may correspond to a Hessian matrix; as described above, when the color-parameter conversion means corresponds to a 3×3 matrix, the first partial derivatives can be expressed as a 1×9 vector and the second partial derivatives as a 9×9 matrix.
도 17은 본 발명의 실시예들에 따른 보정수치변경단계의 세부 단계들을 개략적으로 도시한다.Figure 17 schematically shows detailed steps of the correction value changing step according to embodiments of the present invention.
전술한 바와 같이, 상기 보정수치변경단계는 상기 복수의 기준색상에 대한 제4색상파라미터와 상기 제3색상파라미터 사이의 제2색차이 알고리즘에 의하여 도출되는 색차이 값의 합계를 에러함수로 정의하여 수행될 수 있다.As described above, the correction value changing step defines the sum of the color difference values derived by the second color difference algorithm between the fourth color parameter and the third color parameter for the plurality of reference colors as an error function. It can be done.
단계 S2000에서는, 상기 에러함수의 상기 보정수치에 대한 1차편미분값에 의하여 상기 보정수치 전체의 변경방향에 대한 변경벡터를 결정한다. 즉, 1차편미분값은 T의 변화방향, 예를들어 도 9와 같은 형태로 색상파라미터변환수단이 정의되는 경우, T의 9개의 보정수치에 대한 변화방향(예를들어 (1,1,0,0,0,0,0,0,0)이 1차편미분값에 해당하는 경우에는 T11, 및 T11만 동일한 크기로 감소시키고, 나머지는 그대로 유지)를 결정한다.In step S2000, a change vector for the change direction of the entire correction value is determined based on the first partial differential value of the error function with respect to the correction value. In other words, the first partial differential value is the direction of change of T, for example, if the color parameter conversion means is defined in the form shown in FIG. 9, the direction of change for the 9 correction values of T (e.g. (1, 1, 0) If ,0,0,0,0,0,0) corresponds to the first partial differential value, only T11 and T11 are reduced to the same size, and the rest are kept as is) is determined.
For example, the change vector can also be expressed as a matrix with the same dimensions as T.
Figure PCTKR2023005838-appb-img-000026
Based on the second-order partial derivative of the error function with respect to the correction values, the magnitude of the change vector is changed while its direction is maintained; the change vector is then applied to change some or all of the correction values.
That is, the second-order partial derivative determines the magnitude of the change vector. For example, in the matrix above, T can move in a nine-dimensional coordinate space, and the error function can be evaluated at each T. Since the first-order partial derivative identifies a direction that can reduce the error, and the second-order partial derivative provides curvature information of the error function, the optimal moving distance along the change vector can be derived.
In other words, if the change vector extracted from the first-order partial derivative is ΔT1, a scalar value a can be derived from the second-order partial derivative, and a·ΔT1 corresponds to the final change amount information for T.
Thereafter, T + a·ΔT1 can become the new color parameter conversion means, and the correction value changing step (S300) can be repeated in this state. As the iterations proceed, the change in the error function may decrease, or fall below a certain threshold; at that point, S400 is performed and the color parameter conversion means can be determined.
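The repeated update T ← T + a·ΔT1, where the first derivative supplies the direction of change and the second derivative (curvature) fixes the step size, can be sketched for a single correction value as follows. The quadratic `error_fn` is a hypothetical stand-in for the color-difference sum; all names are illustrative.

```python
def error_fn(t):
    return (t - 3.0) ** 2 + 1.0  # dummy error, minimized at t = 3

def first_deriv(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

def second_deriv(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

t = 0.0  # initial value from the correction value determination step (S200)
for _ in range(50):
    g = first_deriv(error_fn, t)    # direction of change (the dT1 analogue)
    c = second_deriv(error_fn, t)   # curvature fixes the scalar step scale a
    step = -g / c                   # a * dT1
    t += step
    if abs(step) < 1e-9:            # change fell below a threshold (-> S400)
        break
```

For the nine-parameter case, `t` becomes a length-9 vector, `g` a gradient vector, and `c` the 9×9 Hessian, with the step obtained by solving a linear system instead of dividing.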
Preferably, the first-order and second-order partial derivatives can be computed numerically by approximation with an nth-order Taylor series.
Preferably, the error function may be defined as f(x), where x may correspond to a matrix or vector such as T described above, and f(x) may be a differentiable function that outputs a scalar value.
In this case, an initial value of x, that is, x0, can be found by the correction value determination step (S200) described above, and x can then be updated by deriving first-order and second-order partial derivative information at x0, thereby finding the x that minimizes f(x).
pk at step k, corresponding to the direction of change of x (the change vector described above), can be expressed as follows from the solution of the Newton equation:
Figure PCTKR2023005838-appb-img-000027
where
Figure PCTKR2023005838-appb-img-000028
may correspond to an approximation of the Hessian matrix corresponding to the second-order partial derivative, which may be updated at each step, and
Figure PCTKR2023005838-appb-img-000029
may correspond to the first-order partial derivative (gradient) at xk.
A line search in the pk direction can be used to find xk+1, the next value of x; this can be implemented by finding the value of r that minimizes the following:
Figure PCTKR2023005838-appb-img-000030
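A hedged sketch of the Newton direction solved from B_k·p_k = −∇f(x_k), followed by a line search over r, reduced to two parameters for readability. The quadratic `f`, the exact 2×2 Hessian `B`, and the brute-force scan are illustrative simplifications, not the disclosed implementation.

```python
def f(x):
    # stand-in error function with minimum at (1, -2)
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]

def newton_dir(B, g):
    # p = -B^{-1} g for a 2x2 system, via Cramer's rule
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [-(B[1][1] * g[0] - B[0][1] * g[1]) / det,
            -(-B[1][0] * g[0] + B[0][0] * g[1]) / det]

def line_search(f, x, p):
    # crude scan over r in [0, 2] for the value minimizing f(x + r*p)
    best_r, best_v = 0.0, f(x)
    for i in range(1, 201):
        r = i * 0.01
        v = f([x[0] + r * p[0], x[1] + r * p[1]])
        if v < best_v:
            best_r, best_v = r, v
    return best_r

x = [0.0, 0.0]                 # x0 from the determination step
B = [[2.0, 0.0], [0.0, 20.0]]  # exact Hessian here; a quasi-Newton method
                               # would instead update an approximation B_k
for _ in range(10):
    g = grad(x)
    p = newton_dir(B, g)
    r = line_search(f, x, p)
    x = [x[0] + r * p[0], x[1] + r * p[1]]
```

With the exact Hessian of a quadratic, a single Newton step with r = 1 already reaches the minimum; the loop and scan illustrate the general iterative structure.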
In this way, in embodiments of the present invention, the optimal T can be found by repeatedly deriving a change amount for T in the direction that reduces the value of the error function at the current T. In addition to the method described above, various known numerical methods, such as the BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm, can be used.
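For instance, the same minimization could be delegated to an off-the-shelf quasi-Newton optimizer. The sketch below assumes NumPy and SciPy are available and uses a dummy quadratic error in place of the CIEDE2000-style sum over the reference colors; it is not the disclosed implementation.

```python
import numpy as np
from scipy.optimize import minimize

def error_fn(t_flat):
    T = t_flat.reshape(3, 3)
    # dummy error with its minimum at T = I, standing in for the
    # sum of color differences over the reference colors
    return float(np.sum((T - np.eye(3)) ** 2))

T0 = np.eye(3).ravel() + 0.1   # initial T from the determination step (S200)
res = minimize(error_fn, T0, method="BFGS")
T_opt = res.x.reshape(3, 3)    # optimized color parameter conversion matrix
```

BFGS maintains an internal approximation to the Hessian, matching the "approximation of the Hessian matrix updated at each step" described above.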
Figure 18 schematically shows the internal configuration of a computing device according to an embodiment of the present invention.
As shown in Figure 18, the computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600. Here, the computing device 11000 may correspond to the computing device 1000 shown in Figure 1.
The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory. The memory 11200 may contain software modules, instruction sets, or other various data necessary for the operation of the computing device 11000.
Access to the memory 11200 by other components, such as the processor 11100 or the peripheral interface 11300, may be controlled by the processor 11100.
The peripheral interface 11300 may couple input and/or output peripherals of the computing device 11000 to the processor 11100 and the memory 11200. The processor 11100 may execute software modules or instruction sets stored in the memory 11200 to perform various functions for the computing device 11000 and to process data.
The input/output subsystem may couple various input/output peripherals to the peripheral interface 11300. For example, the input/output subsystem may include a controller for coupling peripherals such as a monitor, keyboard, mouse, or printer, or, if necessary, a touch screen or sensor, to the peripheral interface 11300. According to another aspect, input/output peripherals may be coupled to the peripheral interface 11300 without going through the input/output subsystem.
The power circuit 11500 may supply power to all or some of the components of the terminal. For example, the power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for generating, managing, and distributing power.
The communication circuit 11600 may enable communication with other computing devices using at least one external port.
Alternatively, as described above, the communication circuit 11600 may, if necessary, include an RF circuit to transmit and receive RF signals, also known as electromagnetic signals, thereby enabling communication with other computing devices.
The embodiment of Figure 18 is only an example of the computing device 11000; the computing device 11000 may have a configuration or arrangement in which some of the components shown in Figure 18 are omitted, additional components not shown in Figure 18 are included, or two or more components are combined. For example, a computing device for a communication terminal in a mobile environment may further include a touch screen or sensors in addition to the components shown in Figure 18, and the communication circuit 11600 may include circuitry for various RF communication methods (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.). The components that can be included in the computing device 11000 may be implemented as hardware, software, or a combination of both, including integrated circuits specialized for one or more signal-processing tasks or applications.
Methods according to embodiments of the present invention may be implemented in the form of program instructions executable by various computing devices and recorded on a computer-readable medium. In particular, the program according to this embodiment may be a PC-based program or an application dedicated to mobile terminals. An application to which the present invention is applied may be installed on the computing device 11000 through a file provided by a file distribution system. As an example, the file distribution system may include a file transmission unit (not shown) that transmits the file in response to a request from the computing device 11000.
The devices described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. A processing device may run an operating system (OS) and one or more software applications executed on the operating system. A processing device may also access, store, manipulate, process, and create data in response to the execution of software. For convenience of understanding, a single processing device is sometimes described as being used, but a person of ordinary skill in the art will recognize that a processing device may include a plurality of processing elements and/or multiple types of processing elements. For example, a processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
Software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired, or may command a processing device independently or collectively. Software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by a processing device or to provide instructions or data to a processing device. Software may be distributed over networked computing devices and stored or executed in a distributed manner. Software and data may be stored on one or more computer-readable recording media.
The method according to the embodiments may be implemented in the form of program instructions executable by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the embodiments, or may be known to and usable by those skilled in the art of computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
According to one embodiment of the present invention, the colorimetric method can be easily performed even with a personal mobile terminal such as a smartphone.
According to one embodiment of the present invention, by performing colorimetry on an image in which the reference colors included in a patch are captured together with the face, the patch and the face can be compared under the same environment, and the color information of the face can thereby be derived.
According to one embodiment of the present invention, since shape information is derived for the face and the patch separately, the differences in their respective characteristics and states in the image can be compensated for, and accurate reflectance information can be derived.
According to one embodiment of the present invention, by deriving shape information and light information from an acquired image, reflectance information from which the influence of shadows created by the light irradiating the shape has been removed can be derived from the image.
According to one embodiment of the present invention, since the reflectance information is derived by subtracting from the image the shape information and light information that affect it, more accurate reflectance information can be derived than when reflectance information is taken directly from the image.
According to one embodiment of the present invention, since the patch includes a plurality of reference colors, it can accommodate a wide variety of facial colors.
According to one embodiment of the present invention, since the ground truth of each reference color of the patch is known, the color parameter conversion means can be derived, and color difference information can be derived, based on the ground truth and the color information of each reference color in the image (the reference colors in the reflectance information of the patch).
According to one embodiment of the present invention, in the model learning step a learning image is input, and the derived learning shape information, learning light information, and learning reflectance information are combined and compared with the learning image, so that the detailed parameter values or filter information of the artificial neural networks can be learned to produce a well-defined output (the learning image).
According to embodiments of the present invention, a colorimetric method, device, and computer-readable medium can be provided that are robust to external conditions such as camera settings, performance, and lighting, and that reflect color difference information in a non-linear color space capable of more accurately reproducing the color differences perceived by the human eye.
According to embodiments of the present invention, colorimetry can be performed by reflecting non-linear color difference information, such as CIEDE2000, that more closely approximates the actual perception of the human eye.
According to embodiments of the present invention, colorimetry can easily be performed even with a personal mobile terminal such as a smartphone.
According to embodiments of the present invention, non-linearly defined color differences can be approximated and optimized through iterative calculations in a color space, thereby achieving both computational efficiency and colorimetric accuracy.
Although the embodiments have been described above with reference to limited examples and drawings, those skilled in the art will be able to make various modifications and variations based on the above description. For example, appropriate results may be achieved even if the described techniques are performed in a different order than described, and/or the components of the described systems, structures, devices, circuits, and the like are combined in forms different from those described, or are replaced or substituted by other components or equivalents.
Therefore, other implementations, other embodiments, and equivalents of the claims also fall within the scope of the claims set forth below.

Claims (10)

A colorimetric method performed in a computing system including one or more processors and one or more memories, the method comprising:
an image derivation step of deriving, from an original image, a face image for a face area and a patch image for a patch area;
a first subtraction application step of inputting the face image into an inference model including one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtractively applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which the effect of color changes caused by the shape of the face under the irradiating light is minimized;
a second subtraction application step of inputting the patch image, which includes one or more reference color areas, into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtractively applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which the effect of color changes caused by the shape of the patch under the irradiating light is minimized; and
a face color derivation step of deriving a color parameter conversion means based on the difference between the color information of the reference color areas in the reflectance information of the patch and the color information of the ground truth of the reference colors, and applying the color parameter conversion means to the color information of a measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
The colorimetric method according to claim 1, wherein
the shape information includes 3D information according to the actual shape, derived from the 2D information in the image about the object the image contains,
the light information includes the intensity and direction of the light irradiating the object contained in the image, and
the reflectance information includes per-pixel color information from which the influence of the shadows cast on the shape, according to the intensity and direction of the light irradiating the shape of the object contained in the image, has been removed.
The colorimetric method according to claim 1, wherein
the patch includes a plurality of reference color areas, each reference color area having the same shape and area and being arranged so as to be distinguished from the others.
The colorimetric method according to claim 1, wherein
the inference model includes a common model, a surface normal vector model, a light information inference model, and a reflectance model,
the common model includes one or more artificial neural networks and derives, from an input image, feature information that is commonly input to the surface normal vector model, the light information inference model, and the reflectance model,
the surface normal vector model includes one or more artificial neural networks and derives, from the feature information, the shape information, which is 3D structural information capturing the actual shape contained in the image, which is 2D information,
the light information inference model includes one or more artificial neural networks and derives, from the feature information, the light information including the intensity and direction of light in the image, and
the reflectance model includes one or more artificial neural networks and derives, from the feature information, the reflectance information, which is per-pixel color information.
The colorimetric method according to claim 1, wherein
the first subtraction application step includes:
a facial feature information derivation step of inputting the face image into a common model including one or more artificial neural networks to derive facial feature information;
a face shape derivation step of inputting the facial feature information into a surface normal vector model including one or more artificial neural networks to derive the shape information of the face;
a light information derivation step of inputting the facial feature information into a light information inference model including one or more artificial neural networks to derive the light information including the intensity and direction of light in the face image; and
a face reflectance derivation step of applying the shape information of the face and the light information to the face image to derive the reflectance information of the face in which only the color of the face is reflected, and
the second subtraction application step includes:
a patch feature information derivation step of inputting the patch image into the common model including one or more artificial neural networks to derive patch feature information;
a patch shape derivation step of inputting the patch feature information into the surface normal vector model including one or more artificial neural networks to derive the shape information of the patch; and
a patch reflectance derivation step of applying the shape information of the patch and the light information to the patch image to derive the reflectance information of the patch in which only the color of the patch is reflected.
The colorimetric method according to claim 1, wherein
the colorimetric method further includes a model learning step, and
the model learning step performs:
a learning shape information derivation step of inputting a learning image into the common model and inputting the learning feature information of the common model into the surface normal vector model to derive learning shape information;
a learning light information derivation step of inputting the learning feature information into the light information inference model to derive learning light information;
a learning reflectance information derivation step of inputting the learning feature information into the reflectance model, which derives reflectance information, to derive learning reflectance information; and
a predicted learning image derivation step of combining the learning shape information, the learning light information, and the learning reflectance information to derive a predicted learning image, wherein the detailed parameter values or filter information of the common model, the surface normal vector model, the light information inference model, and the reflectance model are learned such that the difference between the predicted learning image and the learning image is reduced or minimized.
The colorimetric method according to claim 1, wherein
the face color derivation step includes a conversion means derivation step, and
the conversion means derivation step includes:
a color parameter extraction step of extracting, for each of the plurality of reference color areas in the reflectance information of the patch, first color parameter information in a preset color space;
a correction value determination step of determining the correction values such that the color difference value derived by a first color difference algorithm, between second color parameters derived by applying a color parameter conversion means including a plurality of correction values to the plurality of pieces of first color parameter information and preset third color parameters corresponding to the ground truth of the plurality of reference colors, satisfies a preset criterion; and
복수의 상기 제1색상파라미터 정보에 대하여 복수의 보정수치를 포함하는 색상파라미터변환수단을 적용하여 도출되는 제4색상파라미터와 상기 복수의 기준색상에 대한 그라운드트루스에 해당하는 기설정된 제3색상파라미터 사이의 제2색차이 알고리즘에 의하여 도출되는 색차이 값이 감소하는 방향으로 상기 보정수치 중 일부 혹은 전체를 변경하는 보정수치변경단계;를 포함하는, 측색 방법.Between a fourth color parameter derived by applying a color parameter conversion means including a plurality of correction values to a plurality of first color parameter information and a preset third color parameter corresponding to the ground truth for the plurality of reference colors. A colorimetric method including; a correction value changing step of changing some or all of the correction values in a direction that reduces the color difference value derived by the second color difference algorithm of .
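A minimal sketch of the correction value determination step above, assuming (purely for illustration) a 3x3 linear conversion matrix and a squared-error criterion over the reference patches; the patch colors, ground-truth values, and matrix size are hypothetical.

```python
import numpy as np

# Hypothetical first color parameters of four reference color areas
# (rows = patches) and their preset ground-truth color parameters.
measured = np.array([[0.9, 0.1, 0.1],
                     [0.1, 0.8, 0.1],
                     [0.1, 0.1, 0.9],
                     [0.5, 0.5, 0.5]])
truth = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.5, 0.5, 0.5]])

# Determine the correction values: solve measured @ M_T ~= truth in the
# least-squares sense, so that corrected colors approach the ground truth.
M_T, *_ = np.linalg.lstsq(measured, truth, rcond=None)
M = M_T.T                                  # corrected_color = M @ color

corrected = measured @ M.T
residual = np.linalg.norm(corrected - truth)   # after correction
baseline = np.linalg.norm(measured - truth)    # before correction
```

The fitted matrix plays the role of the color parameter conversion means: applying it brings the measured reference colors closer to their ground truth than the uncorrected values.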
The colorimetric method according to claim 7,
wherein the color difference value produced by the first color difference algorithm is linear with respect to the color parameters, the color difference value produced by the second color difference algorithm is nonlinear with respect to the color parameters, and the first color difference algorithm and the second color difference algorithm are thus different from each other,
the color parameter conversion means is implemented as a matrix including the plurality of correction values, the number of columns or rows of the matrix corresponding to the number of elements of the first color parameter, and
the second color parameter and the fourth color parameter are each derived by matrix-multiplying the first color parameter by the matrix.
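The correction value changing step, paired with a nonlinear color difference, can be sketched as numerical gradient descent on the matrix entries. The Euclidean delta-E stand-in below is an assumption, since the claim does not fix the second color difference algorithm's formula, and all patch values are hypothetical.

```python
import numpy as np

def delta_e(a, b):
    """Nonlinear color difference (Euclidean distance), standing in for
    the second color difference algorithm (an assumption)."""
    return float(np.sqrt(np.sum((a - b) ** 2)))

def refine_matrix(M, measured, truth, lr=0.05, steps=200, eps=1e-5):
    """Correction value changing step: nudge each matrix entry
    (correction value) in the direction that reduces the total
    nonlinear color difference, via forward-difference gradients."""
    M = M.copy()

    def total(Mx):
        return sum(delta_e(Mx @ p, t) for p, t in zip(measured, truth))

    for _ in range(steps):
        grad = np.zeros_like(M)
        for i in range(M.shape[0]):
            for j in range(M.shape[1]):
                Mp = M.copy()
                Mp[i, j] += eps
                grad[i, j] = (total(Mp) - total(M)) / eps
        M -= lr * grad
    return M

# Hypothetical first color parameters and ground truth for two patches.
measured = [np.array([0.9, 0.1, 0.1]), np.array([0.1, 0.8, 0.1])]
truth = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

M0 = np.eye(3)                   # matrix size matches the 3 color elements
M1 = refine_matrix(M0, measured, truth)
err0 = sum(delta_e(M0 @ p, t) for p, t in zip(measured, truth))
err1 = sum(delta_e(M1 @ p, t) for p, t in zip(measured, truth))
```

Starting from the identity matrix, iteratively changing some or all correction values reduces the total color difference, as the claim requires.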
A colorimetric device implemented in a computing system including one or more processors and one or more memories,
wherein the colorimetric device performs:
an image derivation step of deriving, from an original image, a face image of a face area and a patch image of a patch area;
a first subtraction application step of inputting the face image into an inference model including one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtractively applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which the effect of color variation caused by the shape of the illuminated face is minimized;
a second subtraction application step of inputting a patch image including one or more reference color areas into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtractively applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which the effect of color variation caused by the shape of the illuminated patch is minimized; and
a face color derivation step of deriving a color parameter conversion means based on the difference between color information of a reference color area in the reflectance information of the patch and color information of the ground truth of the reference color, and applying the color parameter conversion means to color information of a measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
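The subtraction application steps above amount to removing a shading component (shape plus light) from the observed image to leave reflectance. A minimal sketch under the common Lambertian assumption (image = reflectance x shading), where the subtraction happens in log space; the flat surface, known normals, and single directional light are illustrative stand-ins, not the patent's trained inference models.

```python
import numpy as np

# Lambertian image formation: image = reflectance * shading, where
# shading comes from a surface normal map and one directional light.
# In the claims these components come from trained inference models;
# here they are given directly for illustration.
h, w = 4, 4
reflectance = np.full((h, w), 0.6)                    # true albedo
normals = np.zeros((h, w, 3))
normals[..., 2] = 1.0                                 # flat, frontal surface
light_dir = np.array([0.0, 0.0, 1.0])                 # frontal light
intensity = 0.8

shading = intensity * np.clip(normals @ light_dir, 0.0, None)
image = reflectance * shading                         # rendered observation

# "Subtractive application": removing the shape/light influence in log
# space recovers reflectance, since log(image) - log(shading) = log(albedo).
eps = 1e-8
recovered = np.exp(np.log(image + eps) - np.log(shading + eps))
```

Because the multiplicative shading term becomes additive in log space, subtracting it isolates the color that the surface itself reflects, which is what the colorimetric steps operate on.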
A computer program stored on a computer-readable medium and comprising a plurality of instructions executed by one or more processors,
wherein the computer program performs:
an image derivation step of deriving, from an original image, a face image of a face area and a patch image of a patch area;
a first subtraction application step of inputting the face image into an inference model including one or more artificial neural networks to derive shape information of the face and light information including the intensity and direction of light in the face image, and subtractively applying the shape information of the face and the light information to the face image to derive reflectance information of the face in which the effect of color variation caused by the shape of the illuminated face is minimized;
a second subtraction application step of inputting a patch image including one or more reference color areas into an inference model including one or more artificial neural networks to derive shape information of the patch, and subtractively applying the shape information of the patch and the light information to the patch image to derive reflectance information of the patch in which the effect of color variation caused by the shape of the illuminated patch is minimized; and
a face color derivation step of deriving a color parameter conversion means based on the difference between color information of a reference color area in the reflectance information of the patch and color information of the ground truth of the reference color, and applying the color parameter conversion means to color information of a measurement area in the reflectance information of the face to derive colorimetric color information of the measurement area.
PCT/KR2023/005838 2022-05-04 2023-04-28 Method and device for measuring color in image, and computer-readable medium WO2023214748A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220055230A KR20230155718A (en) 2022-05-04 2022-05-04 Method, Device and Computer-Readable Medium for Color Estimation in Image
KR10-2022-0055230 2022-05-04

Publications (1)

Publication Number Publication Date
WO2023214748A1 true WO2023214748A1 (en) 2023-11-09

Family

ID=88646655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/005838 WO2023214748A1 (en) 2022-05-04 2023-04-28 Method and device for measuring color in image, and computer-readable medium

Country Status (2)

Country Link
KR (1) KR20230155718A (en)
WO (1) WO2023214748A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060095173A (en) * 2005-02-28 2006-08-31 한국과학기술원 An illumination reflectance model based image distortion elimination method
KR20170100717A (en) * 2016-02-25 2017-09-05 한국전자통신연구원 Apparatus and method for analyzing skin condition using spectral reflectance estimation
JP2018034064A (en) * 2012-07-27 2018-03-08 ポーラ化成工業株式会社 Skin state discrimination method based on color information obtained by colorimetric device
KR102012219B1 (en) * 2019-06-12 2019-08-21 황인오 Apparatus and method for measuring skin color
KR20210000872A (en) * 2019-06-26 2021-01-06 유수연 Method for deleting an object in image using artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180082172A (en) 2017-01-10 2018-07-18 트라이큐빅스 인크. Method and system for skin analysis consifering illumination characteristic

Also Published As

Publication number Publication date
KR20230155718A (en) 2023-11-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23799630

Country of ref document: EP

Kind code of ref document: A1