WO2019049298A1 - 3D data system and 3D data processing method - Google Patents

3D data system and 3D data processing method

Info

Publication number
WO2019049298A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
subject
individuality
unit
classification
Prior art date
Application number
PCT/JP2017/032396
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
英弟 謝
道久 井口
幸久 楠田
道明 後藤
Original Assignee
株式会社Vrc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Vrc
Priority to CN201780090568.9A (CN110637324B)
Priority to PCT/JP2017/032396 (WO2019049298A1)
Priority to JP2018532802A (JP6489726B1)
Publication of WO2019049298A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • the present invention relates to techniques for handling 3D modeling data.
  • Patent Document 1 describes a technique for causing a human 3D model to take a different pose or perform a motion. More specifically, according to Patent Document 1, the subject is photographed after taking a basic pose and modified poses other than the basic pose, 3D modeling data of the subject is generated for each pose, and the models of different poses are compared to detect the fulcrum of rotation of each part.
  • The present invention provides a technique for giving 3D modeling data an individuality other than its appearance.
  • The present invention provides a 3D data system having: an image acquisition unit for acquiring a photographed image obtained by photographing the surface of a target subject and distance information indicating the distance from a reference point to the surface; an addition unit for adding, to 3D modeling data of the subject obtained from the photographed image and the distance information, individuality data indicating a dynamic individuality of the 3D modeling data; and a storage unit for storing the 3D modeling data to which the individuality data has been added.
  • The 3D data system may include an estimation unit that estimates the individuality of the target subject using the 3D modeling data, and the addition unit may add, as the individuality data, data indicating the individuality estimated by the estimation unit.
  • The 3D data system may include a feature acquisition unit that acquires a plurality of feature points from the captured image and a calculation unit that calculates a feature amount related to the positional relationship of the plurality of feature points, and the estimation unit may use the feature amount to estimate the individuality of the target subject.
  • The 3D data system may include a classification unit that classifies the target subject into one of a plurality of groups, and the estimation unit may estimate the individuality of the subject based on the classification.
  • The classification unit may classify a plurality of existing subjects into the plurality of groups based on feature amounts obtained from photographed images of those subjects, and may classify the target subject into one of the plurality of groups obtained based on the feature amounts of the existing subjects.
  • The feature acquisition unit may acquire a static feature amount of the subject from the captured image, and the classification unit may classify the subject into one of the plurality of groups using the static feature amount.
  • The image acquisition unit may acquire a plurality of photographed images of the subject, each in a different pose, the feature acquisition unit may acquire a dynamic feature amount of the subject from the plurality of photographed images, and the classification unit may classify the subject into one of the plurality of groups using the dynamic feature amount.
  • Each of the plurality of groups may have representative individuality data indicating an individuality representative of the group, and the estimation unit may estimate the individuality indicated by the representative individuality data of the group to which the subject belongs as the individuality of the subject.
  • Each of the plurality of groups may have a representative feature amount indicating a feature amount representative of the group and representative individuality data indicating an individuality representative of the group, a correction unit may correct the representative individuality data according to the difference between the representative feature amount and the feature amount obtained from the subject, and the estimation unit may estimate the individuality indicated by the corrected representative individuality data as the individuality of the subject.
  • The classification unit may classify a plurality of existing subjects into the plurality of groups based on feature amounts obtained from photographed images of those subjects, and, for each of the plurality of groups, the estimation unit may use as the representative individuality data the individuality data acquired from a plurality of photographed images with different poses obtained for at least some of the existing subjects belonging to the group.
  • The classification unit may classify the subject according to at least one level selected from among a plurality of levels that differ in the number of groups after classification. When a first level of the plurality of levels is used, the estimation unit may estimate the individuality of the subject based on the difference between a representative value of the feature amounts in the group after classification and the feature amount of the subject; when a level with more groups after classification than the first level is used, the estimation unit may estimate the individuality representative of the group after classification as the individuality of the subject.
  • The present invention also provides a 3D data processing method comprising: acquiring a photographed image obtained by photographing the surface of a subject and distance information indicating the distance from a reference point to the surface; adding, to 3D modeling data of the subject obtained from the photographed image and the distance information, individuality data indicating a dynamic individuality of the 3D modeling data; and storing the 3D modeling data to which the individuality data has been added.
  • The present invention further provides a 3D data system having: an image acquisition unit for acquiring a photographed image obtained by photographing the surface of a target subject and distance information indicating the distance from a reference point to the surface; an addition unit for adding, to 3D modeling data of the target subject, bone data indicating a bone structure for giving motion to the 3D model and metadata indicating attributes of the 3D model; and a storage unit for storing the 3D modeling data to which the bone data and the metadata have been added.
  • The metadata may include data for limiting the position or the angle of view of a virtual camera used when displaying the 3D model placed in a virtual space.
  • The metadata may include audio data of the 3D model.
  • FIG. 2 is a diagram illustrating the functional configuration of the 3D data system 1.
  • FIG. 3 is a diagram illustrating the configuration of the 3D data input system 10.
  • FIG. 4 is a diagram illustrating the hardware configuration of the 3D data processing system 20.
  • FIG. 5 is a diagram illustrating an operation according to an embodiment of the 3D data system 1.
  • FIG. 10 is a diagram illustrating the classification of 3D modeling data.
  • FIG. 1 is a schematic view of a 3D data system 1 according to an embodiment.
  • The 3D data system 1 generates 3D modeling data using an image (hereinafter referred to as a “captured image”) obtained by capturing the surface of a subject, stores it, and provides the 3D modeling data to an application in response to a request.
  • A subject refers to an object for which 3D modeling data is to be generated, and includes living things such as humans and animals as well as inanimate things such as dolls and furniture.
  • 3D modeling data refers to data for displaying a 3D model.
  • A 3D model refers to data representing a solid object in a 3D virtual space.
  • The 3D model includes at least information on the surface shape of the subject and the color of that surface.
  • the 3D data system 1 includes a 3D data input system 10, a 3D data processing system 20, and an application 30.
  • the 3D data input system 10 generates 3D modeling data from the captured image.
  • the 3D data input system 10 includes, for example, a so-called 3D scanner.
  • the 3D data processing system 20 processes and stores 3D modeling data generated by the 3D data input system 10.
  • the application 30 provides the user with a product or service using 3D modeling data.
  • The 3D data system 1 may have a plurality of 3D data input systems 10, a plurality of applications 30, or both.
  • In this example, the 3D data input system 10 is a local system, and the 3D data processing system 20 is a system on a network, a so-called cloud system.
  • FIG. 2 is a diagram illustrating a functional configuration of the 3D data system 1.
  • the 3D data system 1 includes an acquisition unit 11, a generation unit 12, an addition unit 21, a storage unit 22, and an output unit 25.
  • the acquisition unit 11 acquires a photographed image and distance information (an example of an image acquisition unit).
  • the photographed image is an image obtained by photographing the surface of the subject.
  • the distance information is information indicating the distance from the reference point to the surface of the subject.
  • the acquisition unit 11 acquires a photographed image and distance information from a photographing device such as a camera and a distance sensor (both not shown in FIG. 2).
  • the generation unit 12 generates 3D modeling data of the subject using the captured image and the distance information.
  • the adding unit 21 adds dynamic personality data of the 3D modeling data to the 3D modeling data generated by the generating unit 12.
  • the personality data is data indicating the personality of the 3D model (in many cases, corresponding to the personality of the subject). In this example, the personality data shows a dynamic personality.
  • the storage unit 22 stores various data, for example, 3D modeling data to which individuality data is added.
  • the output unit 25 outputs 3D modeling data and individuality data in response to a request from the application 30.
  • the 3D data system 1 further includes an estimation unit 24.
  • the estimation unit 24 estimates the individuality of the target subject using the 3D modeling data.
  • the personality data that the addition unit 21 adds to the 3D modeling data includes data indicating the personality estimated by the estimation unit 24.
  • the 3D data system 1 further includes an acquisition unit 13 and a calculation unit 14.
  • the acquisition unit 13 acquires a plurality of feature points from 3D modeling data (or from a captured image) (an example of a feature acquisition unit). These feature points are points (which will be described later with reference to FIG. 7 etc.) indicating features relating to the shape of the 3D model (or subject).
  • the calculation unit 14 calculates feature quantities related to the positional relationship of a plurality of feature points.
  • the estimation unit 24 estimates the personality of the subject to be an object using this feature amount.
  • the 3D data system 1 further includes a classification unit 23.
  • the classification unit 23 classifies the subject into one of a plurality of groups.
  • the estimation unit 24 estimates the personality of the subject to be an object based on this classification.
  • The feature amount relating to the positional relationship of the plurality of feature points indicates a static feature of the subject (3D model).
  • A static feature is a feature obtained from the subject in a single pose.
  • A dynamic feature refers to a feature (described later with reference to FIG. 11 and the like) for giving a plurality of poses (or a motion) to a 3D model.
  • The personality estimated by the estimation unit 24 is a dynamic personality of the subject, that is, a dynamic feature.
  • In other words, the estimation unit 24 estimates a dynamic feature using a static feature.
  • In this example, the acquisition unit 13 and the calculation unit 14 are implemented in the 3D data input system 10, and the addition unit 21, the storage unit 22, the classification unit 23, and the estimation unit 24 are implemented in the 3D data processing system 20.
  • This division of functions between the 3D data input system 10 and the 3D data processing system 20 is merely an example, and the division of functions is not limited to the example of FIG. 2.
  • FIG. 3 is a diagram illustrating the configuration of the 3D data input system 10.
  • the 3D data input system 10 is a so-called 3D scanner.
  • the 3D data input system 10 includes a sensor group SD, a stage T, a frame F, and an image processing apparatus 100.
  • the sensor group SD includes a plurality of sets of the camera C and the distance sensor D. In the state where the sensor group SD is fixed, each camera C captures only a limited partial area of the surface of the subject S.
  • the distance sensor D detects the distance from the position (an example of the reference position) where the distance sensor D is installed to the surface of the subject S.
  • the distance sensor D has a projection unit that projects an image of a predetermined pattern figure (for example, a grating) at a wavelength other than visible light such as infrared light, and an imaging unit that reads the projected image.
  • The camera C and the distance sensor D forming a set are fixed to a common mount base M, and the optical axes of both are directed toward approximately the same position.
  • the frame F (sensor group SD) rotates relative to the stage T.
  • the stage T may rotate in a state in which the frame F is fixed to the installation surface, or the frame F may rotate around the stage T in a state in which the stage T is fixed to the installation surface.
  • the camera C captures the subject S while the frame F and the stage T are relatively rotated.
  • While the frame F and the stage T are relatively stationary, the sensor group SD covers only a limited partial area of the surface of the subject S. However, by rotating the frame F and the stage T relative to each other through 360° while capturing images continuously, the sensor group SD captures the entire surface of the subject S.
  • FIG. 3 is merely an example of the configuration of the 3D data input system 10, and the 3D data input system 10 is not limited to this configuration.
  • a sufficient number of sensor groups SD capable of spatially covering the entire area of the surface of the subject S may be installed at appropriate positions.
  • In this case, the sensor group SD is fixed with respect to the stage T.
  • Alternatively, the 3D data input system 10 may have a plurality of frames F arranged at predetermined intervals (for example, every 120°) when the stage T is viewed from above. In this case, if the frame F and the stage T are relatively rotated by 120°, the sensor group SD can capture the entire surface of the subject S.
  • FIG. 4 is a diagram illustrating the hardware configuration of the 3D data processing system 20.
  • the 3D data processing system 20 is a computer device having a central processing unit (CPU) 201, a memory 202, a storage 203, and a network IF 204.
  • the CPU 201 is a control device that executes processing according to a program and controls other hardware elements of the 3D data processing system 20.
  • the memory 202 is a main storage device that functions as a work area when the CPU 201 executes a program, and includes, for example, a random access memory (RAM).
  • the storage 203 is a nonvolatile auxiliary storage device that stores various programs and data, and includes, for example, at least one of an HDD (Hard Disk Drive) and an SSD (Solid State Drive).
  • the network IF 204 is an interface for performing communication in accordance with a predetermined communication standard (for example, TCP / IP), and includes, for example, a NIC (Network Interface Card).
  • The storage 203 stores a program for causing a computer device to function as the 3D data processing system 20 (hereinafter referred to as the “3D data processing program”).
  • The CPU 201 executes the 3D data processing program, whereby the functions of FIG. 2 are implemented in the computer device.
  • The CPU 201 executing the 3D data processing program is an example of the addition unit 21, the storage unit 22, the classification unit 23, and the estimation unit 24.
  • the application 30 may be anything as long as it uses 3D modeling data.
  • the application 30 includes at least one of an ID card, a business card, virtual communication, a video game, dressing (fitting), sizing, virtual theater, fitness, medical care, and movie production.
  • FIG. 5 is a diagram illustrating an operation (a 3D data processing method) according to an embodiment of the 3D data system 1.
  • FIG. 5 shows an outline of the operation.
  • the 3D data input system 10 generates 3D modeling data of the subject.
  • the 3D data processing system 20 performs processing to add personality data to the 3D modeling data. Given the individuality data, the movement of the 3D modeling data is individualized.
  • the application 30 uses the 3D modeling data and the personality data to provide the product or service to the user.
  • FIG. 6 is a diagram illustrating details of a process of generating 3D modeling data of an object.
  • FIG. 6 assumes a situation in which the 3D data system 1 has already stored 3D modeling data for each of a plurality of subjects (hereinafter referred to as “existing subjects”) and 3D modeling data is to be generated for a new subject (hereinafter referred to as the “target subject S”). The process of FIG. 6 is started, for example, when the user instructs the 3D data input system 10 to generate 3D modeling data.
  • In step S11, the 3D data input system 10 photographs the target subject S and obtains a captured image.
  • The target subject S is photographed in a predetermined pose.
  • In addition to the captured image, the 3D data input system 10 acquires an image for measuring the distance to the camera C (for example, an image of an infrared pattern figure, hereinafter referred to as a “pattern image”).
  • In step S12, the 3D data input system 10 generates 3D modeling data from the captured image.
  • Using the pattern image, the 3D data input system 10 calculates the distance from the camera C to each point in the pattern image, that is, the three-dimensional shape of the surface of the target subject S.
  • The 3D data input system 10 then pastes the captured image onto the calculated three-dimensional shape.
  • In this way, 3D modeling data of the target subject S is obtained.
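  • As an illustration of this step only, the following is a minimal sketch (not part of the patent) of turning per-pixel distances into surface points; it assumes a pinhole camera with intrinsics fx, fy, cx, cy, whereas the patent does not specify how distance is computed from the structured-light pattern image.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (metres) into 3D points in the
    camera coordinate frame, assuming a pinhole camera with intrinsics
    (fx, fy, cx, cy). Returns an (H*W, 3) array of surface points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Illustrative: a toy 4x4 depth map from one distance sensor D.
points = depth_to_points(np.full((4, 4), 1.5), fx=600.0, fy=600.0, cx=2.0, cy=2.0)
```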
  • In step S13, the 3D data input system 10 acquires feature points from the 3D modeling data.
  • These are feature points of the shape obtained from the 3D modeling data.
  • The feature points of the shape are, for example, the so-called end points of bones in the 3D model.
  • a bone is a virtual structure for giving motion to a 3D model.
  • The 3D data input system 10 has standard models of the bone structure of a 3D model.
  • the standard model has, for example, a bone structure according to the skeletal structure of a real human body.
  • Standard models are classified, for example, into categories such as adult males, adult females, elderly men, elderly women, young boys, and young girls.
  • the 3D data input system 10 selects a standard model according to the attribute of the user from among the plurality of standard models.
  • the 3D data input system 10 automatically determines the user's attributes (for example, age and gender) using, for example, the captured image. Alternatively, the 3D data input system 10 may prompt the user to input his / her attribute, and set the user's attribute in accordance with the user's input.
  • the 3D data input system 10 adjusts the selected standard model to the generated 3D model, and fits the adjusted standard model to the 3D model.
  • FIG. 7 is a diagram illustrating adjustment of a standard model.
  • the left of FIG. 7A shows a 3D model and the right shows a standard model of bone.
  • The end points of the bones are represented by dots in the figure.
  • Each of the plurality of end points is an example of a feature point of the 3D model.
  • The 3D data input system 10 stretches or shrinks the height of the standard model to match the 3D model (FIG. 7(A)). Furthermore, the 3D data input system 10 moves the position of the end point of each bone to the central position in the direction perpendicular to the bone (FIG. 7(B)).
  • FIG. 7C shows a standard model after adjustment.
  • the end points of each bone in this figure correspond to feature points obtained from 3D modeling data.
  • This figure shows a cross section passing through the virtual center of gravity of the 3D modeling data and parallel to the front of the target subject S.
  • Although FIG. 7 shows only a single cross section, the standard model may be adjusted to fit the 3D model in a plurality of cross sections with different orientations.
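  • The height adjustment of FIG. 7(A) can be pictured with the following minimal sketch; the coordinates and heights are illustrative assumptions, and the snapping of bone end points to the centre of the body in FIG. 7(B) is omitted.

```python
import numpy as np

def fit_standard_model(bone_endpoints, standard_height, model_height):
    """Scale the standard bone model's end points so that its height matches
    the generated 3D model (the adjustment of FIG. 7(A)). `bone_endpoints`
    is an (N, 3) array of end-point coordinates of the standard model."""
    scale = model_height / standard_height
    return np.asarray(bone_endpoints, dtype=float) * scale

# Illustrative: a 1.70 m standard model adjusted to a 1.82 m 3D model.
endpoints = np.array([[0.0, 0.00, 0.0],    # ankle
                      [0.0, 0.45, 0.0],    # knee
                      [0.0, 0.90, 0.0]])   # hip
adjusted = fit_standard_model(endpoints, standard_height=1.70, model_height=1.82)
```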
  • the 3D data input system 10 calculates a feature amount indicating the positional relationship of the plurality of feature points.
  • the 3D data input system 10 calculates, for a particular feature point, a feature amount indicating the positional relationship with other feature points.
  • the specific feature point is, for example, a feature point at a position corresponding to a knee joint.
  • For example, the positional relationship between a specific feature point and adjacent feature points is the positional relationship of the knee joint with the ankle and the hip joint, which indicates the degree of so-called O-legs, X-legs, or XO-legs.
  • FIG. 8 is a diagram illustrating the positional relationship of feature points.
  • FIG. 8 (A) shows a normal leg, (B) shows an O leg, and (C) shows an X leg.
  • The feature amount relating to a specific feature point is defined, for example, as the ratio of the distance d1 between the feature point P2 corresponding to the knee and the straight line L connecting the feature point P1 corresponding to the hip joint and the feature point P3 corresponding to the ankle, to the length L1 of the straight line L. The definitions of the specific feature points and of the feature amounts related to them shown here are merely examples and may be set in any way.
  • One 3D model may have a plurality of “specific feature points”. For example, toes, knees, elbows, hands, necks, and head tips may be set as specific feature points.
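  • A minimal sketch of the d1/L1 feature amount described above; the function name and the example coordinates are illustrative assumptions, not from the patent.

```python
import numpy as np

def knee_offset_ratio(p_hip, p_knee, p_ankle):
    """Ratio of the knee's distance from the hip-ankle line (d1) to the
    length of that line (L1), used as one static feature amount."""
    p_hip, p_knee, p_ankle = map(np.asarray, (p_hip, p_knee, p_ankle))
    axis = p_ankle - p_hip                          # straight line L from hip to ankle
    l1 = np.linalg.norm(axis)                       # length L1
    # component of (knee - hip) perpendicular to the line L
    proj = np.dot(p_knee - p_hip, axis) / l1**2 * axis
    d1 = np.linalg.norm((p_knee - p_hip) - proj)
    return d1 / l1

# Illustrative values in metres: a slight outward knee offset (O-leg tendency).
ratio = knee_offset_ratio([0.0, 0.9, 0.0], [0.04, 0.45, 0.0], [0.0, 0.0, 0.0])
```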
  • The 3D data input system 10 outputs the feature amounts related to the feature points and the 3D modeling data to the 3D data processing system 20.
  • Data indicating attributes (ID, age, gender, etc.) of the subject (hereinafter referred to as “attribute data”) is further added to the 3D modeling data.
  • FIG. 9 is a diagram illustrating the details of the process of adding individuality data to 3D modeling data. This process is started, for example, when 3D modeling data is input from the 3D data input system 10.
  • In step S21, the 3D data processing system 20 classifies the 3D modeling data of the target subject S into one of a plurality of groups. Classification is performed, for example, in any of the following ways: (1) classification based on static feature amounts of the subject; (2) classification based on dynamic feature amounts of the subject; (3) classification based on attributes of the subject; or (4) classification by a combination of two or more of the above.
  • Classification based on static feature quantities of a subject refers to classification of 3D modeling data using at least one type of static feature quantity.
  • One type of static feature amount refers to, for example, a feature amount of a positional relationship between a knee joint, an ankle, and a hip joint.
  • Two types of feature amounts refer to, for example, the feature amount of the positional relationship of the knee joint with the ankle and the hip joint, and the feature amount of the positional relationship of the elbow joint with the wrist and the shoulder joint. According to this example, a dynamic feature amount can be estimated from static feature amounts of the subject.
  • Classification based on a dynamic feature of a subject refers to classification of 3D modeling data using at least one type of dynamic feature.
  • In this case, the feature amounts used for classification are of a different type from the feature amounts related to the individuality to be estimated.
  • Classification based on the attribute of a subject refers to classification of 3D modeling data using at least one type of attribute of a subject.
  • the attributes of the subject mean, for example, the age, sex, race, nationality, occupation, or medical history of the subject. According to this example, it is possible to estimate the dynamic feature amount from the attribute of the subject.
  • Classification by a combination refers to classification using a combination of two or more of classification based on static feature amounts, classification based on dynamic feature amounts, and classification based on attributes of the subject. According to this example, a greater variety of estimations can be made.
  • the 3D modeling data is first classified based on the attribute of the subject (first stage classification), and is further classified based on the static feature amount of the subject (second stage classification).
  • Classification based on the attributes of the subject is classification based on the age and gender of the subject.
  • the attributes used in selecting a standard model of bone structure are used as they are for classification of 3D modeling data. That is, in the first stage classification, the classification of 3D modeling data and the standard model of bone structure correspond one to one.
  • FIG. 10 is a diagram illustrating classification of 3D modeling data.
  • FIG. 10 shows the classification of the second stage.
  • the second stage classification is performed using N types of feature quantities.
  • the 3D data processing system 20 plots each 3D modeling data in an N-dimensional space with each feature amount as a coordinate axis, and clusters (groups) this plot group according to a mathematical algorithm.
  • In this example, N = 2.
  • That is, the classification of the second stage is performed using two types of feature amounts, feature amount A and feature amount B.
  • The feature amount A is a feature amount relating to the knee, and the feature amount B is a feature amount relating to the thoracic spine.
  • each 3D modeling data is plotted on two-dimensional coordinates with the feature amount A on the vertical axis and the feature amount B on the horizontal axis.
  • One plot corresponds to one 3D modeling data. For example, if 10,000 3D modeling data and feature quantities are obtained in the past, 10,000 plots are obtained.
  • the 3D data processing system 20 divides or groups these plots into subsets using known clustering techniques such as shortest distance or k-means. In the example of FIG. 10, these plots are classified into five groups G1 to G5.
  • The 3D data processing system 20 identifies the existing plot closest to the plot of the 3D modeling data of the target subject S, and classifies the 3D modeling data of the target subject S into the same group as that existing plot. Alternatively, the 3D data processing system 20 may cluster the 3D modeling data of the target subject S together with the existing 3D modeling data again using the clustering method and determine to which group the 3D modeling data of the target subject S belongs.
  • the classification of the second stage is equivalent to that performed for each standard model.
  • For example, the clustering method is applied to the population of existing adult-male subjects, and the obtained result is used to classify the 3D modeling data of the target subject S.
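  • A minimal sketch of this second-stage classification, assuming scikit-learn is available and that the two feature amounts have already been computed; all values below are illustrative and not from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: existing subjects; columns: feature amounts A and B (illustrative values).
existing_features = np.random.default_rng(0).normal(size=(10_000, 2))

# Second-stage classification: cluster the existing plots into five groups G1..G5.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(existing_features)

# Classify the target subject S by assigning it to the group of the nearest
# existing plot (one of the two options described above).
target = np.array([[0.3, -1.1]])
nearest = np.argmin(np.linalg.norm(existing_features - target, axis=1))
group_of_target = kmeans.labels_[nearest]

# Alternative: simply assign the target to the nearest cluster centre.
group_alt = kmeans.predict(target)[0]
```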
  • Next, the 3D data processing system 20 estimates the personality of the 3D modeling data of the target subject S (that is, the personality of the target subject S) based on the classification.
  • In this example, the personality refers to information defining the (displacement) trajectory of feature points of the 3D model, more specifically the highest positions of the knees and hands when the 3D model is made to walk.
  • The individuality data in this example is data indicating the highest positions of the knee and hand in the walking motion.
  • The personality data may include data indicating positions in intermediate states in addition to the highest positions of the knee and hand.
  • FIG. 11 is a diagram illustrating the movement of 3D modeling data. This example shows the relative positional relationship of bones while walking by 3D modeling data.
  • FIG. 11A shows a basic posture.
  • FIG. 11B shows a posture in which the left foot is lifted.
  • FIG. 11C shows a posture in which the left foot is lowered to the ground.
  • FIG. 11D shows a posture in which the right foot is lifted.
  • FIG. 11E shows a posture in which the right foot is lowered.
  • The 3D model can be made to walk by having the 3D modeling data repeatedly take these postures.
  • FIGS. 11A to 11E correspond to so-called key frames, and the postures between the key frames are calculated by interpolation.
  • The posture of each key frame is defined for each standard model and stored in advance in the 3D data processing system 20.
  • In this example, the following data are defined as the personality:
  • (1) the position of the left knee and the position of the right hand tip in the posture of FIG. 11(B); and (2) the position of the right knee and the position of the left hand tip in the posture of FIG. 11(D).
  • the deviation (difference) from the position of the feature point in the key frame defined for the standard model is defined as the individuality.
  • the deviation vector d (P2) from the position P2s of the left knee in the posture of FIG. 11 (B) of the standard model is an example of the individuality data.
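  • An illustrative sketch of this definition: the coordinate values and the simple linear interpolation are assumptions for illustration only; the patent does not prescribe a particular interpolation method.

```python
import numpy as np

# Standard-model position P2s of the left knee in the key frame of FIG. 11(B),
# and the basic-posture position of the same knee (all coordinates illustrative).
p2_basic    = np.array([0.10, 0.45, 0.00])   # key frame of FIG. 11(A)
p2_standard = np.array([0.10, 0.60, 0.15])   # P2s, key frame of FIG. 11(B)
p2_subject  = np.array([0.12, 0.66, 0.15])   # measured P2 of this subject

# The individuality datum: deviation vector d(P2) from the standard key frame.
d_p2 = p2_subject - p2_standard

def interpolate(key_a, key_b, t):
    """In-between posture for a single feature point at parameter t in [0, 1]."""
    return (1.0 - t) * key_a + t * key_b

# Applying the individuality: offset the standard key frame by d(P2), then
# interpolate between key frames A and B to obtain intermediate postures.
p2_individual_b = p2_standard + d_p2
p2_midway = interpolate(p2_basic, p2_individual_b, 0.5)
```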
  • the movement of the subject is actually photographed at the time of photographing for generating the 3D modeling or at another timing.
  • a subject whose actual movement has been photographed is referred to as a “specific subject”. That is, the individuality data of the specific subject is not estimated but actually measured.
  • For example, the 3D data input system 10 captures images of a specific subject walking, and acquires images (an example of a plurality of captured images each having a different pose) corresponding to the key frames illustrated in FIG. 11.
  • the 3D data processing system 20 determines motion data representative of each group from the motion data thus obtained.
  • In this manner, individuality data representative of the group (representative individuality data) is obtained for each group.
  • The individuality data representing a group is, for example, the individuality data of a subject randomly selected from among the specific subjects included in the group.
  • Alternatively, the individuality data representing the group may be the individuality data of the subject in the group whose plot is closest to the center coordinates of the group in the N-dimensional coordinate space.
  • Alternatively, the individuality data representing the group may be a statistical representative value, such as the average, of the individuality data of the plurality of specific subjects included in the group.
  • The 3D data processing system 20 uses the personality data representing the group to which the 3D modeling data of the target subject S belongs as the personality data of the 3D modeling data of the target subject S. This corresponds to estimating that the personality of the target subject S is the same as the personality representing the group to which the 3D modeling data of the target subject S belongs. For example, when the target 3D modeling data belongs to the group G4, the 3D data processing system 20 determines to use the personality data representing the group G4 as the personality data of the 3D modeling data of the target subject S.
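  • A minimal sketch of computing the representative individuality data described above, by group average and by the subject closest to the group centre; the function and variable names are hypothetical, and it assumes the measured individuality data and feature vectors of the specific subjects are held as NumPy arrays.

```python
import numpy as np

def representative_individuality(features, individuality, labels, group):
    """Representative individuality data for one group, computed two ways:
    (a) the mean individuality of the specific subjects in the group, and
    (b) the individuality of the subject whose feature plot lies closest to
        the group's centre in the N-dimensional feature space."""
    in_group = labels == group
    f, d = features[in_group], individuality[in_group]
    mean_individuality = d.mean(axis=0)                       # variant (a)
    centre = f.mean(axis=0)
    nearest = np.argmin(np.linalg.norm(f - centre, axis=1))
    nearest_individuality = d[nearest]                        # variant (b)
    return mean_individuality, nearest_individuality
```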
  • In step S23, the 3D data processing system 20 adds personality data indicating the estimated personality to the 3D modeling data of the target subject S.
  • Here, “adding” means storing the two in a state in which at least the correspondence between them is known.
  • Thereafter, the application 30 can use the 3D modeling data of the target subject S.
  • the individuality data added to one 3D modeling data is not limited to one type. Multiple types of personality data may be added to one 3D modeling data. For example, personality data on a walking motion, personality data on a jumping motion, personality data on a punching motion, and personality data on a kicking motion may be added to 3D modeling data of a certain object. Alternatively, personality data may be defined that comprehensively defines these operations.
  • Hair hardness is personality data that affects how the hair sways. The harder the hair, the smaller the sway; the softer the hair, the greater the sway. For example, if the position of the tip of a strand of hair is extracted as a feature point, the hardness of the hair can be expressed as the movement of that feature point. A sketch of one possible encoding follows.
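  • One way such a hair-hardness individuality might be parameterised is a damped oscillation whose amplitude shrinks as stiffness grows; every parameter name and value below is an illustrative assumption, not from the patent.

```python
import numpy as np

def hair_tip_sway(stiffness, t, base_amplitude=0.05, frequency=2.0, damping=1.5):
    """Horizontal displacement of a hair-tip feature point over time t (seconds).
    Stiffer hair (larger `stiffness`) sways with a smaller amplitude."""
    amplitude = base_amplitude / stiffness
    return amplitude * np.exp(-damping * t) * np.sin(2 * np.pi * frequency * t)

t = np.linspace(0.0, 2.0, 120)
soft_hair = hair_tip_sway(stiffness=1.0, t=t)   # larger swing
hard_hair = hair_tip_sway(stiffness=4.0, t=t)   # smaller swing
```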
  • FIG. 12 is a diagram illustrating feature points (shown as dots in the figure) extracted from the face. For example, if the corners of the mouth, the corners of the eyes, and the ends of the facial expression muscles are extracted as feature points, a facial expression can be expressed as the movement of those feature points.
  • the personality data may include voice data.
  • Although the audio data does not include information on the trajectory of feature points extracted from the 3D modeling data, it can be said to be a dynamic personality of the subject in that the voice is produced by the vibration of the subject's vocal cords and involves change over time. The voice may be correlated with the physical characteristics and attributes of the subject (for example, a boy's voice is high-pitched), and so it is also amenable to estimation as personality data.
  • Skin condition: a condition such as skin tension or slackness can be said to be a dynamic personality that changes on a relatively long time scale (tens of years). Changes in the condition of the skin can be said to indicate growth and aging. For example, if feature points are extracted from the unevenness of the face, growth and aging can be expressed as the change of those feature points over time.
  • Movement habit: movements other than the individuality in standard movements such as walking or running, for example brushing up one's hair, covering one's head, touching one's nose, or jiggling one's leg, sometimes represent the individuality of the subject. These movements can also be converted into data.
  • the 3D model is arranged in a virtual space.
  • a virtual camera is installed in the virtual space.
  • the application 30 displays a two-dimensional image acquired by the virtual camera on a display device. Since the virtual camera is only virtual, in theory, the application 30 can arbitrarily set the position and the angle of view of the viewpoint.
  • In view of this, data for limiting the position of the viewpoint of the virtual camera (relative to the 3D model) can be used as individuality data. With this information, an area that is not shown to the user can be defined for each 3D model.
  • The movable range of a joint also represents the individuality of the subject. For example, a gymnast can open both legs (at the hip joint) 180°, but an ordinary person can open them only about 90°. If the movable range of each joint is converted into data, it can be used as individuality data. The movable range of a joint may also be limited for reasons other than the actual movable range; for example, the range of motion of the hip joint of a certain 3D model may be limited to about 60°. Thus, data for limiting the movable range of a joint can be used as individuality data, as in the sketch below.
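  • A minimal sketch of storing and enforcing such a joint-range limit; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class JointLimit:
    """Movable range of one joint, stored as individuality data (degrees)."""
    name: str
    min_angle: float
    max_angle: float

    def clamp(self, requested_angle: float) -> float:
        """Clamp an animation's requested joint angle to this model's range."""
        return max(self.min_angle, min(self.max_angle, requested_angle))

# A gymnast's hip joint versus a model restricted to about 60 degrees.
gymnast_hip = JointLimit("hip_abduction", 0.0, 180.0)
restricted_hip = JointLimit("hip_abduction", 0.0, 60.0)

print(gymnast_hip.clamp(150.0))     # 150.0 -- allowed
print(restricted_hip.clamp(150.0))  # 60.0  -- limited by the individuality data
```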
  • FIG. 13 is a diagram illustrating the details of processing using 3D modeling data. The process of FIG. 13 is started, for example, when the user of the application 30 instructs to acquire 3D modeling data.
  • In step S31, the application 30 requests 3D modeling data from the 3D data processing system 20.
  • This request includes information specifying the 3D modeling data, for example, the ID of the subject.
  • the request may include a search key for searching for desired data from 3D modeling data stored in the 3D data system 1.
  • the 3D data processing system 20 sends the application 30 a list of 3D modeling data that matches the search key.
  • the application 30 selects one 3D modeling data from this list. Information identifying the selected 3D modeling data is output to the 3D data processing system 20.
  • This request may include, in addition to the information identifying the 3D modeling data, information identifying the personality data.
  • A specific application 30 does not necessarily require all of the multiple types of individuality data. Therefore, the application 30 may request from the 3D data processing system 20 only the personality data it requires.
  • In step S32, the 3D data processing system 20 outputs the requested 3D modeling data and personality data to the requesting application 30. If the application 30 requires only specific personality data, the 3D data processing system 20 outputs, in addition to the 3D modeling data, only the requested personality data.
  • the application 30 provides a product or service using the 3D modeling data and the personality data acquired from the 3D data processing system 20.
  • the 3D data system 1 different types of applications can use common 3D modeling data and personality data.
  • the application 30 includes, for example, at least one of an ID card, a business card, virtual communication, a video game, changing clothes (fitting on), measurement, virtual theater, fitness, medical care, and movie production.
  • the application 30 may operate on a portable terminal such as a smartphone or may operate on a stationary personal computer.
  • An ID card is an application used for identification of a user.
  • a 3D model is displayed instead of the user's photo.
  • a business card is an application for transmitting user's personal information to other users.
  • Data of the business card includes 3D modeling data of the user.
  • the business card data of the user UA is output to another user UB.
  • the user UB can browse business card data including the 3D model of the user UA on his / her computer device.
  • Virtual communication is an application for communicating with other users in a virtual space.
  • each user is displayed using a so-called avatar.
  • the 3D model according to the present embodiment is used as this avatar.
  • In virtual communication, for example, a plurality of users at remote locations can hold a meeting. At the meeting, each user's avatar has individuality, which increases the sense of reality.
  • a 3D model according to the present embodiment is used as a character appearing in the game.
  • a player can use his 3D model as a player character.
  • This 3D model has movement features (eg, how to run, how to jump, etc.) corresponding to that player.
  • Dressing up is an application for dressing a human body model in a virtual space.
  • a 3D model according to the present embodiment is used.
  • the human body model moves in the virtual space with clothes worn (so-called runway walk).
  • this human body model has a movement feature (for example, how to walk) corresponding to the user.
  • Measurement is an application for measuring the size of the subject S's body (height, chest circumference, waist circumference, etc.).
  • The virtual theater is an application in which a virtual character (avatar) gives a performance (song, theater, dance, etc.) in a virtual space and the user views the performance.
  • The performance takes place, for example, on a stage in the virtual space.
  • The user can watch this performance as captured by the virtual camera.
  • The position of the virtual camera (viewpoint) is controlled, for example, in accordance with the user's instructions.
  • The user can move the virtual camera close to a specific performer, look over the whole stage, and otherwise freely control the position of the virtual camera.
  • Medical treatment is an application that 3D models and records the subject's body before and after treatment. By comparing the stored 3D models, the effects of treatment, dosing, and rehabilitation can be visually confirmed.
  • Movie production is an application that makes 3D models appear in movies. For example, by acquiring 3D modeling data and personality data of an actor, it is possible to make the actor appear as a character in a movie without the actor actually performing.
  • Operation example 2: in operation example 1, an example in which the individuality data of the target subject S is estimated was described. In this example, no estimation of individuality data is performed. Individuality data is prepared in advance in the 3D data system 1, and the user (for example, the subject S himself or herself) selects desired data from among the plurality of prepared individuality data items.
  • Each individuality data item has a tag attached for searching.
  • This tag includes, for example, age, gender, race, nationality, occupation, or medical history.
  • the user can search individuality data using these attributes as search keys.
  • the 3D data system 1 may apply the designated individuality data to the 3D modeling data of the target subject S and display a preview.
  • the individuality data prepared in advance in the 3D data system 1 may include, for example, individuality data of famous athletes and actors.
  • Such personality data is assigned an identifier (name) identifying the sports player or actor. For example, in a soccer game, by using the personality data of a professional soccer player's kick, the user can have a 3D model obtained from his or her own captured image appear as the player character and have that character kick in the style of the famous soccer player.
  • personality data can be added to 3D modeling data without estimating personality.
  • Operation example 2 may be used in combination with operation example 1.
  • the 3D data system 1 may allow the user to select either addition of individuality data by estimation or addition of individuality data prepared in advance, and may add individuality data according to a method selected by the user.
  • FIG. 14 is a view showing an outline of a 3D data system 1 according to the first modification.
  • the 3D data processing system 20 stores multiple sets of 3D modeling data, bone data, and metadata.
  • Bone data is data indicating a bone structure of a 3D model.
  • the bone data may be measured, for example, by photographing the subject in a plurality of poses, or may be obtained by adjusting a standard model of bone structure prepared in advance to a 3D model.
  • Metadata refers to data attached to a 3D model, and is data indicating, for example, an attribute of a subject, an attribute of a photographing apparatus, voice, copy restriction, and the like.
  • the personality data and the attribute data illustrated in the embodiment can also be said to be an example of metadata.
  • the metadata may not include information indicating the 3D model dynamic personality.
  • the 3D data system 1 has the effect of being able to easily provide the application 30 with a set of 3D modeling data, bone data, and metadata.
  • Sets of 3D modeling data, bone data, and metadata may be predetermined in the 3D data processing system 20.
  • sets of 3D modeling data, bone data, and metadata may be rearranged according to the requirements of the application 30 or the user.
  • For example, the 3D data processing system 20 stores a plurality of templates of restriction data and audio data.
  • the 3D data processing system 20 adds the restriction data and audio data selected by the user to the designated 3D modeling data.
  • a user can select desired metadata and add it to 3D modeling data.
  • Furthermore, the data set including the 3D modeling data may not include bone data.
  • For example, the 3D data processing system 20 may store only 3D modeling data and metadata (e.g., restriction data).
  • the method by which the estimation unit 24 estimates the personality is not limited to the one exemplified in the embodiment.
  • Each group after classification may have a feature amount (representative feature amount) representing the group in addition to the individuality data (representative individuality data) representing the group.
  • the 3D data processing system 20 has a correction unit (not shown).
  • the correction unit corrects the representative individuality data in accordance with the difference between the representative feature amount and the feature amount obtained from the 3D modeling data of the target subject S.
  • For example, the correction unit corrects the representative individuality data by adding to it the vector obtained by multiplying the difference between the representative feature amount and the feature amount obtained from the 3D modeling data of the target subject S by a coefficient.
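  • A minimal sketch of this correction; the patent only mentions "a coefficient", so representing it here as a matrix that maps feature-space differences into the individuality space is an assumption made so that the dimensions work out, and all numeric values are illustrative.

```python
import numpy as np

def correct_representative_individuality(rep_individuality, rep_feature,
                                          subject_feature, coefficient):
    """Correct the group's representative individuality data by adding the
    vector obtained from the difference between the subject's feature amounts
    and the group's representative feature amounts, scaled by a coefficient
    (here a matrix mapping feature space to individuality space)."""
    diff = np.asarray(subject_feature, float) - np.asarray(rep_feature, float)
    return np.asarray(rep_individuality, float) + np.asarray(coefficient, float) @ diff

# Illustrative values: 2 feature amounts (A, B); individuality = 3D knee deviation d(P2).
rep_d_p2  = np.array([0.02, 0.06, 0.00])          # representative deviation vector
rep_feat  = np.array([0.05, 0.12])                 # representative feature amounts
subj_feat = np.array([0.08, 0.10])                 # target subject's feature amounts
coeff = np.array([[0.5, 0.0],                      # assumed coefficient matrix
                  [0.0, 0.3],
                  [0.0, 0.0]])
corrected = correct_representative_individuality(rep_d_p2, rep_feat, subj_feat, coeff)
```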
  • The classification by the classification unit 23 may be divided into a plurality of levels, each having a different number of groups after classification. For example, classification based on the same type of feature amount may be divided into a low level (small number of groups), a middle level (medium number of groups), and a high level (large number of groups). Which of the plurality of levels of classification to use is determined, for example, by the user's instruction or by a request from the application 30. When, for example, the low-level classification (an example of the first level) is used, the estimation unit 24 estimates (corrects) the individuality data based on the difference between the representative feature amount of the group and the feature amount of the target subject S, as in the second modification. When, for example, the high-level classification (an example of the second level) is used, the estimation unit 24 uses the representative individuality data of the group as it is as the individuality data of the 3D model of the target subject S.
  • the individuality data is defined as a deviation vector of dynamic feature quantities from standard data.
  • the individuality data is not limited to the displacement vector, and may be data representing a dynamic feature amount itself (absolute value). In this case, there is no need to define standard data.
  • When a deviation vector is used and the standard data is held on the application 30 side, only the deviation vector needs to be transmitted, so the amount of data can be reduced compared with the case of transmitting both the standard data and the deviation vector.
  • representative personality data of the group is not limited to the measured data.
  • the personality data obtained by estimation may be used as representative personality data of the group.
  • the sharing of functions in the 3D data input system 10 and the 3D data processing system 20 is not limited to that illustrated in FIG. 2.
  • a function corresponding to the generation unit 12 may be implemented in the 3D data processing system 20, that is, a cloud.
  • a function corresponding to the addition unit 21 or the classification unit 23 may be implemented in the 3D data input system 10.
  • the 3D data input system 10, the 3D data processing system 20, and the application 30 may each be implemented on one or more different computing devices, or at least some of the functions may be implemented on a common computing device. Also, a plurality of computer devices may physically function as the 3D data input system 10, the 3D data processing system 20, or the application 30.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
PCT/JP2017/032396 2017-09-08 2017-09-08 3dデータシステム及び3dデータ処理方法 WO2019049298A1 (ja)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780090568.9A CN110637324B (zh) 2017-09-08 2017-09-08 三维数据系统以及三维数据处理方法
PCT/JP2017/032396 WO2019049298A1 (ja) 2017-09-08 2017-09-08 3dデータシステム及び3dデータ処理方法
JP2018532802A JP6489726B1 (ja) 2017-09-08 2017-09-08 3dデータシステム及び3dデータ処理方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/032396 WO2019049298A1 (ja) 2017-09-08 2017-09-08 3dデータシステム及び3dデータ処理方法

Publications (1)

Publication Number Publication Date
WO2019049298A1 true WO2019049298A1 (ja) 2019-03-14

Family

ID=65633719

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/032396 WO2019049298A1 (ja) 2017-09-08 2017-09-08 3dデータシステム及び3dデータ処理方法

Country Status (3)

Country Link
JP (1) JP6489726B1 (zh)
CN (1) CN110637324B (zh)
WO (1) WO2019049298A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6829922B1 (ja) * 2020-07-27 2021-02-17 株式会社Vrc 情報処理装置、3dモデル生成方法、及びプログラム
JP7484293B2 (ja) 2020-03-25 2024-05-16 カシオ計算機株式会社 アニメーション生成装置、アニメーション生成方法及びプログラム

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020261403A1 (ja) * 2019-06-26 2020-12-30 日本電気株式会社 身長推定装置、身長推定方法及びプログラムが格納された非一時的なコンピュータ可読媒体
JP6791530B1 (ja) 2019-09-05 2020-11-25 株式会社Vrc 3dデータシステム、サーバ及び3dデータ処理方法
JP6799883B1 (ja) 2020-07-27 2020-12-16 株式会社Vrc サーバ及び情報処理方法
JP6826747B1 (ja) * 2020-07-27 2021-02-10 株式会社Vrc 情報処理装置及び情報処理方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002058045A (ja) * 2000-08-08 2002-02-22 Komatsu Ltd 現実の物体をバーチャル3次元空間に登場させるためのシステム及び方法
JP2005523488A (ja) * 2001-08-14 2005-08-04 パルス エンターテインメント インコーポレイテッド 自動3dモデリングシステム及び方法
JP2012528390A (ja) * 2009-05-29 2012-11-12 マイクロソフト コーポレーション キャラクターにアニメーションまたはモーションを加えるシステムおよび方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3472238B2 (ja) * 2000-05-19 2003-12-02 由和 伊丹 効果的身体運動の提供方法
US20020024517A1 (en) * 2000-07-14 2002-02-28 Komatsu Ltd. Apparatus and method for three-dimensional image production and presenting real objects in virtual three-dimensional space
CN101515374B (zh) * 2008-02-20 2010-12-01 中国科学院自动化研究所 基于图像的个性化真实感虚拟人物造型方法
EP2385483B1 (en) * 2010-05-07 2012-11-21 MVTec Software GmbH Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform
JP2013200867A (ja) * 2012-02-23 2013-10-03 Tokyo Kogei Univ アニメーション作成装置、カメラ
JP5837860B2 (ja) * 2012-06-11 2015-12-24 Kddi株式会社 動き類似度算出装置、動き類似度算出方法およびコンピュータプログラム

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002058045A (ja) * 2000-08-08 2002-02-22 Komatsu Ltd 現実の物体をバーチャル3次元空間に登場させるためのシステム及び方法
JP2005523488A (ja) * 2001-08-14 2005-08-04 パルス エンターテインメント インコーポレイテッド 自動3dモデリングシステム及び方法
JP2012528390A (ja) * 2009-05-29 2012-11-12 マイクロソフト コーポレーション キャラクターにアニメーションまたはモーションを加えるシステムおよび方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SAYAKA IMAI ET AL.: "Human Body/Motion Modeling and Database Design based on the Mediator Concept", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 41, 15 February 2000 (2000-02-15), pages 100 - 108, ISSN: 0387-5806 *
TAKAKO SATO ET AL.: "Design and Construction of Human-body Motion Database Using Human Skeleton CG Model", IEICE TECHNICAL REPORT, vol. 100, no. 31, 2 May 2000 (2000-05-02), pages 73 - 80, ISSN: 0913-5685 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7484293B2 (ja) 2020-03-25 2024-05-16 カシオ計算機株式会社 アニメーション生成装置、アニメーション生成方法及びプログラム
JP6829922B1 (ja) * 2020-07-27 2021-02-17 株式会社Vrc 情報処理装置、3dモデル生成方法、及びプログラム
WO2022024199A1 (ja) * 2020-07-27 2022-02-03 株式会社Vrc 情報処理装置、3dモデル生成方法、及びプログラム
CN114503161A (zh) * 2020-07-27 2022-05-13 株式会社威亚视 资讯处理装置、3d模型生成方法、及程序

Also Published As

Publication number Publication date
CN110637324B (zh) 2021-04-16
CN110637324A (zh) 2019-12-31
JP6489726B1 (ja) 2019-03-27
JPWO2019049298A1 (ja) 2019-11-07

Similar Documents

Publication Publication Date Title
JP6489726B1 (ja) 3dデータシステム及び3dデータ処理方法
US20220366627A1 (en) Animating virtual avatar facial movements
US11682155B2 (en) Skeletal systems for animating virtual avatars
US11763510B2 (en) Avatar animation using markov decision process policies
US11308673B2 (en) Using three-dimensional scans of a physical subject to determine positions and/or orientations of skeletal joints in the rigging for a virtual character
US20210358214A1 (en) Matching meshes for virtual avatars
Wei et al. Videomocap: Modeling physically realistic human motion from monocular video sequences
US11670032B2 (en) Pose space dimensionality reduction for pose space deformation of a virtual character
US20200005138A1 (en) Methods and systems for interpolation of disparate inputs
JP2022503776A (ja) 視覚ディスプレイの補完的なデータを生成するためのシステム及び方法
US11275433B2 (en) Head scan alignment using ocular registration
TW202209264A (zh) 伺服器及資訊處理方法
US20230260156A1 (en) Methods and systems for interpolation of disparate inputs
JP6585866B1 (ja) 位置データ処理装置およびプログラム
TWI821710B (zh) 資訊處理裝置及資訊處理方法
WO2024069944A1 (ja) 情報処理装置、情報処理方法、及びプログラム
JP2024052519A (ja) 情報処理装置、情報処理方法、及びプログラム
Biswas A human motion database: The cognitive and parametric sampling of human motion

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018532802

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17924023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17924023

Country of ref document: EP

Kind code of ref document: A1