WO2022211766A1 - A method used in 3 dimensional (3d) modelling programs - Google Patents

A method used in 3 dimensional (3d) modelling programs

Info

Publication number
WO2022211766A1
WO2022211766A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
designer
error
data
screen
Prior art date
Application number
PCT/TR2022/050284
Other languages
French (fr)
Inventor
Engin KAPKIN
Original Assignee
Eski̇şehi̇r Tekni̇k Üni̇versi̇tesi̇
Priority date
Filing date
Publication date
Application filed by Eski̇şehi̇r Tekni̇k Üni̇versi̇tesi̇ filed Critical Eski̇şehi̇r Tekni̇k Üni̇versi̇tesi̇
Publication of WO2022211766A1 publication Critical patent/WO2022211766A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004 Annotating, labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Definitions

  • Table-1 shows the identification accuracy and standard deviation values of the articles and objects drawn with the one-point perspective method.
  • Table-2 shows the identification accuracy and standard deviation values of the articles and objects drawn with the two-point perspective method.
  • Viewers detect deformations at a moderate but acceptable level for all camera lens values selected in the range of 0° to 10°. If the lens value of the camera is selected between 10° and 20°, a perceptually safer area is entered. Images formed with selections between 20° and 50° are perceived most accurately by viewers; in this range, the deformations are not observable. With lens selections between 50° and 60°, high validation rates suddenly drop to low levels, and at values close to 60° viewers seriously misjudged the deformed model.
  • The method (100) of the invention is industrially applicable in all sectors which perform computer-aided 3D modeling, animation and production processes.

Abstract

The invention relates to a method (100) which is run on hardware and which provides visual communication (or perception) between an object or article (or a place) and a user in the most reliable (or safest and/or most correct) way through render images created from a three-dimensional (3D) model, wherein the method comprises the following steps: - Inputting (101) the real dimension and/or size and/or data, such as ratio and proportion, of the articles and/or objects by a designer on a screen, - Creating (102) a perspective view of the article and/or object based on the input data, - Identifying (103) a virtual 3D camera and determining the position, lens type and property selection of the camera according to the article or object, - Presenting (104) the data obtained from the camera for said article or object to a user (a designer) on a screen via a control unit, - Comparing (105) the values from the camera and article and/or object with the data in a database developed for the method of the invention and identifying the results, - If an error is detected, presenting (106) the relevant error (or deformation) by a warning image to the designer on the screen, - If no error is detected, providing (107) positive feedback by the warning image, - Presenting (108) the intermediate values ranging from error to accuracy by the warning image, - Upon presentation of the error on the screen, setting (109) the angle and/or position and/or lens type and properties of the virtual 3D camera automatically by the control unit, or manually by the user.

Description

A METHOD USED IN 3 DIMENSIONAL (3D) MODELLING PROGRAMS
Technical Field
The invention relates to a method which automatically detects the perceived ratio-proportion differences and dimensional deformations that occur virtually on a modeled object visualized according to the camera lens properties and camera position parameters selected in three-dimensional (3D) modeling and animation programs, presents those deformations to a user (or a designer), and runs on current 3D modeling programs.
Prior Art
Various products, goods, cartoon films, visual presentations, environment animations and object visualizations are prepared in a computer environment using three-dimensional modeling and animation programs and CAD-CAM programs. The users of those programs are designers. Designers use these programs to visualize articles, objects and places which are not yet produced, or which do not exist, in realistic or illustrative ways. While modeling in these programs, data such as the actual dimension, size and ratio-proportion of the objects must be input into the program. The program forms a three-dimensional perspective view of the object based on these data. Then an appropriate material is assigned to these models and a rendering process (virtual image production by a computer) is performed. At the end of this process, the objects modeled in three dimensions in the virtual environment are reduced to a two-dimensional plane and converted into moving images or pictures of high photographic quality. A virtual camera is defined in such programs before the rendering process. The position of this camera needs to be determined relative to the object, and the lens type and its properties must be selected. During the rendering process, the program applies the basic perspective rules according to the selected position, angle, lens type and properties of the camera, using the physical dimensions of the object (width, length, height, etc.), and visualizes the product in the scene. The position of the objects modeled in the 3D virtual universe relative to the camera, and the selection of the camera lens, affect the ability of viewers to correctly or incorrectly identify and perceive these objects in terms of size and ratio-proportion. The main factor which causes a difference in perception is the perspective drawing technique.
This technique includes rules based on projection methods to represent a three-dimensional object on a two-dimensional plane. These rules are applied algorithmically by various modeling and animation programs. As in almost every projection method, ratio-proportion differences and dimensional deformations occur virtually on the model due to the selected position, lens type and properties of the camera. In particular, depending on the selected lens type and the distance of the framed object from the camera, the dimensions and ratios of the object or article (or place) in the created image differ from its real dimensions. This is the same phenomenon by which, in daily life, the train tracks close to the viewer are perceived as long, while the ones far from the viewer, near the vanishing point, are perceived as very short or even as a point. The proportional and dimensional variability which occurs virtually on the object is defined as deformation. Although the physical dimensional data of an object are constant, deformations cause the object to be perceived as larger or smaller than it should be; in other words, the ratio-proportion of the object is perceived very differently and wrongly. In traditional hand-drawn perspective techniques, various suggestions are known for controlling and minimizing the deformation in drawings. However, these suggestions were not tested on viewers' perception, and they were not applied in modeling programs in a computer environment.
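The foreshortening described above follows from the pinhole projection relation, which can be sketched as follows (an illustrative example, not part of the claimed method; the function name and units are assumptions): an object of height H at distance Z from a camera with focal length f projects to an image height of f·H/Z, so identical objects at different distances render at different sizes.

```python
def projected_height(focal_length_mm: float, object_height_m: float,
                     distance_m: float) -> float:
    """Pinhole model: height on the image plane (mm) of an object of
    height object_height_m seen from distance_m, i.e. h' = f * H / Z."""
    return focal_length_mm * (object_height_m * 1000.0) / (distance_m * 1000.0)

# Two identical 1 m railway sleepers rendered with a 50 mm lens:
print(projected_height(50.0, 1.0, 5.0))   # near sleeper, 5 m away  -> 10.0 mm
print(projected_height(50.0, 1.0, 50.0))  # far sleeper, 50 m away  -> 1.0 mm
```

The tenfold difference in projected size, despite identical physical dimensions, is exactly the virtual deformation the text defines.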
When such deformations occur, the designer makes decisions based on his/her own experience and eye, and minimizes the deformations by modifying the angle, position, lens type and properties of the camera. Experienced designers are able to see these deformations and intervene manually. However, entry-level and intermediate users either cannot see these deformations or are not aware of them, and even when they do see them, they do not know how to intervene. Consequently, images which are proportionally and dimensionally deformed may be perceived as wrong or incomplete by the viewers with whom the user (or designer) communicates. At this point, the image, which does not actually exist on the object but is created as a result of the rendering process, may mislead the viewer. Thus, a method should be developed which may automatically detect the aforementioned deformations before the rendering process, warn the designer, and be integrated into various animation, visualization and CAD programs.
In the state of the art, a master's thesis entitled "Using of Three Dimensional Modeling Technique on Visual Assessment in Landscape Architecture" (Melek Ozgelik, 2010, Istanbul University, Institute of Science, Landscape Architecture Program) discloses the assessment of the perceptual effect of render images created from a three-dimensional model on a place and its viewers. In said document, a method is disclosed which presents data to the viewer according to the perspective and position of the camera (Figure 3.21) and to the lens parameter variables of the camera (section 3.2.4 of the thesis). In the thesis, a place was chosen as the subject. Images of this place were created in the computer environment according to various selected camera positions and lens types, and how these images were perceived by the viewers (pleasant, legible, exciting, original, complex and calming; section 3.4 of the thesis) was measured. The perceptual qualities in the thesis are emotional in character. The study of this patent application and said thesis used similar independent variables (the position, perspective, lens type and properties of the camera). However, said thesis addressed a place and the emotional perception of that place, while the study of the patent application concerns determining the extent to which the dimensional deformations occurring virtually during the modeling of objects in the computer environment are perceived and noticed by viewers. Thus, the two studies differ significantly in the type of images created and the subject selected. Said thesis collected data in order to determine how images of a place, visualized from various angles and with different lens properties, affect the viewers emotionally (pleasant, legible, exciting, original, complex and calming; section 3.4 of the thesis). Thus, the inferences to be drawn from the thesis are limited to the selected place.
This limitation is specified in the thesis (section 5, page 140 of the thesis). The selected data analysis methods also differed, as the data collection method of the thesis is different from that of the study of the patent. The study of the patent application used, as a data collection tool, a cube or rectangular prism, also known as the perspective box, which is used in all three-dimensional modeling programs and encloses the outermost dimensions of the modeled object. The perspective box, produced in the shape of a cube, is visualized according to different perspectives and lens types. The perspective box, the real dimensions of which are those of a cube, is perceived as a cube at certain perspectives and lens types, while at certain other perspectives and lens angles it is deformed and perceived as a rectangular prism. In the study of the patent application, data were collected on how this virtual dimensional deformation is perceived by the viewers, at which perspective, at which position, and at which lens angles.
Brief Description of the Invention
The object of the invention is to provide a plug-in method which automatically detects the dimensional deformations and virtual ratio-proportion differences caused by the selected camera position (above eye level, at eye level, below eye level) and camera lens angle (0°, 10°, 20°, 30°, 40°, 50°, 60°, 70°) during the visualization of objects drawn in three-dimensional (3D) modeling programs, warns the designer when deformations begin, offers suggestions to reduce the deformations, and works in various existing 3D modeling and animation programs. Thus, the method shows the designer, via an interface, how accurately and sensitively viewers will perceive the dimensions and sizes of the 3D-modeled object under the selected camera parameters.
Thus, the method of the invention is able to automatically detect deformations before the rendering process, warn the designer, and be integrated into various animation, visualization and CAD programs, that is, into programs performing 3D modeling. Said method may be adapted to the software development kit (SDK) of these programs.
The invention allows designers who use such programs at entry and intermediate levels to see, via a warning mechanism, the dimension and ratio-proportion deformations caused by the perspective drawing, and develops a proposal to minimize such deformations. Thus, the method of the invention may automatically detect the deformations mentioned in the state of the art and warn the designer. The designer thereby receives feedback, before the rendering process, on how sensitively and accurately viewers will perceive the object or article (or place) in terms of ratio-proportion and dimension.
Description of Figures
Figure 1 is a flow chart of the method according to the invention.
Description of the References in the Figures
In order to provide a better understanding of the invention, the numerals in the figures are provided below:
100. Method
Detailed Description of the Invention
The invention relates to a method (100), run on hardware, which displays to a user (or a designer) performing modeling information on how sensitively and accurately an object drawn in a computer environment will be perceived by the viewer in terms of ratio-proportion and dimension, depending on the position and/or angle and/or lens type and properties of the virtual camera selected by the designer of the object (or article) modeled in three-dimensional (3D) programs. The method (100) informs the user of deformations on said object (or article), if any, and suggests a solution. Said hardware may be any electronic device on which CAD-CAM applications run and the method may be executed, such as a computer, a tablet or a phone.
The hardware on which the method (100) of the invention is executed includes: at least one control unit, which displays an error on a screen in case of a deformation during the virtual visualization of the object in a three-dimensional scene according to the virtual camera created in at least one 3D modeling or animation application and said virtual camera's data (the position and/or angle and/or selected lens type and properties of the camera), and which allows the input of the method (100) parameters; at least one interface comprising a warning image, so that the user (or designer) may see the created image or a possible error (or deformation) in the image; and at least one data storage unit (or database) where the data are stored and retrieved. Said warning image is an interface element such as a colored button or bar, or a color code, but is not limited thereto in practice. The user creates a virtual 3D camera in the modeling or animation application; in many applications, this camera is already present in the scene as soon as the user opens a new file. After the object is modeled, the method (100) of the invention compares the camera parameters in the program, the distance of the modeled object from the camera, and the numerical data about the dimensions of the object with the data in the database of the method (100). Upon that comparison, the method (100) forms the warning image indicating how accurately and sensitively viewers may perceive the image to be created.
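The compare-and-warn cycle described above can be sketched as follows. `PERCEPTION_DB`, the accuracy values, and the 0.70 threshold are all illustrative assumptions for the sketch, not data or interfaces from the patent.

```python
# Illustrative compare-and-warn step: the database maps a camera set-up
# to a measured identification accuracy between 0 and 1.
PERCEPTION_DB = {
    # (camera position, lens angle in degrees) -> identification accuracy
    ("eye_level", 30): 0.95,
    ("eye_level", 60): 0.55,
    ("above_eye_level", 30): 0.90,
}

def warning_color(position: str, lens_angle_deg: int,
                  threshold: float = 0.70) -> str:
    """Return 'green' when viewers are expected to perceive the model
    accurately, 'red' otherwise; unknown set-ups default to 'red'."""
    accuracy = PERCEPTION_DB.get((position, lens_angle_deg), 0.0)
    return "green" if accuracy >= threshold else "red"

print(warning_color("eye_level", 30))  # green
print(warning_color("eye_level", 60))  # red
```

A real plug-in would read the camera parameters from the host application's SDK rather than take them as function arguments.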
After the warning image is displayed on the screen, parameters such as the position and/or lens type and properties of the virtual 3D camera may be adjusted via the control unit, or manually by the user. Furthermore, upon a warning, the designer may optionally adjust the selected virtual 3D camera parameters to the safe position where the deformation is minimal, or adjust the lens type and properties, by triggering the algorithm linked to a button on the control unit.
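One way such an automatic adjustment could work is to clamp the selected lens angle into the 20°-50° band reported elsewhere in this document as perceptually safest. This is a hypothetical sketch, not the patented algorithm:

```python
def snap_to_safe_lens_angle(lens_angle_deg: float,
                            safe_min: float = 20.0,
                            safe_max: float = 50.0) -> float:
    """Clamp a camera lens angle into the 20-50 degree band reported
    as perceptually safest (band limits are taken from the description)."""
    return max(safe_min, min(lens_angle_deg, safe_max))

print(snap_to_safe_lens_angle(60.0))  # 50.0
print(snap_to_safe_lens_angle(35.0))  # 35.0 (already safe, unchanged)
```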
Said database consists of perceptual data collected from viewers as part of an academic study with an experimental approach. The database includes numerical data on how accurately and sensitively a model, drawn and displayed according to the position of the virtual 3D camera (above eye level, at eye level, below eye level), the lens angle of the camera (0°, 10°, 20°, 30°, 40°, 50°, 60°, 70°) and the perspective type (one- and two-point perspective), is perceived by users (or designers) and viewers in terms of ratio-proportion and dimensional deformation. The data in the database are created beforehand and embedded in the method. However, the database is able to update itself as users provide input to the control unit. Data which are not directly included in the database will be calculated using statistical estimation methods based on the data in the database. Furthermore, the database will be presented in a more precise, detailed and comprehensive way in new versions of the method. In case of an image deformation or an error, the control unit and warning image contained in the hardware on which said method is executed warn the users via the screen in the modeling program. The warning image changes in real time when the user provides input to the control unit or changes the position of the object in the scene. Thus, if there is a perceptual error in the image to be formed at the end of the rendering process, the error will be detected before the rendering process, and actions can be taken to correct it.
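The "statistical estimation" for values not directly in the database could be as simple as linear interpolation between the tabulated lens angles. A sketch under that assumption follows; the accuracy values below are illustrative placeholders, not the patent's measured data.

```python
# Identification accuracy tabulated only at the measured lens angles
# (accuracy values here are illustrative, not the patent's data).
TABULATED = [(0, 0.70), (10, 0.75), (20, 0.90), (30, 0.95),
             (40, 0.95), (50, 0.90), (60, 0.55), (70, 0.50)]

def estimated_accuracy(angle: float) -> float:
    """Estimate accuracy for an untabulated lens angle by linear
    interpolation between the two nearest tabulated angles."""
    if angle <= TABULATED[0][0]:
        return TABULATED[0][1]
    if angle >= TABULATED[-1][0]:
        return TABULATED[-1][1]
    for (a0, v0), (a1, v1) in zip(TABULATED, TABULATED[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)
            return v0 + t * (v1 - v0)

print(round(estimated_accuracy(25), 3))  # 0.925, halfway between 0.90 and 0.95
```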
In an exemplary embodiment of the invention, the user (designer) inputs width and height values in the 3D modeling program and draws a box. While creating an image of this box (the rendering process), the user creates a virtual camera and selects a 50 mm lens (approximately 27°). He/she positions the camera so as to see the object from above. The method (100) of the invention receives the position information and lens parameters of the camera in the scene and compares them with the data in the database developed for the method (100). According to the data in the database, it indicates whether (or how much) deformation is present in the relevant image by a simple warning indicator on the on-screen interface, for example a bar. If the bar on the screen is red, the user may determine that a deformation is present; if it is green, the user may see that no deformation has occurred. Said application is an example, and the invention is not limited to said parameters.
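The 50 mm ≈ 27° equivalence in the example corresponds to the vertical field of view of a camera with a full-frame (24 mm high) sensor; the sensor size is an assumption here, since the patent does not state one. The conversion can be sketched as:

```python
import math

def vertical_fov_deg(focal_length_mm: float,
                     sensor_height_mm: float = 24.0) -> float:
    """Vertical field of view of a pinhole camera:
    FOV = 2 * atan(sensor_height / (2 * focal_length))."""
    return math.degrees(2.0 * math.atan(sensor_height_mm /
                                        (2.0 * focal_length_mm)))

print(round(vertical_fov_deg(50.0), 1))  # 27.0 -- matches the example above
```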
By the method (100) of the invention and the hardware on which it is executed, users looking at the images are able to see, in real time, how an object or article (or a place) will be perceived in terms of dimension and ratio-proportion. Thus, while visualizing objects, users (designers) can convey more realistic and accurate information about the dimensions and ratio-proportion of the object to viewers.
Briefly, the method (100) of the invention is performed according to the following steps:
- Inputting (101) the real dimension and/or size and/or data, such as ratio and proportion, of the articles and/or objects by a designer on a screen,
- Creating (102) a perspective view of the article and/or object based on the input data,
- Identifying (103) a virtual 3D camera and determining the position, lens type and property selection of the camera by a designer according to the article or object,
- Presenting (104) the data obtained from the camera for said article or object to a user (a designer) on a screen via a control unit,
- Comparing (105) the values from the camera and the article or object with the data in a database developed by the method of the invention and identifying the results,
- If an error is detected, presenting (106) the relevant error (or deformation) by a warning image to the designer on the screen,
- If an error is not detected, providing (107) positive feedback by the warning image,
- Presenting (108) the intermediate values ranging from error to accuracy by the warning image,
- Upon presentation of the error on the screen, setting (109) the angle and/or position and/or lens type and properties of the virtual 3D camera automatically by the control unit, or manually by the user.
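The decision logic of steps 104-108 above can be sketched as follows. The class fields, thresholds and the caller-supplied `lookup` function are illustrative assumptions, not values taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraSetup:
    perspective: str   # e.g. "one-point" or "two-point"
    position: str      # e.g. "above", "eye-level", "below"
    lens_angle: float  # lens angle in degrees

def warning_state(accuracy, ok_threshold=0.8, error_threshold=0.6):
    """Map a perceived-accuracy score to the three warning states of
    steps 106-108. The thresholds are illustrative, not from the patent."""
    if accuracy >= ok_threshold:
        return "green"    # step 107: no perceptual error detected
    if accuracy >= error_threshold:
        return "yellow"   # step 108: intermediate value
    return "red"          # step 106: error / deformation detected

def check_setup(camera: CameraSetup, lookup) -> str:
    """Steps 104-105: query the database via a caller-supplied 'lookup'
    function, then return the warning colour for the interface bar."""
    accuracy = lookup(camera.perspective, camera.position, camera.lens_angle)
    return warning_state(accuracy)
```

A usage example: `check_setup(CameraSetup("one-point", "above", 30.0), my_db_lookup)` returns the colour the warning bar should display before rendering.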
For informing the user (designer) of said error in step 105, the position information of the camera in the scene and the lens parameters are compared with the data in the database developed in the method (100). As a result of that comparison, whether there is a deformation in the image of the modeled article or object, and the degree of any deformation, are provided to the user by a simple warning indication (e.g. a bar) on the interface. The data in the database of the invention are provided in Table-1 and Table-2. Said data are exemplary, and the invention should not be considered limited to these exemplary applications.
Table-1: Identification accuracy and standard deviation values of the articles and objects drawn with the one-point perspective method.
[Table-1 data is provided as images (imgf000010_0001 through imgf000011_0003) in the original publication and is not reproduced here.]
Table-2: Identification accuracy and standard deviation values of the articles and objects drawn with the two-point perspective method.
[Table-2 data is provided as images (imgf000011_0002 through imgf000012_0002) in the original publication and is not reproduced here.]
An exemplary application to understand and interpret the data in the database given in Table-1 and Table-2 is as follows:
If the 3D model is visualized with one-point perspective and the camera is positioned above eye level, viewers detect the deformations at a moderate level for all camera lens values selected in the range of 0° to 10°, but this remains at an acceptable level. If the lens value of the camera is selected between 10° and 20°, the image enters a perceptually safer range. The image formed with selections between 20° and 50° will be perceived in the most accurate way by viewers; in this range, the deformations are not observable. With lens selections between 50° and 60°, high validation rates suddenly drop to low validation levels, and deformations at values close to 60° were clearly misperceived by viewers. Deformations were clearly detected in selections between 60° and 70°. These inferences are specific to this exemplary application and may differ according to the other parameters provided in the tables. Thus, by the invention, visual communication between the user and the viewer is established in the most reliable (or safe) way through the rendered images created from the three-dimensional (3D) model. In other words, a safe visual perception is obtained with the method (100) of the invention and the hardware on which it is executed, and consequently the visual effect of the articles or objects on the user and the viewer is conveyed in the most accurate way.

Industrial Applicability of the Invention
The method (100) of the invention is industrially applicable in all sectors that perform computer-aided 3D modeling, animation and production processes.
The invention is not limited to the above exemplary embodiments, and a person skilled in the art may easily devise other embodiments of the invention. These should be considered within the scope of protection sought in the claims.

Claims

1. A computer-implemented method (100) which detects the ratio-proportion differences and dimensional deformations that occur depending on the position and/or angle of the virtual camera and/or the lens type and property selected by a designer during the visualization of objects drawn in 3-dimensional (3D) modeling programs, and which provides the designer performing the modeling with information on how sensitively and accurately said object will be perceived by a viewer in terms of ratio-proportion and dimension, characterized in that the method comprises the following process steps:
• Inputting (101) the real dimension and/or size and/or data, such as ratio and proportion, of the articles and/or objects by a user (a designer) on a screen,
• Creating (102) a perspective view of the article/object based on the input data,
• Identifying (103) a virtual 3D camera and determining the position, lens type and property selection of the camera with respect to the object,
• Presenting (104) the data obtained from the camera for said object to a designer on at least one screen via a control unit,
• Comparing (105) the values from the camera and the object with the data in at least one database and identifying the results,
• If an error is detected, presenting (106) the relevant error by a warning image to the designer on the screen,
• If an error is not detected, presenting (107) information that there is no error to the designer by a warning image on said screen,
• Presenting (108) the intermediate values ranging from error to accuracy by the warning image,
• Upon presentation of the error on said screen, setting (109) the angle and/or position and/or lens type and properties of the virtual 3D camera automatically by the control unit, or manually by the designer.
2. A method according to claim 1, characterized in that the method comprises the steps of determining (105) whether there is a deformation in the modeled object according to the data in the database, by comparing the values from the camera and the object with the data in at least one database, and presenting this result to the designer by at least one warning image via an interface on the screen.
3. Hardware for detecting the ratio-proportion differences and dimensional deformations that occur depending on the position and/or angle of the virtual camera and/or the lens type and property selected by a designer during the visualization of objects drawn in 3-dimensional (3D) modeling programs, and for providing the designer performing the modeling with information on how sensitively and accurately said object will be perceived by a viewer in terms of ratio-proportion and dimension, characterized in that the hardware comprises the following:
• at least one control unit which displays an error on a screen in case of a deformation during the visualization of the object in a three-dimensional scene virtually according to the virtual camera created in at least one 3D modeling or animation program and said virtual camera data, wherein the control unit allows the input of the method parameters,
• at least one interface comprising a warning image so that the designer may see the created image or a possible error in the image,
• at least one data storage unit in which the drawn data is stored.
4. Hardware according to claim 3, characterized in that it is an electronic device such as a computer, a tablet, a phone and the like.
PCT/TR2022/050284 2021-03-31 2022-03-31 A method used in 3 dimensional (3d) modelling programs WO2022211766A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR202105788 2021-03-31
TR2021/005788 2021-03-31

Publications (1)

Publication Number Publication Date
WO2022211766A1 true WO2022211766A1 (en) 2022-10-06


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004362128A (en) * 2003-06-03 2004-12-24 Shimizu Corp Method for correcting three-dimensional attitude in model image collation
WO2009100538A1 (en) * 2008-02-13 2009-08-20 Dirtt Environmental Solutions, Ltd. Rendering and modifying cad design entities in object-oriented applications
US20200057831A1 (en) * 2017-02-23 2020-02-20 Siemens Mobility GmbH Real-time generation of synthetic data from multi-shot structured light sensors for three-dimensional object pose estimation
CN111932673A (en) * 2020-09-22 2020-11-13 中国人民解放军国防科技大学 Object space data augmentation method and system based on three-dimensional reconstruction



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22781787

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE