CN107484428B - Method for displaying objects - Google Patents

Method for displaying objects

Info

Publication number
CN107484428B
CN107484428B (application number CN201680018299.0A)
Authority
CN
China
Prior art keywords
model
image
coordinates
texture
acquired image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201680018299.0A
Other languages
Chinese (zh)
Other versions
CN107484428A (en)
Inventor
Vitaly Vitalievich Averyanov
Andrey Valerievich Komissarov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Devar Entertainment Ltd
Original Assignee
"laboratory 24" Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Limited Liability Company "Laboratory 24"
Publication of CN107484428A
Application granted
Publication of CN107484428B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/42 Analysis of texture based on statistical description of texture using transform domain methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/192 Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194 References adjustable by an adaptive method, e.g. learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Abstract

The present invention relates to techniques for processing and generating image data and to the visualization of three-dimensional (3D) images. The technical result is the ability to reproduce the real texture of a photo or video image of an object on the visualized image. According to the claimed method, a 3D model is generated and stored in the memory of a display device together with a reference image and the coordinates of textured segments, those coordinates corresponding to polygons of the 3D model; at least one photo or video frame of the object is captured and the object in the frame is identified from the reference image; if more than one frame is present, a frame is selected on the basis of image quality; a matrix is generated that converts the photo-image coordinates into the model's own coordinates; a texture of the scanned image area is generated using the coordinate transformation matrix and data interpolation, so that elements of the 3D model are colored with the corresponding elements of the photo image; and the texture of the 3D model is then assigned.

Description

Method for displaying objects
Technical Field
The present invention relates to the processing and generation of image data, the analysis of images and their textures, and the rendering of 3D (three-dimensional) images, including the texture display of 3D images.
Background
The closest prior art is a method of generating texture in real time, comprising the following steps: acquiring the position of the observer; calculating the field of view; determining the resolution required for visualization; acquiring a position map of the subject object; acquiring parameters of the subject object; forming a mask of the subject object; receiving image data of the subject object; preparing the texture of the subject object; texturing the subject object with the mask; placing the textured subject object on the texture map; acquiring a location map of the 3D-object image; acquiring the 3D-object parameters; determining the object type; forming the 3D-object model; acquiring the texture of the 3D object; texturing the 3D object; rendering the 3D object; forming a mask of the 3D-object image; forming a point-element or mnemonic image of the 3D object; forming a mask of the point-element or mnemonic image of the 3D object; placing the 3D-object image on the texture map and visualizing it (see RU 2295772 C1, cl. G06T 11/60).
The known method can be used for visualizing cartographic images of terrain, using parametric data of the subject objects to construct the texture of their images.
A disadvantage of the known method is that a limited set of conditional textures is defined in advance for each particular object; the method does not provide for transferring the real appearance of the object's surface to the output image.
Disclosure of Invention
The technical result achieved herein is the ability to display an output image carrying the real texture of a photo or video image of the object, a simpler implementation that eliminates the need for a database of reference object textures, and the texturing of 3D-model regions that are not visible on the 2D object.
The result is achieved by a method of displaying an object according to option 1, the method comprising: forming a 3D model; taking a photo or video image of the object; visualizing the 3D model; storing the 3D model in a memory of the display device together with the reference image and the coordinates of the textured segments corresponding to polygons of the 3D model; receiving at least one photo or video frame of the object; identifying the object on the frame from the reference image; if there is more than one frame, selecting a frame based on image quality; forming a transformation matrix adapted to convert the coordinates of the photo image into the model's own coordinates; coloring elements of the 3D model with the colors of the corresponding photo elements by forming a texture of the scanned image area using the coordinate transformation matrix and data interpolation; and then assigning the texture of the 3D model so that the corresponding polygons are covered by the corresponding texture areas according to the coordinates determined in the texturing stage, at least those parts of the 3D model not present on the photo image of the object being textured according to a predetermined order, wherein the object is two-dimensional or perceived as a two-dimensional image, the 3D model is formed with respect to at least part of the two-dimensional image, and the 3D model is visualized over the video stream using augmented reality tools and/or computer vision algorithms.
Further, the 3D model is formed as represented by polygons;
the coordinate transformation matrix converts the photo-image coordinates into the model's own coordinates, i.e., Cartesian coordinates characterized by orthogonality of the coordinate axes;
wherein the segments of the 3D model that are not present on the image of the object are portions of the back side of the depicted detail;
wherein texturing the 3D model according to the predetermined order comprises generating texture coordinates such that an area of the back side of the model has the same texture coordinates as the corresponding segment of the front side;
wherein segments of the 3D model that are not present on the image of the object are textured based on extrapolation of data from the visible portion of the image;
wherein the 3D model is animated;
wherein the object perceived as a two-dimensional image is a graphic image executed on a curved surface.
The technical result provides the ability to display the true texture of a photo or video image of an object on an output image, gives children learning opportunities through a drawing application, simplifies the implementation by eliminating the need to store a database of reference object textures, textures parts of the 3D model that are not visible on the 2D object, and simplifies the texturing process by letting untrained users apply common techniques for painting 3D models.
The result is also achieved by a method of displaying an object according to option 2, comprising: forming a 3D model; taking a photo or video image of the object; saving the 3D model in a memory of the display device together with the reference image and the coordinates of the textured segments, those coordinates corresponding to regions of the 3D model; acquiring at least one photo or video frame of the object; identifying the object on the frame from the reference image; if there is more than one frame, selecting a frame according to image quality; forming a coordinate transformation matrix adapted to convert the photo-image coordinates into the model's own coordinates; performing color scanning at predetermined photo-image points using the coordinate transformation matrix and, on that basis, coloring elements of the 3D model with the colors of the corresponding photo elements by determining the colors of the 3D-model materials; and then assigning the colors to the respective 3D-model materials, at least those portions of the 3D model absent from the photo image of the object being textured according to a predetermined order. The object is two-dimensional or perceived as a two-dimensional image, the 3D model is formed with respect to at least a portion of the two-dimensional image, and the 3D model is rendered over a series of video frames using augmented reality tools and/or computer vision algorithms.
Further, the 3D model is formed as represented by polygons;
the transformation matrix converts the coordinates of the photo image into the model's own coordinates, i.e., Cartesian coordinates characterized by orthogonality of the coordinate axes;
wherein the segments of the 3D model that are not present on the image of the object are portions of the back side of the depicted detail;
wherein texturing the 3D model according to the predetermined order means generating texture coordinates in such a way that an area of the back side of the model has the same texture coordinates as the corresponding segment of the front side;
wherein segments of the 3D model that are not present on the image of the object are textured based on extrapolation of data from the visible portion of the image;
wherein the 3D model is animated;
wherein the object perceived as a two-dimensional image is a graphic image executed on a curved surface.
Drawings
Fig. 1 depicts a block diagram of a PC-based display device and a remote server for storing reference images and the 3D model described in example 2.
Fig. 2 shows an image of the original object (a two-dimensional graphic image before coloring), which corresponds to the reference image of the object.
Fig. 3 shows the colored original graphic image and the textured 3D model visualized on the screen of the display device against the background of the picture.
FIG. 4 is a block diagram of the operating program of the computing means of the display device.
The following reference numerals are used in the drawings: 1 - video camera or camera; 2 - computing means; 3 - server; 4 - monitor; 5 - Internet; 6 - input of initial data: 3D model, texture coordinates, reference image, video stream; 7 - video stream analysis; 8 - checking the condition that the video stream contains the reference image; 9 - frame analysis; 10 - checking the framing conditions; 11 - generating the photo image taking into account the coordinate transformation matrix; 12 - scanning the texture in the assigned segments and texturing those segments; 13 - checking, on the video image from the video camera, the condition that the object is identified; 14 - output to the monitor, visualization of the 3D model over the video; 15 - end of program; 16 - printer; 17 - original object, a two-dimensional graphic image; 18 - two-dimensional graphic image colored by the user; 19 - display device (smartphone); 20 - visualization of the 3D model on the display device monitor; 21 - visualization of the background part of the 3D model.
Detailed Description
A method of displaying an object comprising a two-dimensional image according to option 1 comprises, in order: forming a reference image of the object and a 3D model represented by polygons, and storing them in a memory of the display device, the reference image having a textured region whose segment coordinates correspond to the coordinates of the polygons; receiving at least one photo or video frame of the object; identifying the object on the photographic image based on the reference image; selecting a frame that meets image quality requirements (e.g., sharpness, detail, signal-to-noise ratio); forming a coordinate transformation matrix for converting the coordinates of the photographic image into the object's own coordinates, a system characterized by orthogonality of the coordinate axes; forming the texture of the scanned image area using the coordinate transformation matrix and data interpolation, so that elements of the 3D model are colored with the colors of the corresponding photo elements; then assigning the acquired image of the scanned area as the texture of the 3D model, so that the corresponding polygons are covered by the respective textured areas according to the coordinates pre-formed in the texturing stage. The 3D model is then visualized. At the same time, at least some parts of the 3D model (such as parts of the back side of the picture) are colored according to a predetermined order, and the 3D model is formed with respect to at least part of the two-dimensional image, e.g., with respect to the most significant set of image elements.
After recognition, the frame carrying the most information for scanning is selected from the captured frames. Such a frame can be the one with the sharpest image, the most detail, etc. Visualization of the 3D model is achieved over the video (video stream) using augmented reality and/or computer vision algorithms.
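By way of illustration only (the patent does not prescribe a specific quality metric), such a selection step is often implemented with a sharpness score such as the variance of the Laplacian. The sketch below assumes frames arrive as OpenCV BGR arrays; it is an assumption for illustration, not the claimed procedure.

```python
# Illustrative sketch of quality-based frame selection; the metric
# (variance of the Laplacian) is an assumed choice, not mandated here.
import cv2

def sharpness(frame):
    """Higher variance of the Laplacian indicates a sharper image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_best_frame(frames):
    """From the frames on which the object was recognized, keep the
    one carrying the most information for scanning."""
    return max(frames, key=sharpness)
```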
Coloring the 3D model according to the predetermined order comprises generating texture coordinates in such a way that regions of the back side of the model have the same texture coordinates as the corresponding segments of the front side, or coloring segments of the back side of the model based on extrapolation of data from the visible portion of the image.
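The first variant can be sketched as follows, assuming the model stores per-vertex texture coordinates as an N x 2 numpy array with values in [0, 1]; the optional U-flip is an illustrative choice, not part of the claimed method.

```python
import numpy as np

def back_side_uvs(front_uvs, mirror_u=False):
    """Give back-side polygons the same texture coordinates as their
    front-side counterparts; optionally flip U so the pattern is not
    read mirror-reversed (an illustrative option)."""
    uvs = np.asarray(front_uvs, dtype=np.float32).copy()
    if mirror_u:
        uvs[:, 0] = 1.0 - uvs[:, 0]
    return uvs
```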
The 3D model is animated.
The method of displaying an object according to option 1 works as follows. The objects for display are graphical two-dimensional objects such as drawings, charts, schematics, maps, etc. The method assumes the process of recognizing a graphical object on a photographic image by means of the computing means of a display device equipped with a video recorder, camera or other scanning device, and a monitor. Such devices can be mobile phones, smart phones, tablets, personal computers, etc.
A set of two-dimensional objects (markers) is created in advance, each juxtaposed with a plot-related 3D model represented by polygons and with a reference image. Each two-dimensional image is thus associated with a reference image and a 3D model, which are stored in the memory of the display device. The reference image is used for identifying the object and for forming the coordinate transformation matrix. After the 3D model is textured, it is visualized against a specific background, which can be the video stream formed at the output of the video camera, a photo image received after photographing the object, or another background.
The formation of the 3D model includes a process of generating texture coordinates.
Recognition is performed by comparing the photographic image of the object with the reference images stored in the memory of the display device: the image is considered recognized when the correlation coefficient between the photographic image and one of the reference images exceeds a threshold value; alternatively, other known recognition algorithms are used.
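As one possible reading of this correlation test (normalized cross-correlation is only one of the "known recognition algorithms" the text allows), a minimal sketch follows; the threshold value is an assumption, not a figure from the patent.

```python
import cv2

RECOGNITION_THRESHOLD = 0.7   # illustrative value, not from the patent

def is_recognized(photo_gray, reference_gray):
    """Treat the object as recognized when the correlation coefficient
    between the photo and a reference image exceeds the threshold.
    Assumes the reference is no larger than the photo."""
    scores = cv2.matchTemplate(photo_gray, reference_gray,
                               cv2.TM_CCOEFF_NORMED)
    return scores.max() >= RECOGNITION_THRESHOLD
```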
Object capture can be performed within a specific range of angles and distances; after the object is identified on the photographic image, a matrix relating the photographic-image coordinates to the object's own coordinates (the coordinate transformation matrix) is formed, the own coordinate system being characterized by orthogonality of the coordinate axes.
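In practice such a matrix is commonly estimated as a homography from feature correspondences between the photo and the reference image. A sketch using ORB features and RANSAC is given below as one possible realization, not as the patented procedure.

```python
import cv2
import numpy as np

def coordinate_transform_matrix(photo_gray, reference_gray):
    """Estimate the matrix mapping photo coordinates to the object's
    own (Cartesian) coordinates from matched keypoints."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_photo, des_photo = orb.detectAndCompute(photo_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_photo, des_ref)
    if len(matches) < 4:  # a homography needs at least four point pairs
        return None
    src = np.float32([kp_photo[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    matrix, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return matrix
```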
The coordinates of the textured segments, juxtaposed with the corresponding polygons of the 3D model, are stored in the memory of the device displaying the object.
After the object is identified, the texture of the scanned image area is formed based on the values of the coordinate transformation matrix and data interpolation. The acquired image of the scanned area is then assigned as the texture of the 3D model, so that the corresponding polygons are covered by the corresponding texture areas according to the coordinates pre-formed in the texturing stage.
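Given such a matrix, forming the texture of the scanned area amounts to resampling the photo into the model's own coordinate frame with interpolation; in the sketch below, the output size and bilinear interpolation are illustrative choices.

```python
import cv2

def build_texture(photo_bgr, matrix, texture_size=(1024, 1024)):
    """Resample the photographed area into the texture's own frame.
    Bilinear interpolation fills texture pixels that fall between
    photo pixels, i.e. the 'data interpolation' step."""
    return cv2.warpPerspective(photo_bgr, matrix, texture_size,
                               flags=cv2.INTER_LINEAR)
```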
Texturing of the 3D model assumes that the texture is assigned to one or more materials of the 3D model. A material of a 3D model is, by generally accepted convention, a set of information that determines how the model fragments to which it is assigned are displayed; it may include a texture, a color, and the like.
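As a concrete illustration (the field names are assumptions, not the patent's data model), a material can be represented as a record bundling these display attributes:

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class Material:
    """Illustrative stand-in for a 3D-model material: a conventional
    set of information governing how assigned model fragments are
    displayed."""
    name: str
    color: Tuple[int, int, int] = (255, 255, 255)  # base RGB color
    texture: Optional[np.ndarray] = None           # optional texture image
    polygon_ids: Tuple[int, ...] = ()              # fragments it is assigned to
```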
The process of texturing the 3D model also includes transferring color to portions of the 3D model that are not visible on the 2D graphic image; such "invisible" portions can be, for example, the back side of an image element, or a side, top, or bottom view of it. The color of such "invisible" parts is transferred to the polygons of the 3D model, e.g., based on a symmetric structure of the 3D model on both sides, by rendering the "invisible" areas in darker hues, or by other algorithms, including extrapolation methods.
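The "darker hue" variant can be illustrated by dimming a color sampled from the visible side before assigning it to a hidden region; the dimming factor below is an assumption for illustration.

```python
def back_side_color(front_rgb, dim=0.6):
    """Derive a color for an 'invisible' region from the visible one,
    rendered in a darker hue (the factor 0.6 is illustrative only)."""
    return tuple(int(c * dim) for c in front_rgb)

# Example: back_side_color((200, 120, 40)) -> (120, 72, 24)
```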
After texturing of the 3D model (i.e., once its texture coordinates are populated), the model is displayed on the monitor screen of the display device, either immediately or at the user's command.
The output image comprises a video image on which the model (including an animated model) is drawn over a background, such as the video (video stream) received from the video camera, so that a realistic composition is created.
Thus, the method of displaying an object allows a user to apply, to a virtual object, a texture captured from real space by means of a camera or video camera. During visualization, the user is given the opportunity to control the model in space, i.e., to rotate, move, or zoom it, including by using the input devices of the display device or by gestures within the field of view of the video camera.
The computing means of the display device are made on the basis of a processor and contain a memory for storing an operating program of the processor and the necessary data, including the reference image and the 3D model.
A method of displaying an object, the object being a two-dimensional image according to option 2, comprises, in order: forming a reference image of the object, which has a textured region, and a 3D model represented by polygons, and storing them in the memory of the device, the coordinates of the polygons corresponding to the coordinates of the textured region; receiving at least one photo or video frame of the object; identifying the object on the photographic image based on the reference image; selecting a frame that meets image quality requirements (e.g., sharpness, detail, signal-to-noise ratio); forming a matrix for converting the coordinates of the photographic image into the object's own coordinates, in which the coordinate axes are orthogonal; performing color scanning at predetermined image points using the coordinate transformation matrix and, based on it, coloring the elements of the 3D model with the colors of the corresponding photo elements by determining the colors of the 3D-model materials; and then assigning the colors to the respective 3D-model materials. The 3D model is then visualized.
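The color-scanning step of option 2 can be sketched as below, assuming the predetermined sample points are given in the model's own coordinates (the material names and point values are hypothetical) and that the matrix maps photo coordinates to own coordinates, so its inverse is used for sampling.

```python
import numpy as np

def scan_material_colors(photo_bgr, matrix, sample_points):
    """Read the photo color at each predetermined point and return one
    color per material. `sample_points` maps a material name to an
    (x, y) point in the model's own coordinates. Bounds checking is
    omitted for brevity."""
    to_photo = np.linalg.inv(matrix)   # own coordinates -> photo coordinates
    colors = {}
    for material, (x, y) in sample_points.items():
        p = to_photo @ np.array([x, y, 1.0])
        px, py = int(p[0] / p[2]), int(p[1] / p[2])
        colors[material] = photo_bgr[py, px]   # BGR color for this material
    return colors

# Hypothetical usage:
# colors = scan_material_colors(frame, matrix, {"body": (120, 300), "wing": (420, 180)})
```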
At the same time, at least some segments of the 3D model (e.g., segments of the back side of the picture) are colored according to a predetermined order, and the 3D model is formed with respect to at least a portion of the two-dimensional image, e.g., with respect to the most significant set of image elements.
After recognition, the frame carrying the most information for scanning is selected from the captured frames. Such a frame can be the one with the sharpest image or the most detail.
Visualization of 3D models is performed on video (video stream) using augmented reality and/or computer vision algorithms.
Coloring the 3D model according to the predetermined order is performed by generating texture coordinates in such a way that an area of the back side of the model has the same texture coordinates as the corresponding segment of the front side, or by coloring segments of the back side of the model based on extrapolation of data from the visible portion of the image.
The 3D model may be implemented as an animated model.
The method of displaying the object according to option 2 works as follows. The objects for display are graphical two-dimensional objects such as drawings, charts, schematics, maps, etc. The method assumes the process of recognizing a graphical object on a photographic image by means of the computing means of a display device equipped with a video recorder, camera or other scanning device, and with a monitor. Such devices can be mobile phones, smart phones, tablets, personal computers, etc.
A set of objects in the form of two-dimensional images (markers) is created in advance and juxtaposed with corresponding three-dimensional models (3D models) represented by polygons and with reference images. Each two-dimensional image is associated with a reference image and a 3D model, which are stored in the memory of the display device. The reference image is used to identify the object and to form the coordinate transformation matrix. After coloring, the 3D model is visualized against a specific background, which can be the video stream formed at the output of the camera, a photo image received after photographing the object, or a different background.
The formation of the 3D model includes the process of generating texture coordinates. Recognition is performed by comparing the photo image of the object with the reference images stored in the memory of the display device: the photo image is considered recognized when the correlation coefficient between it and one of the reference images exceeds a threshold value, or other known recognition algorithms are used.
Object capture can be performed within a specific range of angles and distances; after the object is identified on the photographic image, a matrix relating the photographic-image coordinates to the object's own coordinates (the coordinate transformation matrix) is formed, the own coordinate system being characterized by orthogonality of the coordinate axes.
The coordinates of the textured segments, juxtaposed with the corresponding regions of the 3D model, are stored in the memory of the device displaying the object.
After the object is identified, the colors of the designated areas are read from the photo image using the values of the coordinate transformation matrix and data interpolation. Owing to the rigid correspondence between these segments and the regions of the 3D model, the surface color of the 3D model is adapted to the color of the sensed object: the materials assigned to the segments of the model are painted directly, without using a texture.
3D-model texturing includes assigning textures to one or more materials of the 3D model. A material of a 3D model is, by generally accepted convention, a set of information that determines how the model fragments to which it is assigned are displayed; it may include a texture, a color, and the like.
The process of 3D-model texturing includes transferring colors to parts of the 3D model that are not visible on the 2D graphic image; such "invisible" parts can be, for example, the back side of an image element, or a side, top, or bottom view of it. The transfer of colors to the "invisible" regions of the 3D model is based, for example, on a symmetric structure of the 3D model on both sides, on rendering the "invisible" parts in darker shades, or on other algorithms, including extrapolation methods.
After texturing of the 3D model (i.e., once its texture coordinates are populated), the model is displayed on the monitor screen of the display device, either immediately or at the user's command.
The output image is a video image on which the model (including an animated model) is drawn over a background (such as the video stream received from the video camera), so that a realistic composition is created.
Thus, the method of displaying an object allows a user to apply, to a virtual object, a texture sensed from real space by means of a camera or video camera.
During visualization, the user is given the opportunity to control the model in space (i.e., rotate, move, zoom, etc.), including by using the input devices of the display device or by gestures within the field of view of the video camera.
The computing means of the display device for implementing the method according to any of options 1 or 2 is processor-based and contains a memory for storing a processor operating program and the necessary data, including the reference image and the 3D model.
A block diagram of the processor operating program is shown in fig. 4 and includes the following major elements. The initial data 6 stored in the program memory comprise a pre-formed 3D model, texture coordinates, reference images of the object, and the video stream formed at the output of the video camera. The term "video stream" as used herein is equivalent to the term "video sequence". The program analyzes the video stream to select a frame or frames that meet the requirements for image sharpness, framing (blurring), exposure, focus, etc. The frames are sorted and analyzed until a frame meeting the specific requirements is found; the analysis is performed sequentially in two stages. First (7, 8), a frame containing the object to be displayed is selected from the video sequence and the object is identified on it; then (9, 10), a frame meeting the sharpness and framing requirements is selected from the selected group of frames.
Next, a coordinate transformation matrix is formed (11), mapping the coordinates of the photo-image frame to the Cartesian coordinates of a strict frontal view of the object. The texture in the designated textured areas is scanned, and the material is assigned to the 3D-model texture coordinates (12). To determine whether the object is present in a frame, the video stream from the camera output is analyzed (13); if it is, the model is visualized over the video stream (video sequence) obtained from the camera output (14).
Once the object is no longer identified on the video frame, the program is terminated.
Alternatively, instead of terminating the program, the following operations can be performed: returning to the beginning of the program, switching the device to a short standby mode to await renewed recognition, notifying the user that acquisition of the object image has been lost, or another operation.
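The surrounding control flow can be outlined as follows; `is_recognized` refers to the recognition sketch above, and `visualize` is a caller-supplied callback standing in for the AR rendering step (an assumption, not an API defined by the patent).

```python
import cv2

def run(reference_gray, visualize):
    """Main loop: visualize the model while the object remains
    recognized on the video stream, and terminate once it is lost."""
    cap = cv2.VideoCapture(0)          # device video camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if not is_recognized(gray, reference_gray):
                # Object lost: terminate (alternatives: restart,
                # standby mode, or notify the user, as noted above).
                break
            visualize(frame)           # model drawn over the video frame
    finally:
        cap.release()
```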
Example 1
The objects comprise drawings from a developmental set of children's outline coloring pictures; each is a simple drawing (fig. 2) comprising a rectangular outline drawn on a standard sheet of paper, with picture elements intended for coloring. Each picture comprises one or more primary elements located in the centre of the sheet and, as a rule, secondary background elements located at the edges.
Each picture is associated with a pre-created reference image, the coordinates of the color-detection areas of the object, and an animated 3D model whose selected regions, represented by polygons, correspond to these areas. The 3D model reflects a stereoscopic view of the main elements of the picture and is associated with the coordinates of these elements in the image.
The display device is a smartphone equipped with a video camera, a computing device with corresponding software, a monitor, etc.
After the outline picture is colored by the user, the smartphone is positioned so that the whole picture fits in the frame, and the picture is photographed or recorded on video. Using its computing means, the smartphone recognizes the image on the selected frame, i.e., finds the pre-created 3D model corresponding to the image; if several frames are captured, it selects the frame carrying the most information. It also forms the matrix relating the coordinates of the image elements on the photographic image to their own coordinates in the Cartesian system. As a result, the coordinates of the color-recognition areas of the drawn picture are matched with the coordinates of the corresponding segments on the photo image.
The colors of the drawn regions are scanned on the photographic image, and after the necessary analysis, matching, and color correction, the coloring of the segments is transferred to the corresponding 3D-model polygons, i.e., the acquired colors are assigned directly to the model materials.
The next step is the visualization of the 3D model (fig. 3), which is displayed over a background formed by the secondary elements of the picture on the photographic image or by the video sequence acquired by the capture means of the smartphone. The 3D model can be made animated, with additional elements not shown in the figure.
The rendered 3D model is interactive and can respond to user actions.
Example 2
The display device comprises a personal computer, to which a web camera and a monitor are connected, and a remote server (fig. 1). The monitor or display may be any visualization device, including a projector or a hologram-forming device. The reference image and the 3D model of the object are stored on the remote server, which is accessed during the display of the graphical two-dimensional object.
The calculations in the recognition process are carried out by the personal computer, by means of which the materials of the 3D model are also colored and the model is rendered.
The computer is connected to the server via the internet or other network, including a local network.
The display process proceeds as follows. The user accesses, over the Internet, a web site that contains a themed album of pictures for printing and subsequent coloring. The web site provides a suitable interface for accessing the reference images and the 3D models corresponding to the pictures in the album, which are stored there.
The user prints the selected picture from the album with the help of the printer and colors it as desired. The user can also obtain the printed picture in other ways, for example by mail. Then, in the web-site interface, the user points the web camera so that the main part of the drawn picture is contained in the frame. The user's computer, executing the appropriate instructions of the program, accesses the remote server and receives from it the reference image for identifying the drawing. After recognition of the picture is completed, a coordinate transformation matrix is generated by the personal computer; the program senses the colors of the drawn areas of the picture and assigns them to the corresponding 3D-model materials.
The image of the textured 3D model is output to the monitor over the background of the video sequence obtained from the web-camera output. The method of displaying objects can be implemented using standard equipment and components, including processor-based computing devices, a camera and/or video camera, a monitor or other visualization device, and communication means between them.
Thus, the method of displaying objects according to either option 1 or option 2 provides the ability to display the true texture of a photo or video image on an output image, gives children learning opportunities through a drawing application, simplifies the implementation by eliminating the need to store reference object textures, and provides the ability to texture areas of the 3D model that are not visible on the 2D object. The method also simplifies the texturing process by letting untrained users apply common techniques for painting 3D models.

Claims (14)

1. A method for displaying a virtual object on a computing device comprising a memory, a camera and a display, said memory being adapted to store at least one reference image and at least one 3D model, said method comprising:
initially forming a reference image, storing the reference image in a memory, the reference image having a textured region,
initially forming a 3D model, the 3D model being associated with a reference image, storing the 3D model in a memory,
acquiring an image from the camera,
identifying a virtual object on the acquired image based on the reference image,
forming a transformation matrix for juxtaposing the coordinates of the acquired image with the coordinates of the 3D model,
juxtaposing the coordinates of the textured segments of the acquired image with the corresponding segments of the 3D model,
determining colors of the corresponding segments of the acquired image by using the transformation matrix, and forming textures of the corresponding segments of the acquired image,
rendering segments of the 3D model using the colors and textures of the corresponding segments of the acquired image, and
displaying the 3D model over the video stream using an augmented reality tool and/or computer vision algorithms.
2. The method of claim 1, wherein the 3D model is represented by a polygon; and
the transformation matrix is adapted to juxtapose coordinates of the textured segment of the acquired image with coordinates of a corresponding polygon of the 3D model.
3. The method of claim 1, further comprising the step of:
forming a portion of the 3D model that is not visible on the acquired image by using the transformation matrix and data interpolation;
applying a texture to the 3D model by overlaying respective polygons of the 3D model with the texture of the respective segments according to the determined coordinates, wherein at least some parts of the 3D model that are not visible on the acquired image are rendered according to a predetermined order.
4. The method of claim 3, wherein the portion of the 3D model that is not visible on the received image represents a back side of the virtual object; and/or
wherein portions of the 3D model that are not visible on the received image of the object are textured based on an extrapolation of the visible portions of the received image.
5. The method according to claim 1, wherein the transformation matrix is adapted to transform the photo-image coordinates into Cartesian coordinates of the 3D model, said coordinates being characterized by orthogonality of the coordinate axes.
6. The method of claim 1, wherein the 3D model is animated; and/or
wherein the virtual object is a graphic image made on a curved surface.
7. The method of claim 1, wherein the captured image comprises a video frame selected from a video stream based on the quality of the video frame image.
8. A computing device adapted to display a virtual object, the computing device comprising a memory, a camera and a display, the memory being adapted to store at least one reference image and at least one 3D model, wherein each reference image is associated with a 3D model, the computing device being adapted to:
initially forming a reference image, storing the reference image in a memory, the reference image having a textured region,
initially forming a 3D model, the 3D model being associated with a reference image, storing the 3D model in a memory,
acquiring an image from the camera,
identifying a virtual object on the acquired image based on the reference image,
forming a transformation matrix for juxtaposing the coordinates of the acquired image with the coordinates of the 3D model,
juxtaposing the coordinates of the textured segments of the acquired image with the corresponding segments of the 3D model,
determining colors of the corresponding segments of the acquired image by using the transformation matrix, and forming textures of the corresponding segments of the acquired image,
rendering segments of the 3D model using the colors and textures of the corresponding segments of the acquired image, and
displaying the 3D model over the video stream using an augmented reality tool and/or computer vision algorithms.
9. The computing device of claim 8, wherein the 3D model is represented by a polygon; and
wherein the transformation matrix is adapted to juxtapose coordinates of the textured segment of the acquired image with coordinates of a corresponding polygon of the 3D model.
10. The computing device of claim 8, further adapted to:
forming a portion of the 3D model that is not visible on the acquired image by using the transformation matrix and data interpolation;
applying a texture to the 3D model by overlaying respective polygons of the 3D model with the texture of the respective segments according to the determined coordinates, wherein at least some parts of the 3D model that are not visible on the acquired image are rendered according to a predetermined order.
11. The computing device of claim 10, wherein the portion of the 3D model that is not visible on the received image represents a back side of the virtual object; and
wherein portions of the 3D model that are not visible on the received image of the object are textured based on an extrapolation of portions of the received image that are visible.
12. The computing device of claim 8, wherein the transformation matrix is adapted to transform the photo-image coordinates into Cartesian coordinates of the 3D model, the coordinates being characterized by orthogonality of the coordinate axes.
13. The computing device of claim 8, wherein the 3D model is animated; and wherein the virtual object is a graphic image executed on a curved surface.
14. The computing device of claim 8, wherein the captured image comprises a video frame selected from a video stream based on a quality of the video frame image.
Application CN201680018299.0A (priority date 2015-03-25, filing date 2016-02-25): Method for displaying objects; granted as CN107484428B; status: Expired - Fee Related.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
RU2015111132 2015-03-25
RU2015111132/08A RU2586566C1 (en) 2015-03-25 2015-03-25 Method of displaying object
PCT/RU2016/000104 WO2016153388A1 (en) 2015-03-25 2016-02-25 Method for depicting an object

Publications (2)

Publication Number Publication Date
CN107484428A CN107484428A (en) 2017-12-15
CN107484428B (en) 2021-10-29

Family

ID=56115496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680018299.0A Expired - Fee Related CN107484428B (en) 2015-03-25 2016-02-25 Method for displaying objects

Country Status (6)

Country Link
US (1) US20180012394A1 (en)
EP (1) EP3276578A4 (en)
KR (1) KR102120046B1 (en)
CN (1) CN107484428B (en)
RU (1) RU2586566C1 (en)
WO (1) WO2016153388A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10397555B2 (en) * 2017-08-25 2019-08-27 Fourth Wave Llc Dynamic image generation system
WO2019123422A1 (en) * 2017-12-22 2019-06-27 Sr Labs S.R.L. Mapping method and system for mapping a real environment
US11282543B2 (en) * 2018-03-09 2022-03-22 Apple Inc. Real-time face and object manipulation
CN109191369B (en) * 2018-08-06 2023-05-05 三星电子(中国)研发中心 Method, storage medium and device for converting 2D picture set into 3D model
CN109274952A (en) * 2018-09-30 2019-01-25 Oppo广东移动通信有限公司 A kind of data processing method, MEC server, terminal device
CN109446929A (en) * 2018-10-11 2019-03-08 浙江清华长三角研究院 A kind of simple picture identifying system based on augmented reality
US11068325B2 (en) * 2019-04-03 2021-07-20 Dreamworks Animation Llc Extensible command pattern
US10891766B1 (en) * 2019-09-04 2021-01-12 Google Llc Artistic representation of digital data
JP7079287B2 (en) * 2019-11-07 2022-06-01 株式会社スクウェア・エニックス Viewing system, model configuration device, control method, program and recording medium
CN111182367A (en) * 2019-12-30 2020-05-19 苏宁云计算有限公司 Video generation method and device and computer system
CN111640179B (en) * 2020-06-26 2023-09-01 百度在线网络技术(北京)有限公司 Display method, device, equipment and storage medium of pet model
CN111882642B (en) * 2020-07-28 2023-11-21 Oppo广东移动通信有限公司 Texture filling method and device for three-dimensional model
CN113033426B (en) * 2021-03-30 2024-03-01 北京车和家信息技术有限公司 Dynamic object labeling method, device, equipment and storage medium
WO2023136366A1 (en) * 2022-01-11 2023-07-20 엘지전자 주식회사 Device and method for providing augmented reality service
CN114071067B (en) * 2022-01-13 2022-03-29 深圳市黑金工业制造有限公司 Remote conference system and physical display method in remote conference

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887589A (en) * 2010-06-13 2010-11-17 东南大学 Stereoscopic vision-based real low-texture image reconstruction method
CN104268922A (en) * 2014-09-03 2015-01-07 广州博冠信息科技有限公司 Image rendering method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999015945A2 (en) * 1997-09-23 1999-04-01 Enroute, Inc. Generating three-dimensional models of objects defined by two-dimensional image data
RU2216781C2 (en) * 2001-06-29 2003-11-20 Самсунг Электроникс Ко., Лтд Image-based method for presenting and visualizing three-dimensional object and method for presenting and visualizing animated object
CA2575704C (en) * 2004-07-30 2014-03-04 Extreme Reality Ltd. A system and method for 3d space-dimension based image processing
US7542034B2 (en) * 2004-09-23 2009-06-02 Conversion Works, Inc. System and method for processing video images
US7415152B2 (en) * 2005-04-29 2008-08-19 Microsoft Corporation Method and system for constructing a 3D representation of a face from a 2D representation
RU2295772C1 (en) * 2005-09-26 2007-03-20 Пензенский государственный университет (ПГУ) Method for generation of texture in real time scale and device for its realization
CN101361041B (en) * 2006-02-01 2012-03-21 富士通株式会社 Object relation display program and method
KR100914845B1 (en) * 2007-12-15 2009-09-02 한국전자통신연구원 Method and apparatus for 3d reconstructing of object by using multi-view image information
WO2011047360A1 (en) * 2009-10-15 2011-04-21 Ogmento, Inc. Systems and methods for tracking natural planar shapes for augmented reality applications
RU2453922C2 (en) * 2010-02-12 2012-06-20 Георгий Русланович Вяхирев Method of displaying original three-dimensional scene based on results of capturing images in two-dimensional projection
CN105229703B (en) * 2013-05-23 2018-02-09 谷歌有限责任公司 System and method for generating threedimensional model using the position data of sensing
US9652895B2 (en) * 2014-03-06 2017-05-16 Disney Enterprises, Inc. Augmented reality image transformation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887589A (en) * 2010-06-13 2010-11-17 东南大学 Stereoscopic vision-based real low-texture image reconstruction method
CN104268922A (en) * 2014-09-03 2015-01-07 广州博冠信息科技有限公司 Image rendering method and device

Also Published As

Publication number Publication date
RU2586566C1 (en) 2016-06-10
EP3276578A1 (en) 2018-01-31
WO2016153388A1 (en) 2016-09-29
KR20170134513A (en) 2017-12-06
CN107484428A (en) 2017-12-15
US20180012394A1 (en) 2018-01-11
KR102120046B1 (en) 2020-06-08
EP3276578A4 (en) 2018-11-21

Similar Documents

Publication Publication Date Title
CN107484428B (en) Method for displaying objects
JP6638892B2 (en) Virtual reality based apparatus and method for generating a three-dimensional (3D) human face model using image and depth data
US10255482B2 (en) Interactive display for facial skin monitoring
JP6425780B1 (en) Image processing system, image processing apparatus, image processing method and program
WO2019035155A1 (en) Image processing system, image processing method, and program
CN109978984A (en) Face three-dimensional rebuilding method and terminal device
EP3533218B1 (en) Simulating depth of field
CN106797458A (en) The virtual change of real object
WO2023066120A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111742352B (en) Method for modeling three-dimensional object and electronic equipment
US11080920B2 (en) Method of displaying an object
RU2735066C1 (en) Method for displaying augmented reality wide-format object
WO2023151271A1 (en) Model presentation method and apparatus, and electronic device and storage medium
JP2002083286A (en) Method and device for generating avatar, and recording medium recorded with program therefor
CN115345927A (en) Exhibit guide method and related device, mobile terminal and storage medium
US11120606B1 (en) Systems and methods for image texture uniformization for multiview object capture
CN114066715A (en) Image style migration method and device, electronic equipment and storage medium
JP7476511B2 (en) Image processing system, image processing method and program
CN117315154A (en) Quantifiable face model reconstruction method and system
CN117557722A (en) Reconstruction method and device of 3D model, enhancement realization device and storage medium
CN114581608A (en) Three-dimensional model intelligent construction system and method based on cloud platform
CN116503578A (en) Virtual object generation method, electronic device and computer readable storage medium
JP2023153534A (en) Image processing apparatus, image processing method, and program
CN115375832A (en) Three-dimensional face reconstruction method, electronic device, storage medium, and program product
CN117078827A (en) Method, device and equipment for generating texture map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220412
Address after: Nicosia, Cyprus
Patentee after: DEVAR ENTERTAINMENT Ltd.
Address before: Russian Federation
Patentee before: LIMITED LIABILITY COMPANY "LABORATORY 24"

CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20211029