CN111383343A - Home decoration-oriented augmented reality image rendering and coloring method based on generative adversarial network technology - Google Patents

Home decoration-oriented augmented reality image rendering and coloring method based on generative adversarial network technology

Info

Publication number
CN111383343A
CN111383343A (application CN201811634717.6A)
Authority
CN
China
Prior art keywords
model
coloring
network
augmented reality
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811634717.6A
Other languages
Chinese (zh)
Other versions
CN111383343B (en)
Inventor
吕李娜
刘镇
梅向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Cudatec Co ltd
Original Assignee
Jiangsu Cudatec Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Cudatec Co ltd filed Critical Jiangsu Cudatec Co ltd
Priority to CN201811634717.6A
Publication of CN111383343A
Application granted
Publication of CN111383343B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention aims to provide an augmented reality image rendering and coloring method based on generative adversarial network (GAN) technology and oriented to home decoration design. The coloring method combines high coloring speed with smoothness, and the colored augmented reality target object keeps its colors intact during translation and zooming. The method supports machine learning: it can learn the composition, drawing, and color habits of different designers to form style models, and then renders and colors according to the selected style. A parallel design enables fast rendering and coloring, and fusing the virtual object with the video stream background allows the object to be tracked and positioned, so that coloring happens in real time. A user can simulate decorating a house according to personal preference, and the method can render lifelike three-dimensional home images in a heterogeneous network environment.

Description

Home decoration-oriented augmented reality image rendering and coloring method based on generative adversarial network technology
Technical Field
The invention belongs to the field of computer digital images, and relates to an augmented reality image coloring method based on generative adversarial network technology and oriented to home decoration design.
Background
In augmented reality display software, there are two common coloring approaches: SurfaceView coloring and OpenGL coloring. SurfaceView coloring relies on the platform's application programming interface; its coloring functions are well supported and it can achieve a smooth result. However, when the augmented reality object to be colored is continuously moved and scaled against the background, the SurfaceView coloring speed cannot keep up with the speed of the geometric transformations, causing stutter and degrading the user experience. In particular, during global movement and scaling, every picture must be traversed, colored, and updated on the SurfaceView one by one, so the overall coloring speed drops as the number of images grows and the stutter becomes worse and worse. OpenGL coloring, by contrast, is very fast, on the order of milliseconds, and is widely used in games and animation-effect applications. For picture coloring in particular, texture data can be stored in video memory, so OpenGL coloring consumes almost no time and does not suffer from stutter. However, OpenGL provides no smoothing (rounding) of lines, and colored edges show color spots when the lines are thick. In the prior art, a single coloring method alone can hardly meet the requirements of production and application. In recent years, deep learning has received increasing attention from enterprises and developers. A generative adversarial network can complete the coloring task through the game between its generator network and its discriminator network, but coloring with generative adversarial networks also has its limitation: it requires a significant amount of model pre-training time.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: coloring with a generative adversarial network should combine high coloring speed with smoothness, so that the colored augmented reality target object keeps its color integrity during translation and zooming, and so that lifelike coloring in different styles can be produced according to different requirements. The coloring method supports machine learning: it can learn the composition, drawing, and color habits of different designers to form style models, and render and color according to the chosen style model. The method adopts a parallel design, enabling fast rendering and coloring, and fuses the virtual object with the video stream background so that the object can be tracked and positioned, achieving real-time coloring.
In order to achieve the above object, the present invention provides a home decoration-oriented method for coloring an augmented reality image based on generative adversarial network technology, comprising the following steps:
S101: collecting a real-time video;
S102: scanning the digital marker;
S103: identifying the marker with an augmented reality program;
S104: matching the marker with the three-dimensional virtual object;
S105: adjusting the position of the three-dimensional virtual model according to the position of the marker;
S106: determining the style requirement;
S107: matching against the pre-trained coloring model library;
S108: fusing the virtual object with the video stream background;
S109: coloring the virtual object into the video stream.
In the invention, a first implementation of matching against the pre-trained coloring model library comprises the following steps:
S201: inputting the vertex coordinates of the three-dimensional model to be colored;
S202: placing the three-dimensional model at the three-dimensional scene verification position;
S203: setting the camera angle and field of view;
S204: setting the illumination position, color, and direction parameters;
S205: setting the color parameters of the three-dimensional model;
S206: inputting the colored model into the generative adversarial network model;
S207: storing the model that passes the discriminator network into the pre-trained model library.
In the invention, a second implementation of matching against the pre-trained coloring model library comprises the following steps:
S401: inputting images of different types by different known authors;
S402: feeding the input images into the generative adversarial network model;
S403: storing the model that passes the discriminator network into the pre-trained model library.
In the invention, the method for fusing the virtual object and the video stream background comprises the following steps:
S601: identifying the contour of the video stream background object with a recognition program;
S602: extracting the position coordinates of the video stream background object;
S603: displaying the virtual object overlaid on the video stream background object, using the position coordinates as the reference point.
The home decoration design-oriented augmented reality image coloring method based on generative adversarial network technology according to the invention has the following characteristics and beneficial effects:
1. The method colors in milliseconds, achieving fast coloring;
2. By fusing the virtual object with the video stream background, the method can track and position the object, achieving real-time coloring;
3. The method can color in the styles of different authors according to different requirements;
4. The method uses a generative adversarial network to pre-train the coloring models, which speeds up invocation compared with manual feature-based coloring.
Drawings
Fig. 1 is a flowchart of the augmented reality image coloring method based on generative adversarial network technology of the invention.
Fig. 2 is a flowchart of the first implementation of matching against the pre-trained coloring model library in the invention.
Fig. 3 is a flowchart of inputting the colored model into the generative adversarial network model in the invention.
Fig. 4 is a flowchart of the second implementation of matching against the pre-trained coloring model library in the invention.
Fig. 5 is a flowchart of feeding the input images into the generative adversarial network model in the invention.
Fig. 6 is a flowchart of the method for fusing the virtual object with the video stream background in the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The home decoration-oriented method for coloring the augmented reality image based on generative adversarial network technology comprises the following steps, as shown in figure 1:
S101, collecting a real-time video stream with a video capture device;
S102, scanning the digital marker;
S103, identifying the marker with the augmented reality program and preliminarily determining the position and orientation of the three-dimensional virtual object;
S104, matching the marker with the three-dimensional virtual object;
S105, adjusting the position of the three-dimensional virtual model again according to the position of the marker;
S106, determining the style requirement;
S107, matching against the pre-trained coloring model library;
S108, fusing the virtual object with the video stream background using the contour method, determining the position, and tracking and positioning the object;
S109, coloring the virtual object into the video stream.
The coloring method provided by the invention accelerates coloring by pre-training: compared with invocation based on storing textures in video memory, matching against the pre-trained coloring model library allows an existing model to be called up quickly.
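To make the flow of steps S101-S109 concrete, the sketch below shows, in Python with OpenCV, how such a pipeline could be wired together. The marker detector, style model library, and overlay routine are hypothetical stand-ins introduced only for illustration; the patent does not prescribe these function names or this implementation.

```python
import cv2  # pip install opencv-python

def detect_marker(frame):
    """Hypothetical stand-in for steps S102-S103: return (found, position).
    A real system might use cv2.aruco markers or a feature-based recognizer."""
    h, w = frame.shape[:2]
    return True, (w // 2, h // 2)  # placeholder: pretend the marker sits at the center

def match_style_model(style_name, library):
    """Steps S106-S107: pick a pre-trained coloring model for the requested style."""
    return library.get(style_name, library["default"])

def color_and_overlay(frame, model, position):
    """Steps S108-S109: color the virtual object with the chosen model and composite
    it onto the frame at the marker position (a plain rectangle here, so that the
    sketch stays self-contained)."""
    x, y = position
    cv2.rectangle(frame, (x - 40, y - 40), (x + 40, y + 40), model["color"], -1)
    return frame

model_library = {"default": {"color": (0, 128, 255)}}  # stand-in for the GAN model library

capture = cv2.VideoCapture(0)  # S101: collect the real-time video stream
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    found, position = detect_marker(frame)                   # S102-S105: scan and locate the marker
    if found:
        model = match_style_model("default", model_library)  # S106-S107
        frame = color_and_overlay(frame, model, position)    # S108-S109
    cv2.imshow("AR coloring", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```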
In the present invention, a first implementation method for matching a pre-trained coloring model library includes the following steps, as shown in fig. 2:
S201, inputting the vertex coordinates of the three-dimensional model to be colored;
S202, placing the three-dimensional model at the three-dimensional scene verification position;
S203, setting the camera angle and field of view in the scene;
S204, setting the illumination position, color, and direction parameters;
S205, setting the color parameters of the three-dimensional model;
S206, inputting the colored model into the generative adversarial network model;
S207, storing the model that passes the discriminator network into the pre-trained model library.
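As a rough illustration of the sample assembled in steps S201-S205, the following Python sketch collects the scene and coloring parameters into one record before it would be handed to the generative adversarial network (S206). All field names and default values are assumptions made for the example, not the patent's data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ColoringTrainingSample:
    """One pre-training sample assembled in steps S201-S205.
    Field names and defaults are illustrative assumptions only."""
    vertices: List[Tuple[float, float, float]]                       # S201: vertex coordinates of the model
    scene_position: Tuple[float, float, float]                       # S202: placement for scene verification
    camera_angle_deg: float = 45.0                                   # S203: camera angle
    camera_fov_deg: float = 60.0                                     # S203: field of view
    light_position: Tuple[float, float, float] = (0.0, 3.0, 0.0)     # S204: illumination position
    light_color: Tuple[float, float, float] = (1.0, 1.0, 1.0)        # S204: illumination color
    light_direction: Tuple[float, float, float] = (0.0, -1.0, 0.0)   # S204: illumination direction
    model_color: Tuple[float, float, float] = (0.8, 0.6, 0.4)        # S205: color of the model

sample = ColoringTrainingSample(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    scene_position=(0.0, 0.0, -2.0),
)
# The model colored under these parameters is what would be fed to the generative
# adversarial network in S206 and, if accepted by the discriminator, stored in the
# pre-trained model library in S207.
```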
Inputting the colored model into the generative adversarial network model comprises the following steps, as shown in fig. 3:
S301, inputting the colored three-dimensional model;
S302, storing it into the discriminator network model library, where the discriminator network holds coloring models with set parameters;
S303, the generator network outputs a single three-dimensional model;
S304, the discriminator network calculates the similarity value between the generated model and the model library;
S305, if the similarity value between the generated model and the model library is greater than or equal to the preset threshold, the generated colored three-dimensional model is judged to be close to a real model; if the similarity value is smaller than the preset threshold, the three-dimensional model colored by the generator network is judged not to be real, and steps S303 and S304 are repeated until the coloring model produced by the generator network is judged to be real;
S306, outputting the three-dimensional coloring model that passes the discriminator network.
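A minimal sketch of the accept/reject loop in steps S303-S305 follows, assuming the discriminator can be reduced to a scalar similarity score against the stored library. The similarity measure, the threshold value, and the generator "update" are illustrative placeholders rather than the patent's networks.

```python
import random

SIMILARITY_THRESHOLD = 0.9  # preset threshold of step S305 (value chosen for illustration)

# S302: coloring models with set parameters, stored in the discriminator's library.
reference_library = [[0.5] * 30, [0.7] * 30]

def similarity_to_library(candidate, library):
    """Stand-in for the discriminator score of S304: one minus the mean absolute
    difference to the closest stored reference coloring, so 1.0 means identical."""
    def score(ref):
        return 1.0 - sum(abs(c - r) for c, r in zip(candidate, ref)) / len(ref)
    return max(score(ref) for ref in library)

rng = random.Random(0)
candidate = [rng.random() for _ in range(30)]  # S303: first output of the generator network

attempts = 1
while similarity_to_library(candidate, reference_library) < SIMILARITY_THRESHOLD:  # S304-S305
    closest = min(reference_library,
                  key=lambda ref: sum(abs(c - r) for c, r in zip(candidate, ref)))
    # Stand-in for one generator update: nudge the output toward what the
    # discriminator considers real, then repeat S303-S304.
    candidate = [c + 0.2 * (t - c) for c, t in zip(candidate, closest)]
    attempts += 1

print(f"coloring model accepted after {attempts} attempts (S306)")
```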
In the present invention, the second implementation method for matching the pre-trained coloring model library includes the following steps, as shown in fig. 4:
S401, inputting images of different types by different known authors;
S402, feeding the input images into the generative adversarial network model;
S403, storing the model that passes the discriminator network into the pre-trained model library.
Feeding the input images into the generative adversarial network model comprises the following steps, as shown in fig. 5:
S501, inputting a finished image model;
S502, storing it into the discriminator network model library, where the discriminator network holds coloring models with set parameters;
S503, the generator network outputs a single model;
S504, the discriminator network calculates the similarity value between the generated model and the model library;
S505, if the similarity value between the generated model and the model library is greater than or equal to the preset threshold, the generated colored image model is judged to be close to a real model; if the similarity value is smaller than the preset threshold, the image model colored by the generator network is judged not to be real, and steps S503 and S504 are repeated until the coloring model produced by the network is judged to be real.
The pre-trained coloring model library provided by the invention is realized with a deep-learning-based generative adversarial network. The coloring method provided by the invention can fuse with the scene and track the target in real time.
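The patent does not disclose a specific network architecture, so the following is only a generic, minimal generative adversarial training loop in PyTorch, shown to illustrate the generator-discriminator game on which the pre-trained coloring model library rests. The layer sizes, optimizer settings, and stand-in data are assumptions for the example.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise vector to a small colored image, standing in for the
    network that proposes colored models."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),  # 32x32 RGB output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class Discriminator(nn.Module):
    """Scores how close a colored image is to the reference library (real samples)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

generator, discriminator = Generator(), Discriminator()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Stand-in for renderings of designer-colored reference models.
real_batch = torch.rand(8, 3, 32, 32) * 2 - 1

for step in range(100):
    # Discriminator update: tell real colorings apart from generated ones.
    fake = generator(torch.randn(8, 64)).detach()
    loss_d = bce(discriminator(real_batch), torch.ones(8, 1)) + \
             bce(discriminator(fake), torch.zeros(8, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator accept its output.
    loss_g = bce(discriminator(generator(torch.randn(8, 64))), torch.ones(8, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```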
The method for fusing the virtual object and the video stream background provided by the invention comprises the following steps, as shown in fig. 6:
S601, identifying the contour of the video stream background object with a recognition program;
S602, extracting the position coordinates of the video stream background object;
S603, overlaying the virtual object on the video stream background object, using the position coordinates as the reference point.
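As a rough illustration of steps S601-S603, the Python/OpenCV sketch below segments the largest contour in a frame, takes its bounding box as the reference point, and pastes a rendered virtual object there. The Otsu thresholding and the simple paste are assumptions chosen to keep the example self-contained; the patent does not specify the recognition program.

```python
import cv2
import numpy as np

def find_background_object(frame_bgr):
    """S601-S602: find the largest contour in the frame and return its bounding
    box as the reference point."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x, y, w, h

def overlay_virtual_object(frame_bgr, virtual_bgr, reference):
    """S603: paste the rendered virtual object onto the frame at the reference point."""
    x, y, w, h = reference
    frame_bgr[y:y + h, x:x + w] = cv2.resize(virtual_bgr, (w, h))
    return frame_bgr

# Synthetic frame and virtual object so the sketch runs without camera input.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.rectangle(frame, (100, 80), (220, 180), (200, 200, 200), -1)  # pretend background object
virtual = np.full((64, 64, 3), (0, 128, 255), dtype=np.uint8)      # pretend colored virtual object

reference = find_background_object(frame)                # S601-S602
if reference is not None:
    fused = overlay_virtual_object(frame, virtual, reference)  # S603
```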
in addition to the above-described embodiments, the present invention may have other embodiments, and any technical solutions formed by equivalent replacement or equivalent transformation are within the scope of the present invention.

Claims (6)

1. An augmented reality image coloring method based on generative adversarial network technology and oriented to home decoration design, characterized by comprising the following steps:
s101: collecting a real-time video;
s102: scanning the digital marker;
s103: identifying the marker by an augmented reality program;
s104: matching the marker with the three-dimensional virtual object;
s105: adjusting the position of the three-dimensional virtual model according to the position of the marker;
s106: determining style requirements;
s107: matching a pre-training coloring model library;
s108: fusing the virtual object with the video stream background;
s109: the virtual object is colored into a video stream.
2. The method for coloring augmented reality images based on generative adversarial network technology for home decoration design according to claim 1, wherein matching against the pre-trained coloring model library comprises the following steps:
s201: inputting vertex coordinates of a three-dimensional model to be colored;
s202: placing the three-dimensional model at a position for three-dimensional scene verification;
s203: setting the angle and the visual angle of a camera;
s204: setting parameters of illumination position, color and direction;
s205: setting color parameters of the three-dimensional model;
s206: inputting the colored model to generate a confrontation network model;
s207: and storing the model passing through the discrimination network into a pre-training model library.
3. The method for coloring augmented reality images based on generative adversarial network technology for home decoration design according to claim 2, wherein inputting the colored model into the generative adversarial network model comprises the following steps:
s301: inputting the colored three-dimensional model;
s302: storing the color model into a discrimination network model library, wherein the discrimination network stores the coloring model with set parameters;
s303: generating a three-dimensional model after network output generation;
s304: the discrimination network calculates the similarity value of the generated model and the model base;
s305: if the similarity value of the generated model and the model library is larger than or equal to a preset threshold value, judging that the generated colored three-dimensional model is close to a real model; if the similarity value is smaller than a preset threshold value, judging to generate a non-true model of the three-dimensional model colored by the network; repeating the steps S303 and S304 until the generated coloring model given by the network is judged to be real;
s306: and outputting the three-dimensional coloring model passing through the discrimination network.
4. The method for coloring augmented reality images based on generative adversarial network technology for home decoration design according to claim 1, wherein the second method for matching against the pre-trained coloring model library comprises the following steps:
s401: inputting different types of images of different known authors;
s402: generating an antagonistic network model from the input image;
s403: and storing the model passing through the discrimination network into a pre-training model library.
5. The method for coloring augmented reality images based on generative adversarial network technology for home decoration design according to claim 1, wherein feeding the input images into the generative adversarial network model comprises the following steps:
s501: inputting a finished product image model;
s502: storing the color model into a discrimination network model library, wherein the discrimination network stores the coloring model with set parameters;
s503: generating an image model after network output generation;
s504: the discrimination network calculates the similarity value of the generated model and the model base;
s505: if the similarity value of the generated model and the model library is larger than or equal to a preset threshold value, judging that the generated colored image model is close to a real model; if the similarity value is smaller than the preset threshold value, judging that the generated network coloring image model is not true, and repeating the steps S503 and S504 until the generated coloring model given by the network is true.
6. The method for coloring augmented reality images based on generative adversarial network technology for home decoration design according to claim 1, wherein fusing the virtual object with the video stream background comprises the following steps:
s601: identifying the outline of the background object of the video stream by using an identification program;
s602: extracting the position coordinates of the background object of the video stream;
s603: and displaying the virtual object on the background object of the video stream in an overlapping way by taking the position coordinates as a reference point.
CN201811634717.6A 2018-12-29 2018-12-29 Home decoration design-oriented augmented reality image rendering and coloring method based on generative adversarial network technology Active CN111383343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811634717.6A CN111383343B (en) 2018-12-29 2018-12-29 Home decoration design-oriented augmented reality image rendering and coloring method based on generative adversarial network technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811634717.6A CN111383343B (en) 2018-12-29 2018-12-29 Home decoration design-oriented augmented reality image rendering and coloring method based on generative adversarial network technology

Publications (2)

Publication Number Publication Date
CN111383343A true CN111383343A (en) 2020-07-07
CN111383343B CN111383343B (en) 2024-01-16

Family

ID=71220562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811634717.6A Active CN111383343B (en) 2018-12-29 2018-12-29 Home decoration design-oriented augmented reality image rendering and coloring method based on generative adversarial network technology

Country Status (1)

Country Link
CN (1) CN111383343B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN106814457A (en) * 2017-01-20 2017-06-09 杭州青杉奇勋科技有限公司 Augmented reality glasses and the method that household displaying is carried out using the glasses
CN108805648A (en) * 2017-04-19 2018-11-13 苏州宝时得电动工具有限公司 Virtual reality system and its control method
CN108597030A (en) * 2018-04-23 2018-09-28 新华网股份有限公司 Effect of shadow display methods, device and the electronic equipment of augmented reality AR
CN108711138A (en) * 2018-06-06 2018-10-26 北京印刷学院 A kind of gray scale picture colorization method based on generation confrontation network

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223186A (en) * 2021-07-07 2021-08-06 江西科骏实业有限公司 Processing method, equipment, product and device for realizing augmented reality
CN113223186B (en) * 2021-07-07 2021-10-15 江西科骏实业有限公司 Processing method, equipment, product and device for realizing augmented reality
CN113379869A (en) * 2021-07-23 2021-09-10 浙江大华技术股份有限公司 License plate image generation method and device, electronic equipment and storage medium
CN113379869B (en) * 2021-07-23 2023-03-24 浙江大华技术股份有限公司 License plate image generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111383343B (en) 2024-01-16

Similar Documents

Publication Publication Date Title
US10325407B2 (en) Attribute detection tools for mixed reality
WO2019223463A1 (en) Image processing method and apparatus, storage medium, and computer device
CN106157359B (en) Design method of virtual scene experience system
CN110163942B (en) Image data processing method and device
CN104050859A (en) Interactive digital stereoscopic sand table system
CN108876886B (en) Image processing method and device and computer equipment
US20020149581A1 (en) Method for occlusion of movable objects and people in augmented reality scenes
EP3533218B1 (en) Simulating depth of field
CN110442245A (en) Display methods, device, terminal device and storage medium based on physical keyboard
CN104331164A (en) Gesture movement smoothing method based on similarity threshold value analysis of gesture recognition
CN110110412A (en) House type full trim simulation shows method and display systems based on BIM technology
CN111383343B (en) Home decoration design-oriented augmented reality image rendering coloring method based on generation countermeasure network technology
Wang et al. Wuju opera cultural creative products and research on visual image under VR technology
Zhang et al. The Application of Folk Art with Virtual Reality Technology in Visual Communication.
CN109712246B (en) Augmented reality image coloring method based on generation countermeasure network technology
US20140306953A1 (en) 3D Rendering for Training Computer Vision Recognition
CN107871338B (en) Real-time, interactive rendering method based on scene decoration
US11600041B2 (en) Computing illumination of an elongated shape having a noncircular cross section
CN112435316B (en) Method and device for preventing mold penetration in game, electronic equipment and storage medium
CN114942737A (en) Display method, display device, head-mounted device and storage medium
Tao A VR/AR-based display system for arts and crafts museum
JP2023512129A (en) How to infer the fine details of skin animation
KR20100138193A (en) The augmented reality content providing system and equipment for the user interaction based on touchscreen
CN114066715A (en) Image style migration method and device, electronic equipment and storage medium
CN114185431B (en) Intelligent media interaction method based on MR technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant