CN116912648A - Method, device, equipment and storage medium for generating material parameter identification model

Info

Publication number
CN116912648A
Authority
CN
China
Prior art keywords
image
feature map
parameter
target
material parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310901269.6A
Other languages
Chinese (zh)
Inventor
俞江
陈有鑫
郑顺默
吴龙海
陈洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd
Priority to CN202310901269.6A
Publication of CN116912648A
Legal status: Pending

Classifications

    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V 10/765: Recognition using classification, e.g. of video objects, using rules for classification or partitioning the feature space
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Recognition using pattern recognition or machine learning using neural networks
    • G06V 20/60: Scenes; Scene-specific elements; Type of objects

Abstract

The present disclosure relates to a method, apparatus, device and storage medium for generating a material parameter identification model. The method comprises the steps of: acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; and training according to the target image pair, the target text and the material parameter label of the first object to obtain the material parameter identification model. A multi-modal model design can thus be realized based on the feature maps of the target image pair and the target text, the training samples are enriched, and the identification precision of the material parameter identification model is improved, so that material parameters can be accurately analyzed based on the material parameter identification model.

Description

Method, device, equipment and storage medium for generating material parameter identification model
Technical Field
The present disclosure relates to the field of computers, and in particular, to the field of computer vision and material identification technologies, and more particularly, to a method, apparatus, device, and storage medium for generating a material parameter identification model.
Background
Material parameters can be used to characterize various properties of a material and are widely used. If a material is analyzed and accurate material parameters are obtained, the performance of the material can be exploited to the fullest.
At present, the analysis of materials focuses mainly on identifying the class of a material. Although material parameters are sometimes analyzed, analyzing material parameters by means of computer vision has rarely been studied, so the reliability of such analysis results cannot be guaranteed. Moreover, the parameters of different classes of materials, and even of materials of the same class, differ; thus, a method of accurately identifying material parameters is needed.
Disclosure of Invention
The embodiment of the disclosure provides a method, a device, equipment and a storage medium for generating a material parameter identification model.
In a first aspect, an embodiment of the present disclosure proposes a method for generating a material parameter identification model, including: acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; and training according to the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model.
In some embodiments, training according to the target image pair, the target text, and the material parameter label of the first object to obtain a material parameter identification model includes: encoding the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text; fusing the feature map of the target image pair and the feature map of the target text to obtain a fused feature map; and training based on the fused feature map and the material parameter label of the first object to obtain the material parameter identification model.
In some embodiments, fusing the feature map of the target image pair with the feature map of the target text to obtain a fused feature map includes: and fusing the feature map of the target image pair and the feature map of the target text by adopting an attention mechanism to obtain a fused feature map.
In some embodiments, the target text further comprises: a class of material of the first object.
In some embodiments, the feature map of the target image pair and the feature map of the target text are fused using an attention mechanism to obtain a fused feature map, comprising:
fusing the feature map of the first image and the feature map of the second image to obtain a first feature map; fusing the feature map of the environment parameter and the feature map of the material class of the first object to obtain a second feature map;
Performing feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map;
and fusing the first cross feature map and the second cross feature map to obtain a fused feature map.
In some embodiments, fusing the feature map of the first image and the feature map of the second image to obtain a first feature map; and fusing the feature map of the environmental parameter and the feature map of the material class of the first object to obtain a second feature map, including:
fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the feature map of the environment parameter and the feature map of the material class of the first object to obtain a second fused feature map; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map;
and/or,
fusing the first cross feature map and the second cross feature map to obtain a fused feature map, including: fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain a fused feature map.
In some embodiments, the material class of the first object is determined based on the steps of: projecting preset light on the surface of a first object through a projection device, and acquiring a third image through a shooting device; and inputting the third image into a preset material classification model to obtain the material class of the first object.
In some embodiments, inputting the third image into a preset material classification model to obtain a material class of the first object includes: acquiring a fourth image from the third image according to a preset mask area, wherein the fourth image comprises an image of the first object; and inputting the fourth image into a preset material classification model to obtain the material class of the first object.
In some embodiments, the mask area is determined based on the steps of: projecting the first image on the surface of the first object by a projection device, and obtaining a second image by a shooting device; image segmentation is carried out on the second image to obtain a segmented image, wherein the segmented image is an image comprising a first object; comparing the segmented image with the first image to determine a projection area; a mask region is generated from the projection region.
In some embodiments, the environmental parameters are determined based on the steps of: environmental parameters are acquired by sensors.
In a second aspect, an embodiment of the present disclosure provides a method for identifying a material parameter, including: acquiring a first image pair, an environment parameter of a second object in the first image pair, and a material class of the second object, wherein the first image pair comprises a third image and a fourth image, and the fourth image is an image obtained by projecting the third image onto the surface of the second object; and inputting the first image pair, the environment parameter of the second object and the material class of the second object into a pre-trained material parameter identification model to obtain a material parameter of the second object.
In some embodiments, the method of identifying a material parameter further comprises: and determining the material loss of the second object according to the material parameters of the second object.
In some embodiments, the method of identifying a material parameter further comprises: determining display parameters according to the material parameters of the second object;
and optimally displaying the image to be enhanced according to the display parameters.
In some embodiments, the image to be enhanced for display includes: an image projected and presented on the surface of the second object; or a virtual reality image in virtual reality.
In a third aspect, an embodiment of the present disclosure proposes an apparatus for generating a material parameter identification model, including: an image acquisition module configured to acquire a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; and a model training module configured to train according to the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model.
In some embodiments, the model training module comprises:
the encoding unit is configured to encode the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text;
the fusion unit is configured to fuse the feature map of the target image pair and the feature map of the target text to obtain a fused feature map;
and the training unit is configured to train based on the fusion characteristic diagram and the material parameter label of the first object to obtain a material parameter identification model.
In some embodiments, the fusion unit is further configured to: and fusing the feature map of the target image pair and the feature map of the target text by adopting an attention mechanism to obtain a fused feature map.
In some embodiments, the target text further comprises: a class of material of the first object.
In some embodiments, the fusion unit comprises:
the first fusion subunit is configured to fuse the feature map of the first image and the feature map of the second image to obtain a first feature map, and fuse the feature map of the environment parameter and the feature map of the material class of the first object to obtain a second feature map;
the feature intersection subunit is configured to perform feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map;
And the second fusion subunit is configured to fuse the first cross feature map and the second cross feature map to obtain a fusion feature map.
In some embodiments, the first fusion subunit is further configured to:
fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the feature map of the environment parameter and the feature map of the material class of the first object to obtain a second fused feature map; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map;
and/or,
a second fusion subunit further configured to: fusing the first cross feature map and the second cross feature map to obtain a fused feature map, including: fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain a fused feature map.
In some embodiments, the apparatus for generating a material parameter identification model further comprises:
an image acquisition module configured to project preset light onto a surface of a first object by a projection device and acquire a third image by a photographing device;
The category identification module is configured to input a third image into a preset material classification model to obtain a material category of the first object.
In some embodiments, the category identification module is further configured to: acquiring a fourth image from the third image according to a preset mask area, wherein the fourth image comprises an image of the first object; and inputting the fourth image into a preset material classification model to obtain the material class of the first object.
In some embodiments, the apparatus for generating a material parameter identification model further comprises:
an image acquisition module configured to project a first image on a surface of a first object by a projection device and obtain a second image by a photographing device;
the image segmentation module is configured to carry out image segmentation on the second image to obtain a segmented image, wherein the segmented image is an image comprising a first object;
the image comparison module is configured to compare the segmented image with the first image and determine a projection area;
the region generation module is configured to generate a mask region according to the projection region.
In some embodiments, the apparatus for generating a material parameter identification model further comprises: and a parameter acquisition module configured to acquire the environmental parameter by the sensor.
In a fourth aspect, an embodiment of the present disclosure provides a device for identifying a material parameter, including: the image acquisition module is configured to acquire a first image pair, an environmental parameter of a second object in the first image pair and a material category of the second object, wherein the first image pair comprises a third image and a fourth image, and the fourth image is an image obtained by projecting the third image on the surface of the second object; the parameter identification module is configured to input the first image pair, the environmental parameter of the second object and the material category of the second object into a pre-trained material parameter identification model to obtain the material parameter of the second object.
In some embodiments, the identification means of the material parameter further comprises: and a loss determination module configured to determine a loss of material of the second object based on the material parameter of the second object.
In some embodiments, the identification means of the material parameter further comprises: a display parameter determination module configured to determine a display parameter based on a material parameter of the second object; and the optimization adjustment module is configured to optimally display the image to be enhanced and displayed according to the display parameters.
In some embodiments, the image to be enhanced for display includes: an image projected and presented on the surface of the second object; or a virtual reality image in virtual reality.
In a fifth aspect, an embodiment of the present disclosure proposes an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first or second aspect.
In a sixth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method as described in the first or second aspect.
The method, apparatus, device and storage medium for generating a material parameter identification model provided by the embodiments of the present disclosure acquire a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; and train according to the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model. A multi-modal model design can thus be realized based on the feature maps of the target image pair and the target text, the training samples are enriched, and the identification precision of the material parameter identification model is improved, so that material parameters can be accurately analyzed based on the material parameter identification model.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings. The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram to which the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method of generating a material parameter identification model according to the present disclosure;
FIG. 3 is a flow chart of one embodiment of a method of generating a material parameter identification model according to the present disclosure;
FIG. 4 is a flow chart of one embodiment of a method of generating a material parameter identification model according to the present disclosure;
FIG. 5 is a schematic illustration of identifying a material class of a first object;
FIG. 6 is a schematic diagram of generating a projection area;
FIG. 7 is a schematic diagram of capturing projected content;
FIG. 8 is a flow chart of one embodiment of a method of generating a material parameter identification model according to the present disclosure;
FIG. 9 is a schematic diagram of a material parameter identification model;
FIG. 10 is a flow chart of one embodiment of a method of identifying material parameters according to the present disclosure;
FIG. 11 is an application scenario diagram of a method of identifying material parameters according to the present disclosure;
FIG. 12 is an application scenario diagram of a method of identifying material parameters according to the present disclosure;
FIG. 13 is an application scenario diagram of a method of identifying material parameters according to the present disclosure;
FIG. 14 is an application scenario diagram of a method of identifying material parameters according to the present disclosure;
FIG. 15 is a schematic structural view of one embodiment of an apparatus for generating a material parameter identification model according to the present disclosure;
FIG. 16 is a schematic structural view of one embodiment of an identification device of material parameters according to the present disclosure;
FIG. 17 is a block diagram of an electronic device for implementing a method of generating a material parameter identification model or a method of identifying material parameters in accordance with an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of methods and apparatus for generating a material parameter identification model or identification of material parameters of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, a network 103, and a server 104. The network 103 is the medium used to provide communication links between the terminal devices 101, 102 and the server 104. The network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 104 via the network 103 using the terminal device 101, 102 to receive or send the target image pair, the material parameters of the first object, etc. Various client applications, such as shooting software, etc., may be installed on the terminal devices 101, 102.
The terminal devices 101 and 102 may be hardware or software. When the terminal devices 101, 102 are hardware, they may be various electronic devices, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102 are software, they may be installed in the above-described electronic devices, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. This is not specifically limited herein.
The server 104 may provide various services. For example, the server 104 may train based on the target image pair, the target text and the material parameter label of the first object acquired from the terminal devices 101, 102 to obtain a material parameter identification model.
The server 104 may be hardware or software. When the server 104 is hardware, it may be implemented as a distributed server cluster formed by a plurality of servers, or as a single server. When the server 104 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. This is not specifically limited herein.
It should be noted that the method for generating a material parameter identification model or the method for identifying a material parameter provided by the embodiments of the present disclosure is generally executed by the server 104; accordingly, the apparatus for generating a material parameter identification model or the apparatus for identifying a material parameter is generally disposed in the server 104.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of generating a material parameter identification model according to the present disclosure is shown. The method of generating a material parameter identification model may comprise the steps of:
step 201, acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image on the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing parameters of materials of the surface of the first object.
In this embodiment, an execution subject (e.g., the server 104 shown in fig. 1) of the method of generating a material parameter identification model may acquire a target image pair, a target text, and a material parameter tag of a first object from a terminal device (e.g., the terminal devices 101, 102 shown in fig. 1); or, the above-described execution subject (e.g., the terminal devices 101, 102 shown in fig. 1) may acquire the target image pair, the target text, and the material parameter tag of the first object locally.
Here, the material parameter tag of the first object may be used to characterize a parameter of the material to which the surface of the first object belongs.
In this embodiment, the identification of the material parameters may be used to determine material loss; after the material loss is determined, corresponding maintenance measures can be adopted according to the material loss so as to delay the loss speed of the material; the identification of the material parameters may also be used to optimize images displayed on the material, images in virtual reality, etc., to enhance the user's look and feel.
In one example, the first image may be an image corresponding to the projected picture (i.e., the picture of the projection source).
In one example, the second image may be an image resulting from the projection of the first image onto the surface of the first object by the projection device.
Correspondingly, in this example, a first image is projected on the surface of the first object, and an image presented on the surface of the first object, i.e. a second image, is acquired. For example, the image on the surface of the first object is acquired by an acquisition module in the first object, where the acquisition module may be a module with functions of image acquisition, scanning, recording, or the like.
Here, the projection device may be a device having a projection function, such as a projector. The surface of the first object may be a plane used for the projected presentation of the first image.
It should be noted that the acquiring module in the first object may be an image acquiring module in the display.
Correspondingly, in this example, the second image may also be acquired by a photographing device after the first image is projected onto the surface of the first object. The photographing device may be a device having a photographing function, for example, a video camera, a still camera, a video recorder, or the like. In actual shooting, the photographing device captures both the image presented on the surface of the first object and the background in which the first object is located; to preserve the recognition precision of the model, the first object can be segmented from the second image in subsequent processing.
It should be noted that the first object may be an object for presenting the first image, for example, a projection screen, a curtain, a display, a touch screen, a display screen of an electronic device, an electronic whiteboard, a wall, or the like.
Here, the target text may include an environment parameter, which may be a parameter of the environment in which the first object is located, e.g., a light parameter, a temperature parameter, a humidity parameter, etc.
Step 202, training according to the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model.
In this embodiment, the execution body may train an initial material parameter identification model by using the target image pair, the target text and the material parameter label of the first object as training samples, to obtain a trained material parameter identification model. The material parameter identification model can be used to identify material parameters, such as color parameters of multiple colors, reflectivity, refractive index, gain, and viewing angle.
It should be noted that the initial material parameter identification model may be an untrained model, or a model at any stage before training is completed. The completion of training may refer to the point at which the trained material parameter identification model is obtained in step 202.
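As an illustrative, non-limiting sketch of what the training in step 202 could look like in code (a PyTorch regression setup is assumed here; the placeholder head, dimensions, loss and optimizer are assumptions of this sketch, not the implementation of the present disclosure):

```python
import torch
import torch.nn as nn

# Placeholder head standing in for the material parameter identification model:
# it maps a joint feature of one target image pair plus target text to a fixed
# number of material parameters (e.g. reflectivity, refractive index, gain,
# viewing angle, color parameters). Dimensions are illustrative.
model = nn.Sequential(
    nn.Linear(512, 128),
    nn.ReLU(),
    nn.Linear(128, 5),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # material parameter labels treated as regression targets

def train_step(joint_features: torch.Tensor, material_param_label: torch.Tensor) -> float:
    """joint_features: (B, 512) features of a batch of image pairs and texts;
    material_param_label: (B, 5) material parameter labels of the first object."""
    optimizer.zero_grad()
    pred = model(joint_features)
    loss = loss_fn(pred, material_param_label)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the placeholder head would be replaced by the full multi-modal model built from the encoding and fusion steps described in the embodiments below.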
The method for generating a material parameter identification model provided by this embodiment of the present disclosure first acquires a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; and then trains according to the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model. A multi-modal model design can thus be realized based on the feature maps of the target image pair and the target text, the training samples are enriched, and the identification precision of the material parameter identification model is improved, so that material parameters can be accurately analyzed based on the material parameter identification model.
With continued reference to FIG. 3, a flow 300 of one embodiment of a method of generating a material parameter identification model according to the present disclosure is shown. The method of generating a material parameter identification model may comprise the steps of:
step 301, acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image on the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of a material to which the surface of the first object belongs.
In this embodiment, an execution subject (e.g., the server 104 shown in fig. 1) of the method of generating a material parameter identification model may acquire a target image pair, a target text, and a material parameter tag of a first object from a terminal device (e.g., the terminal devices 101, 102 shown in fig. 1); or, the above-described execution subject (e.g., the terminal devices 101, 102 shown in fig. 1) may acquire the target image pair, the target text, and the material parameter tag of the first object locally.
It should be noted that, step 301 corresponds to step 201 in the foregoing embodiment, and the specific implementation may refer to the foregoing description of step 201, which is not repeated herein.
Step 302, encoding the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text.
In this embodiment, the execution body may perform feature extraction on the target image pair by using image encoding corresponding to the target image pair, so as to obtain a feature map of the target image pair; and extracting the characteristics of the target text by adopting the text codes corresponding to the target text so as to obtain the characteristic diagram of the target text.
Here, the feature map of the target image pair may include a feature map of the first image and a feature map of the second image.
Here, the feature map of the target text may be used to characterize all features of the target text, and the target text may include an environmental parameter, which may be an environmental parameter in which the first object is located, for example, a light parameter, a temperature parameter, a humidity parameter, and the like.
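The present disclosure does not prescribe concrete encoders; purely as a sketch under assumptions, the image pair could be encoded with a shared convolutional backbone and the target text with a small MLP over numeric environment parameters (the torchvision backbone, input dimensions and output dimensions below are assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

class ImagePairEncoder(nn.Module):
    """Encodes the first and second images of the target image pair with a shared CNN backbone."""
    def __init__(self, out_dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN backbone could be used
        backbone.fc = nn.Identity()               # keep the 512-d pooled features
        self.backbone = backbone
        self.proj = nn.Linear(512, out_dim)

    def forward(self, first_img: torch.Tensor, second_img: torch.Tensor):
        # one feature vector per image of the target image pair
        return self.proj(self.backbone(first_img)), self.proj(self.backbone(second_img))

class TextEncoder(nn.Module):
    """Encodes the target text, represented here as numeric environment parameters
    (e.g. light, temperature, humidity); a token-based text encoder could be used instead."""
    def __init__(self, in_dim: int = 3, out_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, env_params: torch.Tensor) -> torch.Tensor:
        return self.mlp(env_params)
```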
Step 303, fusing the feature map of the target image pair and the feature map of the target text to obtain a fused feature map.
In this embodiment, the executing body may perform feature stitching on the feature map of the target image pair and the feature map of the target text, so as to obtain a fused feature map. The fused feature map may be a feature map obtained by feature fusion of a feature map of the target image pair and a feature map of the target text.
In one example, feature fusion may include the following: realizing feature fusion between the feature map of the target image pair and the feature map of the target text through a fully connected layer in a preset neural network; or fusing the feature map of the target image pair and the feature map of the target text through an attention mechanism.
It should be noted that, in this embodiment, any manner of fusing the feature map of the target image pair and the feature map of the target text may be included in the protection scope of the present disclosure.
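A minimal sketch of the two fusion manners mentioned above, i.e. fusion through a fully connected layer and fusion through an attention mechanism; the batch size and feature dimensions are assumptions:

```python
import torch
import torch.nn as nn

feat_img = torch.randn(8, 256)  # feature of the target image pair (batch of 8)
feat_txt = torch.randn(8, 256)  # feature of the target text

# Manner 1: concatenate the two features and fuse them through a fully connected layer.
fc_fusion = nn.Linear(256 + 256, 256)
fused_fc = fc_fusion(torch.cat([feat_img, feat_txt], dim=-1))

# Manner 2: fuse through an attention mechanism; here standard multi-head attention
# with the image feature attending to the text feature.
attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
fused_attn, _ = attn(query=feat_img.unsqueeze(1),
                     key=feat_txt.unsqueeze(1),
                     value=feat_txt.unsqueeze(1))
fused_attn = fused_attn.squeeze(1)
```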
Step 304, training based on the fused feature map and the material parameter label of the first object to obtain a material parameter identification model.
In this embodiment, the execution body may train an initial material parameter identification model by using the fused feature map and the material parameter label of the first object as training samples, to obtain a trained material parameter identification model.
It should be noted that the initial material parameter identification model may be an untrained model, or a model at any stage before training is completed. The completion of training may refer to the point at which the trained material parameter identification model is obtained in step 304.
The method for generating a material parameter identification model provided by this embodiment of the present disclosure first acquires a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; encodes the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text; then fuses the feature map of the target image pair and the feature map of the target text to obtain a fused feature map; and trains based on the fused feature map and the material parameter label of the first object to obtain a material parameter identification model. A multi-modal model design can thus be realized based on the feature map of the target image pair and the feature map of the target text, and the feature fusion allows the two feature maps to share their features, which enhances the fitting capacity of the material parameter identification model, improves its identification precision, and allows material parameters to be accurately analyzed based on the material parameter identification model.
In some alternative implementations of the present embodiment, the environmental parameters are determined based on the steps of: environmental parameters are acquired by sensors.
In this implementation, the environment parameter may be a parameter of the environment in which the first object is located; for example, the first object may be placed in the dark or under light.
In one example, environmental parameters may be acquired by various sensors, such as light sensors, humidity sensors, temperature sensors, and the like.
In this implementation, the sensor is used to acquire the environment parameter so that the association between the environment parameter and the material of the surface of the first object can be further analyzed. The data input into the initial material parameter identification model is thus more accurate, which improves the identification accuracy of the material parameter identification model obtained by training the initial material parameter identification model.
With continued reference to fig. 4, fig. 4 illustrates a flow 400 of one embodiment of a method of generating a material parameter identification model according to the present disclosure. The method of generating a material parameter identification model may comprise the steps of:
step 401, acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image on the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of a material to which the surface of the first object belongs.
In this embodiment, an execution subject (e.g., the server 104 shown in fig. 1) of the method of generating a material parameter identification model may acquire a target image pair, a target text, and a material parameter tag of a first object from a terminal device (e.g., the terminal devices 101, 102 shown in fig. 1); or, the above-described execution subject (e.g., the terminal devices 101, 102 shown in fig. 1) may acquire the target image pair, the target text, and the material parameter tag of the first object locally.
It should be noted that, step 401 corresponds to step 301 in the foregoing embodiment, and specific implementation may refer to the foregoing description of step 301, which is not repeated herein.
Step 402, encoding the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text.
In this embodiment, the execution body may perform feature extraction on the target image pair by using image encoding corresponding to the target image pair, so as to obtain a feature map of the target image pair; and extracting the characteristics of the target text by adopting the text codes corresponding to the target text so as to obtain the characteristic diagram of the target text.
It should be noted that, step 402 corresponds to step 302 in the foregoing embodiment, and the specific implementation may refer to the foregoing description of step 302, which is not repeated herein.
Step 403, fusing the feature map of the target image pair and the feature map of the target text by adopting an attention mechanism to obtain a fused feature map.
In this embodiment, the execution body may use an attention mechanism to splice the feature map of the target image pair and the feature map of the target text, so as to obtain a fused feature map. The fused feature map combines the features of the feature map of the target image pair and of the feature map of the target text.
Here, in the process of feature fusion, the attention mechanism can enhance the features that better represent the material parameter of the first object, so that the fused feature map obtained by the fusion can accurately represent the material parameter of the first object, which further improves the identification accuracy of the material parameter identification model trained based on the fused feature map.
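For illustration only, the self-attention and cross-attention based fusion described in this disclosure could be sketched as follows; the module choices, the pooling over the sequence dimension and the dimensions are assumptions:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative fusion: self-attention over the fused image-pair features and over the
    fused text features, cross-attention between the two, then a final fusion."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, f_img1, f_img2, f_env, f_cls):
        # first feature map: fuse the two image features, then apply self-attention
        img = torch.stack([f_img1, f_img2], dim=1)          # (B, 2, dim)
        img, _ = self.self_attn_img(img, img, img)
        # second feature map: fuse environment and material-class features, then self-attention
        txt = torch.stack([f_env, f_cls], dim=1)            # (B, 2, dim)
        txt, _ = self.self_attn_txt(txt, txt, txt)
        # feature crossing with cross-attention in both directions
        cross_img, _ = self.cross_img2txt(img, txt, txt)    # first cross feature map
        cross_txt, _ = self.cross_txt2img(txt, img, img)    # second cross feature map
        # fuse the two cross feature maps into the final fused feature
        fused = torch.cat([cross_img.mean(dim=1), cross_txt.mean(dim=1)], dim=-1)
        return self.out(fused)

fusion = AttentionFusion()
fused = fusion(torch.randn(4, 256), torch.randn(4, 256),
               torch.randn(4, 256), torch.randn(4, 256))    # (4, 256)
```

The output of such a fusion could then be passed to a regression head like the one sketched for step 202 and trained against the material parameter label.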
Step 404, training based on the fusion feature map and the material parameter label of the first object to obtain a material parameter identification model.
In this embodiment, the execution body may train an initial material parameter identification model by using the fused feature map and the material parameter label of the first object as training samples, to obtain a trained material parameter identification model.
It should be noted that step 404 corresponds to step 304 in the foregoing embodiment, and the specific implementation may refer to the foregoing description of step 304, which is not repeated here.
The method for generating a material parameter identification model provided by this embodiment of the present disclosure first acquires a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of the material to which the surface of the first object belongs; encodes the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text; then fuses the feature map of the target image pair and the feature map of the target text by adopting an attention mechanism to obtain a fused feature map; and trains based on the fused feature map and the material parameter label of the first object to obtain a material parameter identification model. A multi-modal model design can thus be realized based on the feature map of the target image pair and the feature map of the target text; the attention-based fusion allows the two feature maps to share their features and, during this sharing, enhances the features that better represent the material parameters of the first object, so that the fused feature map can accurately represent the material parameters of the first object. This further improves the fitting capacity and the identification precision of the material parameter identification model, so that material parameters can be accurately analyzed based on the material parameter identification model.
In some optional implementations of this embodiment, the target text further includes: a class of material of the first object.
In this implementation, the target text may further include a material class of the first object. The class of materials of the first object may be used to characterize the class of materials to which the first object belongs, e.g., paint, wallpaper, marble, fiberglass curtain, metal curtain, wood board, etc.
It should be noted that the material class of the first object may be a result obtained by identification through a preset material classification model, or a result obtained by manual annotation by a user.
In one example, training may be performed using an image of the first object and a material class label of the first object to obtain the preset material classification model.
In the implementation manner, the characteristics in the characteristic images of the target text are enriched by introducing the material category of the first object into the target text, so that the characteristics in the fused characteristic images are more attached to the material of the first object in the fusion process of the characteristic images of the target image pair and the characteristic images of the target text, and the recognition accuracy of the material parameter recognition model is further improved.
In some alternative implementations of the present embodiment, the material class of the first object is determined based on: projecting preset light on the surface of a first object through a projection device, and acquiring a third image through a shooting device; and inputting the third image into a preset material classification model to obtain the material class of the first object.
In this implementation manner, the execution body may project preset light onto the surface of the first object through the projection device, and acquire an image presented on the surface of the first object through the photographing device to acquire a third image; and then, inputting the third image into a preset material classification model to obtain the material class of the first object.
Correspondingly, in this example, the image presented on the surface of the first object is photographed, previewed, recorded, etc. by the photographing device to obtain the third image.
It should be noted that the preset light may be different from the colored light presented by the surface of the first object, so that the projected preset light does not interfere with the effect presented by the surface of the first object. The preset light may be various kinds of white light.
In one example, training may be performed through the third image and the material tag of the first object to obtain a preset material classification model.
In the implementation manner, the material class of the first object is accurately identified through the material classification model, and then the material class of the first object is introduced into the target text, so that the features in the feature images of the target text are enriched, and the features in the fused feature images are enabled to be more attached to the material to which the surface of the first object belongs in the process of fusing the feature images of the target image pair and the feature images of the target text, so that the identification precision of the material parameter identification model is further improved, and the material parameters can be accurately analyzed based on the material parameter identification model.
In some optional implementations of the present embodiment, inputting the third image into a preset material classification model to obtain the material class of the first object may include:
acquiring a fourth image from the third image according to a preset mask area, wherein the fourth image can be an image comprising the first object;
and inputting the fourth image into a preset material classification model to obtain the material class of the first object.
In this implementation manner, the execution body may extract the fourth image from the third image according to a preset mask area; and inputting the fourth image into a preset material classification model to obtain the material class of the first object. The mask area may be an area that has been occluded by other layers, for example, by layers of different transparency, or different textures.
In one example, in fig. 5, white light is projected on the surface of the first object by the projection device, and then content is acquired from the content presented on the surface of the first object by the photographing device to obtain a third image; then, combining the mask area, extracting content from the third image to obtain a fourth image, wherein the fourth image is an image comprising the first object; and inputting the fourth image into a preset material classification model, and classifying the materials through the material classification model to obtain the material class of the first object.
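A hedged sketch of this classification step: the fourth image is cut out of the third image using the mask region and fed to a preset material classification model. The classifier below is an untrained placeholder, the cropping and resizing strategy is an assumption, and the class names follow the examples given earlier:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# class names follow the examples given in this disclosure
MATERIAL_CLASSES = ["paint", "wallpaper", "marble", "fiberglass curtain", "metal curtain", "wood board"]

classifier = models.resnet18(weights=None)  # untrained placeholder classifier
classifier.fc = nn.Linear(classifier.fc.in_features, len(MATERIAL_CLASSES))
classifier.eval()

def classify_material(third_image: torch.Tensor, mask: torch.Tensor) -> str:
    """third_image: (3, H, W) float image captured under preset white light;
    mask: (H, W) boolean mask marking the region where the first object is located."""
    ys, xs = torch.where(mask)  # bounding box of the mask region
    fourth_image = third_image[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    fourth_image = F.interpolate(fourth_image.unsqueeze(0), size=(224, 224),
                                 mode="bilinear", align_corners=False)
    with torch.no_grad():
        logits = classifier(fourth_image)
    return MATERIAL_CLASSES[int(logits.argmax(dim=-1))]
```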
It should be noted that the above-mentioned mask area may be an automatically marked area, or a mask area generated by existing or future means.
In the implementation manner, the fourth image is extracted from the third image through the mask area, and then the fourth image is input into a preset material classification model to obtain the material class of the first object, so that the interference of the area of the third image except the area where the first object is located (namely, the fourth image under white light) can be avoided, and the accuracy of identifying the material class of the first object is further improved; and then, introducing the material category of the first object into the target text, enriching the features in the feature map of the target text, so that the features in the fused feature map are more attached to the material of the first object in the process of fusing the feature map of the target image pair and the feature map of the target text, and the recognition precision of the material parameter recognition model is further improved, and the material parameters can be accurately analyzed based on the material parameter recognition model.
In some alternative implementations of the present embodiment, the mask area is determined based on the steps of:
projecting the first image on the surface of the first object through a projection device, and acquiring a second image through a shooting device; image segmentation is carried out on the second image to obtain a segmented image, wherein the segmented image is an image comprising a first object; comparing the segmented image with the first image to determine a projection area; a mask region is generated from the projection region.
In this implementation manner, the execution body may project the first image onto the surface of the first object through the projection device, and acquire the second image of the content presented on the surface of the first object through the photographing device; then carry out image segmentation on the second image to obtain a segmented image; then compare the segmented image with the first image to determine a projection area; and then generate a mask region from the projection area.
It should be noted that the segmented image generally includes an area that needs to be preserved in the image, for example, an area where the first object is located.
In one example, in fig. 6, a first image is projected on a surface of a first object by a projection device, and content acquisition is performed on the image presented on the surface of the first object by a camera to obtain a second image; then, image segmentation is carried out on the second image so as to obtain a segmented image; then, comparing the segmented image with the first image to obtain a projection area; finally, a mask region is generated from the projection region.
Correspondingly, in this example, image segmentation of the second image to obtain a segmented image may include: and dividing the second image by an image dividing algorithm or a preset dividing model to obtain a divided image.
Correspondingly, in this example, comparing the segmented image with the first image, determining the projection area may include:
the segmented image is analytically compared with parameters of the first image to determine a projection area, which may be an area where the image is presented on the surface of the first object.
In this implementation manner, the fourth image is extracted from the third image through the mask region, and then the fourth image is input into the preset material classification model to obtain the material class of the first object, so that interference of regions of the third image except the region where the first object is located (i.e., the fourth image under white light) can be avoided, and accuracy of identifying the material class of the first object is further improved.
In one example, generating a mask region from the projection region may include:
blocking the projection region with an occlusion region to generate the mask region; or forming the mask region by changing the transparency of the projection region so as to block the content displayed in the projection region.
Here, the mask region may be used to block the content displayed in the projection region, for example, by using a transparency different from that of the projection region, or a texture different from that of the projection region, so as to block the projection region.
In one example, in fig. 7, a first image (i.e., an image corresponding to a picture of a projection source) is projected on a surface of a first object by a projection device; shooting the surface of the first object through a shooting device to obtain a second image; then, according to the mask area, projection content corresponding to the area where the first object is located is extracted from the second image.
In this implementation, a first image is projected onto the surface of a first object through a projection device, and a second image is acquired through a shooting device; the second image is then segmented to obtain a segmented image; the segmented image is compared and analyzed against the first image to determine a projection region; and a mask region is generated from the projection region. A fourth image is extracted from the third image based on the mask region and input into the preset material classification model to obtain the material class of the first object, so that interference from regions of the third image other than the region where the first object is located can be avoided, which further improves the accuracy of identifying the material class of the first object. The accurate material class of the first object is then introduced into the target text, enriching the features in the feature map of the target text, so that during the fusion of the feature map of the target image pair and the feature map of the target text, the features in the fused feature map better reflect the material of the first object, which further improves the recognition precision of the material parameter identification model and allows the material parameters to be accurately analyzed based on the material parameter identification model.
With continued reference to fig. 8, fig. 8 illustrates a flow 800 of one embodiment of a method of generating a material parameter identification model according to the present disclosure. The method of generating a material parameter identification model may comprise the steps of:
step 801, acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image on the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing a parameter of a material to which the surface of the first object belongs.
In this embodiment, an execution subject (e.g., the server 104 shown in fig. 1) of the method of generating a material parameter identification model may acquire a target image pair, a target text, and a material parameter tag of a first object from a terminal device (e.g., the terminal devices 101, 102 shown in fig. 1); or, the above-described execution subject (e.g., the terminal devices 101, 102 shown in fig. 1) may acquire the target image pair, the target text, and the material parameter tag of the first object locally.
It should be noted that, step 801 corresponds to step 301 in the foregoing embodiment, and specific implementation may refer to the foregoing description of step 301, which is not repeated herein.
Step 802, encoding the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text.
In this embodiment, the execution body may perform feature extraction on the target image pair by using image encoding corresponding to the target image pair, so as to obtain a feature map of the target image pair; and extracting the characteristics of the target text by adopting the text codes corresponding to the target text so as to obtain the characteristic diagram of the target text.
It should be noted that, step 802 corresponds to step 302 in the foregoing embodiment, and the specific implementation may refer to the foregoing description of step 302, which is not repeated herein.
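As a hedged illustration of the encoding in step 802, the sketch below uses a small convolutional backbone as the image encoder and an embedding layer as the text encoder. The class names, channel sizes and vocabulary size are assumptions for illustration and are not the specific encoders required by the present disclosure.

import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    # Minimal image encoder producing a feature map for one image.
    def __init__(self, out_channels: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=2, padding=1), nn.ReLU())

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.backbone(image)  # (B, out_channels, H/4, W/4)

class TextEncoder(nn.Module):
    # Minimal text encoder producing a feature map for tokenized target text.
    def __init__(self, vocab_size: int = 10000, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.proj(self.embed(token_ids))  # (B, L, dim)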
Step 803, fusing the feature map of the first image and the feature map of the second image to obtain a first feature map; and fusing the feature map of the environmental parameter and the feature map of the material class of the first object to obtain a second feature map.
In this embodiment, the executing body may perform feature stitching on the feature map of the first image and the feature map of the second image to obtain a first feature map; and splicing the characteristic diagram of the environment parameter and the characteristic diagram of the material class of the first object to obtain a second characteristic diagram.
And step 804, performing feature intersection on the first feature map and the second feature map by adopting an intersection attention mechanism to obtain a first intersection feature map and a second intersection feature map.
In this embodiment, the executing body may perform feature intersection on the first feature map and the second feature map through a cross attention mechanism, so as to obtain a first cross feature map and a second cross feature map. The first cross feature map and the second cross feature map may be feature maps obtained by performing feature cross on the first feature map and the second feature map by using a cross attention mechanism.
Specifically, the first feature map may be used as a query vector, and the second feature map may be used as a key vector and a value vector to be input into the cross attention module to obtain a first cross feature map; and taking the second feature map as a query vector, and inputting the first feature map as a key vector and a value vector into a cross attention module to obtain a second cross feature map.
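A minimal sketch of this cross-attention step is given below, assuming the two feature maps have been flattened into token sequences of the same embedding dimension. The module and tensor names are illustrative, and PyTorch's standard multi-head attention is used as a stand-in for the cross attention module.

import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    # One feature map supplies the query; the other supplies key and value.
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_feat: torch.Tensor, kv_feat: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(query=query_feat, key=kv_feat, value=kv_feat)
        return out

first_feat = torch.randn(2, 196, 128)   # flattened first feature map (image-pair branch)
second_feat = torch.randn(2, 16, 128)   # flattened second feature map (text branch)
cross = CrossAttention(dim=128)
first_cross = cross(first_feat, second_feat)    # first feature map as query (Q1 with K2, V2)
second_cross = cross(second_feat, first_feat)   # second feature map as query (Q2 with K1, V1)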
And step 805, fusing the first cross feature map and the second cross feature map to obtain a fused feature map.
In this embodiment, the executing body may splice the first cross feature map and the second cross feature map to obtain the fused feature map.
Step 806, training based on the fusion feature map and the material parameter label of the first object to obtain a material parameter identification model.
In this embodiment, the execution body may train the initial material parameter identification model by using the material parameter label fused with the feature map and the first object as a training sample, to obtain a trained material parameter identification model.
It should be noted that the present application is not limited to the above-described embodiment. Step 806 corresponds to step 304 in the foregoing embodiment, and the specific implementation may refer to the foregoing description of step 304, which is not repeated here.
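The sketch below illustrates one possible form of this training step: a small regression head predicts material parameters from the fused feature map and is optimized against the material parameter label with a mean-squared-error loss. The head architecture, the number of predicted parameters and the loss choice are assumptions for illustration only.

import torch
import torch.nn as nn

class MaterialParameterHead(nn.Module):
    # Predicts material parameters (e.g. reflectivity, refractive index,
    # gain, viewing angle) from a fused feature map of shape (B, N, dim).
    def __init__(self, dim: int = 128, num_params: int = 4):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, num_params))

    def forward(self, fused_feat: torch.Tensor) -> torch.Tensor:
        return self.head(fused_feat.mean(dim=1))  # pool tokens, then regress

def train_step(model, fused_feat, param_label, optimizer):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(fused_feat), param_label)
    loss.backward()
    optimizer.step()
    return loss.item()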
According to the method for generating the material parameter identification model provided by this embodiment, a target image pair, a target text and a material parameter label of a first object are acquired, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image onto the surface of the first object, the target text comprises environmental parameters, and the material parameter label of the first object is used to represent parameters of the material to which the surface of the first object belongs; the target image pair and the target text are encoded respectively to obtain a feature map of the target image pair and a feature map of the target text; the feature map of the first image and the feature map of the second image are fused to obtain a first feature map, and the feature map of the environmental parameter and the feature map of the material class of the first object are fused to obtain a second feature map; a cross-attention mechanism is then adopted to perform feature crossing on the first feature map and the second feature map to obtain a first cross feature map and a second cross feature map; the first cross feature map and the second cross feature map are then fused to obtain a fused feature map; and training is performed based on the fused feature map and the material parameter label of the first object to obtain a material parameter identification model. A multimodal model design can thus be realized based on the feature map of the target image pair and the feature map of the target text, and features between the feature map of the target image pair and the feature map of the target text are shared during feature fusion through the cross-attention mechanism, so that the material parameter features that better represent the first object are enhanced. The fused feature map obtained through feature fusion can therefore accurately represent the material parameters of the first object, which further improves the fitting capacity and recognition precision of the material parameter identification model, so that material parameters can be accurately analyzed based on the material parameter identification model.
In some optional implementations of the present embodiment, the feature map of the first image and the feature map of the second image are fused to obtain a first feature map; and fusing the feature map of the environmental parameter and the feature map of the material class of the first object to obtain a second feature map, including:
fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second fused characteristic diagram; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map;
and/or,
fusing the first cross feature map and the second cross feature map to obtain a fused feature map, including: fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain a fused feature map.
It should be noted that, the first feature map may be a feature map obtained by enhancing features of the first fused feature map by using a self-attention mechanism. The second feature map may be a feature map obtained by enhancing features of the second fused feature map by using a self-attention mechanism.
In one example, fusing the feature map of the target image pair and the feature map of the target text using an attention mechanism to obtain a fused feature map may include: fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second fused characteristic diagram; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map; performing feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map; and fusing the first cross feature map and the second cross feature map to obtain a fused feature map.
In one example, fusing the feature map of the target image pair and the feature map of the target text using an attention mechanism to obtain a fused feature map may include: fusing the feature map of the first image and the feature map of the second image to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second characteristic diagram; performing feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map; fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain a fused feature map.
In one example, fusing the feature map of the target image pair and the feature map of the target text using an attention mechanism to obtain a fused feature map may include: fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second fused characteristic diagram; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map; performing feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map; fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain a fused feature map.
Correspondingly, in this example, as shown in fig. 9, the first image (i.e., the source image) and the second image (i.e., the projected content image) are image-encoded to obtain a feature map of the first image and a feature map of the second image; the environmental parameter (e.g., an illumination parameter) and the material class of the first object are encoded to obtain a feature map of the environmental parameter and a feature map of the material class of the first object. The feature map of the first image and the feature map of the second image are then fused to obtain a first fused feature map, and a self-attention mechanism is applied to the first fused feature map to obtain a first feature map; the feature map of the environmental parameter and the feature map of the material class of the first object are fused to obtain a second fused feature map, and a self-attention mechanism is applied to the second fused feature map to obtain a second feature map. Then, the first feature map (i.e., feature map 1) is used as a query vector (Q1), and the second feature map (i.e., feature map 2) is used as a key vector (K2) and a value vector (V2) and input into the cross-attention module to obtain a first cross feature map; the second feature map is used as a query vector (Q2), and the first feature map is used as a key vector (K1) and a value vector (V1) and input into the cross-attention module to obtain a second cross feature map. The first cross feature map and the second cross feature map are then fused to obtain a third fused feature map, and a self-attention mechanism is applied to the third fused feature map to obtain the fused feature map. Finally, the material parameters of the first object are obtained through the material parameter identification model.
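To make the processing order of fig. 9 concrete, the following sketch composes self-attention on each branch, the cross-attention step sketched earlier, and a final self-attention pass. The module names and dimensions are assumptions, and the preceding feature fusion (concatenation of each pair of feature maps) is assumed to have produced token sequences of a common dimension.

import torch
import torch.nn as nn

class FusionPipeline(nn.Module):
    # Self-attention on each fused branch, cross-attention between the
    # branches, then self-attention on the concatenated result.
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.self_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross = CrossAttention(dim, num_heads)   # sketched above
        self.self_out = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, img_pair_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        f1, _ = self.self_img(img_pair_feat, img_pair_feat, img_pair_feat)  # first feature map
        f2, _ = self.self_txt(text_feat, text_feat, text_feat)              # second feature map
        c1 = self.cross(f1, f2)                      # first cross feature map
        c2 = self.cross(f2, f1)                      # second cross feature map
        third_fused = torch.cat([c1, c2], dim=1)     # third fused feature map
        fused, _ = self.self_out(third_fused, third_fused, third_fused)
        return fused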
In this implementation, features between the feature map of the target image pair and the feature map of the target text are shared during feature fusion through the self-attention mechanism and the cross-attention mechanism, so that the material parameter features that better represent the first object are enhanced. The fused feature map obtained through feature fusion can therefore accurately represent the material parameters of the first object, which further improves the fitting capacity and recognition precision of the material parameter identification model, so that material parameters can be accurately analyzed based on the material parameter identification model.
With continued reference to fig. 10, fig. 10 illustrates a flow 1000 of one embodiment of a method of identifying material parameters according to the present disclosure. The method for identifying the material parameters can comprise the following steps:
step 1001, acquiring a first image pair, an environmental parameter in which a second object in the first image pair is located, and a material class of the second object in the first image pair.
In this embodiment, the execution body of the material parameter identification method (for example, the terminal device 101, 102 or the server 104 shown in fig. 1) may acquire a first image pair through a photographing device, acquire an environmental parameter in which a second object in the first image pair is located through a sensor, and identify a material class of the second object through a preset material classification model, where the first image pair includes a third image and a fourth image, and the fourth image is an image obtained by projecting the third image on the surface of the second object.
It should be noted that the preset material classification model may be the model for identifying the material class in the above embodiment.
Step 1002, inputting the first image pair, the environmental parameter where the second object is located, and the material class of the second object in the first image pair into a pre-trained material parameter identification model, to obtain the material parameter of the second object in the first image pair.
In this embodiment, the executing body may input the environmental parameters of the first image pair and the second object and the material class of the second object in the first image pair into a pre-trained material parameter identification model, so as to obtain the material parameter of the second object in the first image pair, and accurately analyze the material parameter based on the material parameter identification model. The second object and the first object may be the same or different objects.
Here, the pre-trained material parameter identification model may be a model generated by the above-described method of generating a material parameter identification model. The material parameters of the second object are material-related parameters such as reflectivity, refractive index, gain, and viewing angle.
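A hedged end-to-end sketch of steps 1001-1002 follows. Here capture_pair, read_sensor and classifier are hypothetical stand-ins for the shooting device, the sensor and the preset material classification model, and the material parameter identification model is assumed to accept the three inputs directly.

import torch

def identify_material_parameters(capture_pair, read_sensor, classifier, model):
    third_image, fourth_image = capture_pair()        # first image pair from the shooting device
    env_param = read_sensor()                         # environmental parameter from the sensor
    material_class = classifier(fourth_image).argmax(dim=-1)  # preset material classification model
    model.eval()
    with torch.no_grad():
        material_params = model((third_image, fourth_image), env_param, material_class)
    return material_params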
According to the material parameter identification method provided by the embodiment of the disclosure, the material parameters of the second object in the first image pair are identified through the material parameter identification model trained in advance, so that the material parameters can be accurately analyzed based on the material parameter identification model.
With continued reference to FIG. 11, FIG. 11 is an application scenario diagram of a method of generating a material parameter identification model. As shown in fig. 11, the method for generating a material parameter identification model may include the steps of:
the first step, a first image and illumination parameters are acquired.
A second step of projecting a first image on a surface of a first object; content extraction is then performed from the image presented on the surface of the first object to obtain a second image.
Thirdly, performing image segmentation on the second image to obtain a segmented image; then comparing and analyzing the segmented image with the first image to determine a projection area; then, a mask region is generated from the projection region.
A fourth step of projecting preset light onto the surface of the first object and acquiring a third image through the shooting device; then extracting content from the third image according to the mask region to obtain a fourth image; then inputting the fourth image into a preset material classification model for material classification to obtain the material class of the first object.
And fifthly, encoding the first image, the second image, the illumination parameters and the material category of the first object to obtain a characteristic diagram of the first image, a characteristic diagram of the second image, a characteristic diagram of the illumination parameters and a characteristic diagram of the material category of the first object.
And sixthly, fusing the feature map of the first image, the feature map of the second image, the feature map of the illumination parameter and the feature map of the material class of the first object to obtain a fused feature map.
And seventh, training based on the fusion characteristic diagram and the material parameter label of the first object to obtain a material parameter identification model.
Eighth, a first image pair, environmental information of a second object in the first image pair and a material class of the second object are obtained.
And a ninth step of inputting the first image pair, the environmental information of the second object in the first image pair and the material class of the second object into the material parameter identification model obtained in the seventh step to obtain the material parameter of the second object.
The specific implementations of the first to seventh steps may refer to the embodiments corresponding to the method for generating the material parameter identification model. The specific implementations of the eighth and ninth steps may refer to the embodiments corresponding to the method for identifying the material parameters.
In the present embodiment, there are the following technical effects:
(1) A second image is obtained by projecting the first image onto the surface of the first object through the projection device; the segmented image obtained by segmenting the second image is then compared with the first image, so that the result actually projected on the surface of the object is used directly and only its difference from the first image needs to be calculated. The calculation cost is therefore low.
(2) Images are combined with text to realize multimodal model training; the image pair and the text share features while retaining their respective internal characteristics and provide richer semantic information, so that the material parameters can be accurately identified.
(3) The multimodal model is trained and used for prediction end to end, which is convenient to use and offers good real-time performance.
(4) The multimodal model can analyze material parameters in real time; the projection-based approach is low in cost, the model is simple to train, and prediction is accurate and fast.
In some optional implementations of this embodiment, the method for identifying a material parameter further includes:
and determining the material loss of the second object according to the material parameters of the second object.
In this implementation, the material loss of the second object may be determined by a material parameter of the second object. The material parameters can be used to determine the degree of loss of the material and estimate the life of the material.
The material loss may be used to characterize the degree of loss of the material to which the surface of the second object belongs.
In one example, a correspondence between material parameters and material losses may be pre-constructed, and after the material parameter of the second object is obtained, the material loss of the second object may be determined according to the correspondence.
In one example, the material parameter of the second object may be input into a preset material loss identification model to obtain the material loss of the second object.
It should be noted that the material loss identification model may be obtained by training on material parameters and the corresponding material losses.
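As a small, purely illustrative example of the pre-constructed correspondence, the following lookup maps one material parameter (reflectivity) to a material loss level. The thresholds and labels are invented for illustration and are not part of the present disclosure.

LOSS_TABLE = [          # (minimum reflectivity, loss level)
    (0.90, "low loss"),
    (0.70, "moderate loss"),
    (0.00, "severe loss"),
]

def estimate_material_loss(reflectivity: float) -> str:
    for threshold, loss_level in LOSS_TABLE:
        if reflectivity >= threshold:
            return loss_level
    return "unknown"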
In fig. 12, the first image pair, the environmental parameter in which the second object in the first image pair is located, and the material class of the second object in the first image pair are input into a pre-trained material parameter identification model to obtain the material parameter of the second object; loss estimation is then performed according to the material parameter to determine the material loss.
It should be noted that the material class of the second object may be determined with reference to the description of the material class of the first object in the above embodiment.
In the implementation mode, the identified material parameters can be applied to material detection, and the loss and the service life of the material can be judged through the material parameters of the second object, so that the material can be used and protected more reasonably.
In some optional implementations of this embodiment, the method for identifying a material parameter further includes:
Determining display parameters according to the material parameters of the second object;
and optimally displaying the image to be enhanced according to the display parameters.
In this implementation manner, the display parameter corresponding to the material parameter of the second object may be determined according to the material parameter of the second object; and then, optimally adjusting the display parameters of the image to be enhanced and displayed according to the display parameters so as to enhance the display.
Here, the display parameter may be used to optimally display the image to be enhanced for display enhancement. The image to be enhanced and displayed may be an image whose actual display effect does not conform to the expectation. The expectation may be set according to the needs (or preferences) of the user.
In one example, determining the display parameter based on the material parameter of the second object may include: pre-establishing a corresponding relation between material parameters and display parameters of the second object; then, a display parameter is determined based on the correspondence and the material parameter of the second object.
In one example, determining the display parameter based on the material parameter of the second object may include: acquiring demand information of a user; establishing a corresponding relation between material parameters of the second object and the requirement information and display information of the user; and determining corresponding display information according to the corresponding relation, the material parameters of the second object and the requirement information of the user.
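The following sketch shows one hypothetical way to combine the material parameters of the second object with user demand information to obtain display parameters; the mapping formulas and parameter names are assumptions for illustration only.

from typing import Optional

def determine_display_params(material_params: dict,
                             user_demand: Optional[dict] = None) -> dict:
    display_params = {
        # Assumed mappings: low-gain surfaces get more brightness,
        # low-reflectivity surfaces get more contrast.
        "brightness": 1.0 / max(material_params.get("gain", 1.0), 1e-3),
        "contrast": 1.0 + (1.0 - material_params.get("reflectivity", 1.0)),
    }
    if user_demand:
        display_params.update(user_demand)   # user demand overrides defaults
    return display_params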
In this implementation manner, the identified material parameter may be applied in an application scenario of enhanced display, and the display parameter may be determined by the material parameter of the second object, so as to implement optimization adjustment of the display parameter of the image to be enhanced for enhanced display.
In some optional implementations of the present embodiment, the image to be enhanced for display may include: an image presented on the surface of the second object by projection through a projection device; or, a virtual reality image in virtual reality.
In the implementation manner, the enhanced display effect under different scenes can be realized according to different images to be enhanced and displayed.
In one example, the image projected by the projection device onto the surface of the second object may be optimized, i.e. the projection parameters of the projection device are adjusted, achieving an enhanced display effect in the projected scene.
In fig. 13, the first image pair, the environmental parameter in which the second object in the first image pair is located, and the material class of the second object in the first image pair are input into a pre-trained material parameter identification model, so as to obtain the material parameter of the second object; and then, the projection parameters of the projector are optimized and adjusted in real time according to the material parameters so as to achieve the optimal projection effect.
It should be noted that many projectors are now equipped with a camera, by which the first image pair projected on the surface of the second object can be captured, and the material class of the second object in the first image pair can then be identified. The ambient light parameter is acquired by a light sensor. The above data are input into the material parameter identification model to obtain the material parameters.
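A sketch of the real-time adjustment loop in fig. 13 follows. Here projector, camera and light_sensor are hypothetical device wrappers, determine_display_params reuses the sketch above, and only the control flow reflects the described process.

import time

def realtime_projection_tuning(projector, camera, light_sensor, classifier, model,
                               interval_s: float = 1.0):
    while True:
        first_image_pair = camera.capture_pair()     # third and fourth images
        env_param = light_sensor.read()              # ambient light parameter
        material_class = classifier(first_image_pair[1])
        # material_params is assumed to be returned as a dict of named parameters here
        material_params = model(first_image_pair, env_param, material_class)
        projector.apply_parameters(determine_display_params(material_params))
        time.sleep(interval_s)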
In one example, the enhanced display effect in a virtual reality scene may be achieved by optimizing the virtual reality image in virtual reality.
In fig. 14, the first image pair, the environmental parameter in which the second object in the first image pair is located, and the material class of the second object in the first image pair are input into a pre-trained material parameter identification model, so as to obtain the material parameter of the second object; the texture parameters are then used to render in the virtual world, making the virtual world more realistic (i.e., the rendered image in fig. 14).
In the implementation mode, the image displayed in virtual world rendering is optimized by calculating the material parameters in actual display so as to improve the rendering effect.
In the implementation mode, after the accurate material parameters are obtained through analysis, the projection device can achieve the best projection effect, the virtual world can be rendered more vividly, and the use of materials can be more standard.
With further reference to fig. 15, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an apparatus for generating a material parameter identification model, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 15, the apparatus 1500 for generating a material parameter identification model of the present embodiment may include: an image acquisition module 1501 and a model training module 1502. The image acquisition module 1501 is configured to acquire a target image pair, a target text and a material parameter tag of a first object, wherein the target image pair includes a first image and a second image, the second image is an image obtained by projecting the first image on a surface of the first object, the target text includes an environmental parameter, and the material parameter tag of the first object is used for characterizing a parameter of a material to which the surface of the first object belongs; the model training module 1502 is configured to train according to the target image pair, the target text and the material parameter tag of the first object to obtain a material parameter identification model.
In the present embodiment, in the apparatus 1500 for generating a material parameter identification model: the specific processing of the image acquisition module 1501 and the model training module 1502 and the technical effects thereof may refer to the relevant descriptions of steps 201 to 202 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some alternative implementations of the present embodiment, model training module 1502 includes:
the encoding unit is configured to encode the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text;
the fusion unit is configured to fuse the feature images of the target image pair and the feature images of the target text to obtain a fused feature image;
and the training unit is configured to train based on the fusion characteristic diagram and the material parameter label of the first object to obtain a material parameter identification model.
In some optional implementations of this embodiment, the fusion unit is further configured to: and fusing the feature map of the target image pair and the feature map of the target text by adopting an attention mechanism to obtain a fused feature map.
In some optional implementations of this embodiment, the target text further includes: a class of material of the first object.
In some optional implementations of this embodiment, the fusing unit includes:
the first fusion subunit is configured to fuse the feature map of the first image and the feature map of the second image to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second characteristic diagram;
The feature intersection subunit is configured to perform feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map;
and the second fusion subunit is configured to fuse the first cross feature map and the second cross feature map to obtain a fusion feature map.
In some optional implementations of this embodiment, the first fusion subunit is further configured to:
fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second fused characteristic diagram; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map;
and/or,
a second fusion subunit further configured to: fusing the first cross feature map and the second cross feature map to obtain a fused feature map, including: fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain a fused feature map.
In some optional implementations of the present embodiment, the apparatus 1500 for generating a material parameter identification model further includes:
an image acquisition module 1501 configured to project preset light onto a surface of a first object by a projection device and acquire a third image by a photographing device;
the category identification module is configured to input a third image into a preset material classification model to obtain a material category of the first object.
In some optional implementations of the present embodiment, the category identification module is further configured to: acquiring a fourth image from the third image according to a preset mask area, wherein the fourth image comprises an image of the first object; and inputting the fourth image into a preset material classification model to obtain the material class of the first object.
In some optional implementations of the present embodiment, the apparatus 1500 for generating a material parameter identification model further includes:
an image acquisition module 1501 configured to project a first image on a surface of a first object by a projection device and obtain a second image by a photographing device;
the image segmentation module is configured to carry out image segmentation on the second image to obtain a segmented image, wherein the segmented image is an image comprising a first object;
The image comparison module is configured to compare the segmented image with the first image and determine a projection area;
the region generation module is configured to generate a mask region according to the projection region.
In some optional implementations of the present embodiment, the apparatus 1500 for generating a material parameter identification model further includes: and a parameter acquisition module configured to acquire the environmental parameter by the sensor.
With further reference to fig. 16, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a device for identifying material parameters, which corresponds to the method embodiment shown in fig. 10, and which is particularly applicable to various electronic devices.
As shown in fig. 16, the identification device 1600 of a material parameter of the present embodiment may include: an image acquisition module 1601 and a parameter identification module 1602. Wherein the image acquisition module 1601 is configured to acquire a first image pair, an environmental parameter in which a second object in the first image pair is located, and a material class of the second object; the parameter identification module 1602 is configured to input the first image pair, the environmental parameter in which the second object is located, and the material class of the second object into a pre-trained material parameter identification model, resulting in a material parameter of the second object.
In this embodiment, in the identification device 1600 of material parameters: the specific processing of the image acquisition module 1601 and the parameter identification module 1602 and the technical effects thereof may refer to the relevant descriptions of steps 1001-1002 in the corresponding embodiment of fig. 10, and are not repeated herein.
In some embodiments, the identification means of the material parameter further comprises: and a loss determination module configured to determine a loss of material of the second object based on the material parameter of the second object.
In some embodiments, the identification means of the material parameter further comprises: a display parameter determination module configured to determine a display parameter based on a material parameter of the second object; and the optimization adjustment module is configured to optimally display the image to be enhanced and displayed according to the display parameters.
In some embodiments, the image to be enhanced for display includes: projecting an image presented on a surface of a second object; or, a virtual reality image in virtual reality.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 17 illustrates a schematic block diagram of an example electronic device 1700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 17, the apparatus 1700 includes a computing unit 1701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1702 or a computer program loaded from a storage unit 1708 into a Random Access Memory (RAM) 1703. In the RAM 1703, various programs and data required for the operation of the device 1700 may also be stored. The computing unit 1701, the ROM 1702, and the RAM 1703 are connected to each other via a bus 1704. An input/output (I/O) interface 1705 is also connected to the bus 1704.
Various components in device 1700 are connected to I/O interface 1705, including: an input unit 1706 such as a keyboard, a mouse, etc.; an output unit 1707 such as various types of displays, speakers, and the like; a storage unit 1708 such as a magnetic disk, an optical disk, or the like; and a communication unit 1709 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 1709 allows the device 1700 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunications networks.
The computing unit 1701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1701 performs the respective methods and processes described above, for example, a method of generating a material parameter identification model or a method of identifying material parameters. For example, in some embodiments, the method of generating a material parameter identification model or the method of identifying material parameters may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1700 via ROM 1702 and/or communication unit 1709. When the computer program is loaded into the RAM 1703 and executed by the computing unit 1701, one or more steps of the method of generating a material parameter identification model or the method of identifying material parameters described above may be performed. Alternatively, in other embodiments, the computing unit 1701 may be configured to perform the method of generating a material parameter identification model or the method of identifying material parameters by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above can be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (17)

1. A method of generating a material parameter identification model, comprising:
acquiring a target image pair, a target text and a material parameter label of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image on the surface of the first object, the target text comprises an environment parameter, and the material parameter label of the first object is used for representing parameters of materials of the surface of the first object;
training according to the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model.
2. The method of claim 1, wherein the training according to the target image pair, the target text, and the material parameter label of the first object to obtain a material parameter identification model comprises:
Encoding the target image pair and the target text respectively to obtain a feature map of the target image pair and a feature map of the target text;
fusing the feature map of the target image pair and the feature map of the target text to obtain a fused feature map;
training based on the fusion feature map and the material parameter label of the first object to obtain a material parameter identification model.
3. The method of claim 2, wherein the fusing the feature map of the target image pair and the feature map of the target text to obtain a fused feature map comprises:
and fusing the feature map of the target image pair and the feature map of the target text by adopting an attention mechanism to obtain the fused feature map.
4. The method of any of claims 1-3, wherein the target text further comprises: a class of material of the first object.
5. The method of claim 4, wherein fusing the feature map of the target image pair and the feature map of the target text using an attention mechanism to obtain the fused feature map comprises:
fusing the feature map of the first image and the feature map of the second image to obtain a first feature map; fusing the feature map of the environment parameter and the feature map of the material class of the first object to obtain a second feature map;
Performing feature intersection on the first feature map and the second feature map by adopting a cross attention mechanism to obtain a first cross feature map and a second cross feature map;
and fusing the first cross feature map and the second cross feature map to obtain the fused feature map.
6. The method according to claim 5, wherein the feature map of the first image and the feature map of the second image are fused to obtain a first feature map; and fusing the feature map of the environmental parameter and the feature map of the material class of the first object to obtain a second feature map, including:
fusing the feature map of the first image and the feature map of the second image to obtain a first fused feature map; processing the first fused feature map by adopting a self-attention mechanism to obtain a first feature map; fusing the characteristic diagram of the environmental parameter and the characteristic diagram of the material class of the first object to obtain a second fused characteristic diagram; processing the second fused feature map by adopting a self-attention mechanism to obtain a second feature map;
and/or,
the fusing the first cross feature map and the second cross feature map to obtain the fused feature map includes: fusing the first cross feature map and the second cross feature map to obtain a third fused feature map; and processing the third fused feature map by adopting a self-attention mechanism to obtain the fused feature map.
7. The method of claim 4, wherein the material class of the first object is determined based on:
projecting preset light on the surface of the first object through a projection device, and acquiring a third image through a shooting device;
and inputting the third image into a preset material classification model to obtain the material class of the first object.
8. The method of claim 7, wherein the inputting the third image into a preset material classification model results in a material class of the first object, comprising:
acquiring a fourth image from the third image according to a preset mask area, wherein the fourth image comprises an image of the first object;
and inputting the fourth image into a preset material classification model to obtain the material class of the first object.
9. The method of claim 8, wherein the mask region is determined based on:
projecting the first image on the surface of the first object by a projection device, and obtaining the second image by a shooting device;
image segmentation is carried out on the second image to obtain a segmented image, wherein the segmented image is an image comprising a first object;
Comparing the segmented image with the first image to determine a projection area;
and generating a mask area according to the projection area.
10. A method of identifying a material parameter, comprising:
acquiring a first image pair, an environmental parameter in which a second object in the first image pair is located, and a material class of the second object, wherein the first image pair comprises a third image and a fourth image, and the fourth image is an image obtained by projecting the third image onto the surface of the second object;
inputting the first image pair, the environmental parameter in which the second object is located and the material class of the second object into a material parameter identification model generated by the method according to any one of claims 1-9, to obtain the material parameter of the second object.
11. The method of claim 10, the method further comprising:
and determining the material loss of the second object according to the material parameters of the second object.
12. The method of claim 10, the method further comprising:
determining display parameters according to the material parameters of the second object;
and optimally displaying the image to be enhanced according to the display parameters.
13. The method of claim 12, wherein the image to be enhanced for display comprises: projecting an image presented on a surface of a second object; or, a virtual reality image in virtual reality.
14. An apparatus for generating a material parameter identification model, comprising:
an image acquisition module configured to acquire a target image pair, a target text and a material parameter tag of a first object, wherein the target image pair comprises a first image and a second image, the second image is an image obtained by projecting the first image on the surface of the first object, the target text comprises an environment parameter, and the material parameter tag of the first object is used for representing a parameter of a material to which the surface of the first object belongs;
and the model training module is configured to train the target image pair, the target text and the material parameter label of the first object to obtain a material parameter identification model.
15. An identification device for material parameters, comprising:
an image acquisition module configured to acquire a first image pair, an environmental parameter in which a second object in the first image pair is located, and a material class of the second object, the first image pair including a third image and a fourth image, the fourth image being an image obtained by projecting the third image onto a surface of the second object;
a parameter identification module configured to input the first image pair, the environmental parameter in which the second object is located, and the material class of the second object into a material parameter identification model generated by the method according to any one of claims 1-9, to obtain the material parameter of the second object.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
17. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-10.
CN202310901269.6A 2023-07-21 2023-07-21 Method, device, equipment and storage medium for generating material parameter identification model Pending CN116912648A (en)

Publications (1)

Publication Number Publication Date
CN116912648A true CN116912648A (en) 2023-10-20

Family

ID=88356219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310901269.6A Pending CN116912648A (en) 2023-07-21 2023-07-21 Method, device, equipment and storage medium for generating material parameter identification model

Country Status (1)

Country Link
CN (1) CN116912648A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557574A (en) * 2024-01-12 2024-02-13 广东贝洛新材料科技有限公司 Material parameter detection method and system based on image processing
CN117557574B (en) * 2024-01-12 2024-03-15 广东贝洛新材料科技有限公司 Material parameter detection method and system based on image processing

Similar Documents

Publication Publication Date Title
CN111598164B (en) Method, device, electronic equipment and storage medium for identifying attribute of target object
CN112954450B (en) Video processing method and device, electronic equipment and storage medium
CN108235116B (en) Feature propagation method and apparatus, electronic device, and medium
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN111291885A (en) Near-infrared image generation method, network generation training method and device
EP4137991A1 (en) Pedestrian re-identification method and device
US20180357819A1 (en) Method for generating a set of annotated images
CN111654746B (en) Video frame insertion method and device, electronic equipment and storage medium
CN107273895B (en) Method for recognizing and translating real-time text of video stream of head-mounted intelligent device
EP3352138A1 (en) Method and apparatus for processing a 3d scene
AU2013273829A1 (en) Time constrained augmented reality
KR101553273B1 (en) Method and Apparatus for Providing Augmented Reality Service
CN111199541A (en) Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
EP3561776A1 (en) Method and apparatus for processing a 3d scene
CN114550177A (en) Image processing method, text recognition method and text recognition device
US20230143452A1 (en) Method and apparatus for generating image, electronic device and storage medium
CN116912648A (en) Method, device, equipment and storage medium for generating material parameter identification model
CN113408662A (en) Image recognition method and device, and training method and device of image recognition model
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN113379877B (en) Face video generation method and device, electronic equipment and storage medium
CN112819008B (en) Method, device, medium and electronic equipment for optimizing instance detection network
CN113055593B (en) Image processing method and device
US20160140748A1 (en) Automated animation for presentation of images
CN112732553A (en) Image testing method and device, electronic equipment and storage medium
CN113269730B (en) Image processing method, image processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination