CN108986199B - Virtual model processing method and device, electronic equipment and storage medium


Info

Publication number
CN108986199B
Authority
CN
China
Prior art keywords
real environment
image
illumination
virtual model
environment image
Prior art date: 2018-06-14
Legal status: Active
Application number: CN201810616186.1A
Other languages: Chinese (zh)
Other versions: CN108986199A (en)
Inventor
季佳松
林形省
Current Assignee: Beijing Xiaomi Mobile Software Co Ltd
Original Assignee: Beijing Xiaomi Mobile Software Co Ltd
Priority date: 2018-06-14
Filing date: 2018-06-14
Publication date: 2023-05-16
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810616186.1A
Publication of CN108986199A: 2018-12-11
Application granted and publication of CN108986199B: 2023-05-16

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a virtual model processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a currently captured real environment image, and obtaining from it a target image for determining illumination information; comparing differences in the pixel information of pixel points in the target image, and determining the illumination information in the real environment image according to the comparison result; and performing illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image. The embodiment enables the illumination effect on the virtual model to match the lighting conditions of the real scene, so that the virtual model blends realistically with the real scene.

Description

Virtual model processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality (AR) technology, and in particular to a virtual model processing method, apparatus, electronic device, and storage medium.
Background
With the continuous development of photography and image processing technologies, AR technology is maturing. AR technology generally uses cameras, sensors, real-time computing, and image processing to fuse computer-generated virtual models, scenes, or system cues into real scenes, thereby implementing augmented reality. To fuse a virtual model more realistically with the real scene, an AR system may apply illumination processing to the virtual model, for example generating the shadow the model would cast under illumination. A better virtual model illumination processing technique is therefore needed so that the virtual model can blend realistically with the real scene.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a virtual model processing method, apparatus, electronic device, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a virtual model processing method, the method including:
acquiring a currently captured real environment image, and obtaining, from the real environment image, a target image for determining illumination information;
comparing differences in pixel information of pixel points in the target image, and determining illumination information in the real environment image according to the comparison result;
and performing illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image.
In an optional implementation, obtaining the target image for determining illumination information from the real environment image includes:
acquiring a rendering position of the virtual model in the real environment image, and cropping the target image out of the real environment image according to the rendering position.
In an optional implementation, obtaining the target image for determining illumination information from the real environment image includes:
identifying an object in the real environment image, and cropping an image containing the object from the real environment image as the target image.
In an optional implementation, comparing the differences in pixel information of pixel points in the target image includes:
obtaining brightness values of a plurality of pixel points in the target image, and comparing the differences among the brightness values of the plurality of pixel points.
In an optional implementation, determining the illumination information in the real environment image according to the comparison result includes:
obtaining, among the pixel points, the first-type pixel points with the largest brightness values and the second-type pixel points with the smallest brightness values, and determining the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image.
In an alternative implementation, the plurality of pixel points are uniformly distributed in the target image.
In an optional implementation, performing illumination processing on the virtual model to be rendered into the real environment image according to the illumination information includes one or both of the following:
generating shadow data of the virtual model according to an illumination direction, where the illumination information includes the illumination direction;
and processing brightness data of the virtual model according to an illumination brightness value, where the illumination information includes the illumination brightness value.
According to a second aspect of embodiments of the present disclosure, there is provided a virtual model processing apparatus, the apparatus including:
an image acquisition module configured to: acquire a currently captured real environment image, and obtain, from the real environment image, a target image for determining illumination information;
a comparison module configured to: compare differences in pixel information of pixel points in the target image, and determine illumination information in the real environment image according to the comparison result;
an illumination processing module configured to: perform illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image.
In an alternative implementation, the image acquisition module includes:
a first image acquisition sub-module configured to: acquire a rendering position of the virtual model in the real environment image, and crop the target image out of the real environment image according to the rendering position.
In an alternative implementation, the image acquisition module includes:
a second image acquisition sub-module configured to: identify an object in the real environment image, and crop an image containing the object from the real environment image as the target image.
In an alternative implementation, the comparison module includes:
a brightness comparison sub-module configured to: obtain brightness values of a plurality of pixel points in the target image, and compare the differences among the brightness values of the plurality of pixel points.
In an alternative implementation, the brightness comparison sub-module includes:
a direction determination sub-module configured to: obtain, among the pixel points, the first-type pixel points with the largest brightness values and the second-type pixel points with the smallest brightness values, and determine the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image.
In an alternative implementation, the plurality of pixel points are uniformly distributed in the target image.
In an alternative implementation, the illumination processing module includes one or both of the following sub-modules:
a first illumination processing sub-module configured to: generate shadow data of the virtual model according to an illumination direction, where the illumination information includes the illumination direction;
a second illumination processing sub-module configured to: process brightness data of the virtual model according to an illumination brightness value, where the illumination information includes the illumination brightness value.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a currently captured real environment image, and obtain, from the real environment image, a target image for determining illumination information;
compare differences in pixel information of pixel points in the target image, and determine illumination information in the real environment image according to the comparison result;
and perform illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the aforementioned virtual model processing method.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In this disclosure, by comparing differences in the pixel information of pixel points in the target image, the illumination information in the real environment image can be determined from the comparison result, so that the virtual model to be rendered into the real environment image can be illuminated accordingly; the illumination effect on the virtual model matches the lighting conditions of the real scene, and the virtual model can blend realistically with the real scene.
In this disclosure, since the target image for determining the illumination information is determined according to the rendering position of the virtual model, accurate illumination information can be obtained for illuminating the virtual model.
In this disclosure, the illumination information is analyzed using a target image containing a single object, which reduces interference from the pixel information of other objects and can improve the accuracy of the determined illumination information.
In this disclosure, the brightness values of a plurality of pixel points in the target image can be obtained and their differences compared, which can increase the processing speed of the virtual model.
In this disclosure, the illumination information can be analyzed using selected pixel points uniformly distributed in the target image, which can increase the processing speed of the virtual model.
In this disclosure, the first-type pixel points with the largest brightness values and the second-type pixel points with the smallest brightness values are obtained, and the direction pointing from the first-type pixel points to the second-type pixel points is determined as the illumination direction in the real environment image; this approach is easy to implement, accurate, and fast.
In this disclosure, the virtual model to be rendered into the real environment image can be illuminated according to the illumination information: for example, a virtual model shadow image matching the real environment can be obtained from the illumination direction, or the brightness values of pixel points in the target image can be obtained, the illumination brightness value for the virtual model determined from them, and the model's brightness data processed accordingly, so that the virtual model blends better with the real environment.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1A is a schematic diagram of an AR scene according to an exemplary embodiment of the present disclosure.
FIG. 1B is a flowchart illustrating a virtual model processing method according to an exemplary embodiment of the present disclosure.
Fig. 1C is a schematic diagram of a target image according to an exemplary embodiment of the present disclosure.
Fig. 1D is a schematic diagram of another target image shown in accordance with an exemplary embodiment of the present disclosure.
FIG. 1E is a diagram illustrating virtual model processing according to an example embodiment of the present disclosure.
Fig. 2 is a block diagram of a virtual model processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 3 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 4 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of an apparatus for virtual model processing according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
As shown in Fig. 1A, the present disclosure illustrates an AR scene according to an exemplary embodiment. The electronic device in Fig. 1A has an AR function: when it captures a picture of the real environment, it can generate a virtual model and render it into that picture. Illumination must be considered when processing the virtual model, so that when the model is rendered into the real environment picture, the shadow it would cast under illumination can be displayed; the virtual model then blends more realistically with the real scene, improving the rendering effect and the sense of immersion.
Existing AR development tools provide functions for illuminating a virtual model, but these generally simulate real-world illumination with a fixed virtual light source. When a shadow image produced this way is fused into the real environment picture, the virtual illumination may be inconsistent with the real illumination; for example, the direction of shadows cast by objects under real lighting may differ from the shadow direction of the virtual model, so the model fuses poorly with the real environment.
Based on this, embodiments of the present disclosure provide a virtual model processing scheme: by comparing differences in the pixel information of pixel points in a target image, the illumination information in the real environment image can be determined from the comparison result, and the virtual model to be rendered into the real environment image can then be illuminated accordingly. The illumination effect on the virtual model thus matches the lighting conditions of the real scene, and the model blends realistically with it. The embodiments of the present disclosure are described in detail below.
As shown in Fig. 1B, Fig. 1B is a flowchart of a virtual model processing method according to an exemplary embodiment of the present disclosure. The method may be applied to an electronic device and includes the following steps:
In step 101, a currently captured real environment image is acquired, and a target image for determining illumination information is obtained from the real environment image.
In step 102, differences in the pixel information of pixel points in the target image are compared, and the illumination information in the real environment image is determined according to the comparison result.
In step 103, illumination processing is performed, according to the illumination information, on the virtual model to be rendered into the real environment image.
The embodiments of the present disclosure can be applied to electronic devices with AR functionality, such as smartphones, smart glasses, smart helmets, or smart televisions. Such devices may be configured with a camera module, a display screen, a processing module, and so on; the display screen shows the real environment picture captured by the camera module. In general, the camera module produces continuous image frames while shooting, and the electronic device can obtain an image of the real environment picture, referred to in this embodiment as the real environment image.
Because of illumination from a light source, the surface of an object facing the light source appears differently in an image from a surface facing away from it: the surface facing the light source is brighter and more colorful, while the surface facing away is darker and flatter in color, and shadows may also be produced. A shadow in this embodiment is the darker area formed where an opaque object blocks the propagation of light. The same object therefore shows differences in pixel information across the image due to illumination, and embodiments of the present disclosure determine the illumination information by comparing differences in the pixel information of pixel points in the image.
In some examples, after the electronic device acquires the real environment image, the whole real environment image may be used as the target image for determining the illumination information. In other examples, considering that the real environment image may carry a large amount of data and contain much content, a part of it may be cropped out as the target image to increase the processing speed. Two alternative implementations are described next.
First: determining the target image from the rendering position.
In this embodiment of the disclosure, the rendering position of the virtual model in the real environment image is acquired, and the target image is cropped out of the real environment image according to the rendering position.
When an AR system renders a virtual model, the rendering position of the model in the real environment image is generally determined by the requirements of the application. For example, in a game scene, a virtual model may be rendered relative to a character in the real environment and move with it, so its rendering position changes as the character's position in the real environment image changes. The illumination processing of the virtual model therefore needs to match the illumination information at the rendering position in the real environment image, and the target image can be cropped out of the real environment image at that position; how large a target image to crop can be configured flexibly as needed. With this processing, since the target image for determining the illumination information is determined by the rendering position of the virtual model, accurate illumination information can be obtained for illuminating the model.
For example, some real environments are indoors, where a room may contain several light sources. With multiple light sources present, objects at different positions in the image are affected by different sources, and their corresponding illumination information may differ. Suppose the real environment image contains a light source on the left and another on the right: object A on the left side of the image is lit by the left source and object B on the right side by the right source. If the virtual model is associated with object B, its rendering position is determined by the position of object B, so the target image can be cropped at the model's rendering position and the illumination information determined accurately from it.
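As a concrete illustration, the following is a minimal sketch of this cropping step, assuming the real environment image is held as a numpy array; the function name, the 128-pixel crop size, and the pixel-coordinate convention are illustrative assumptions rather than values from this disclosure.

```python
# A minimal sketch of the first approach: crop the target image around the
# virtual model's rendering position. Assumes an H x W x 3 numpy array.
import numpy as np

def crop_around_render_position(frame, render_x, render_y, crop_w=128, crop_h=128):
    h, w = frame.shape[:2]
    # Clamp the window so the crop stays inside the image bounds.
    x0 = max(0, min(render_x - crop_w // 2, w - crop_w))
    y0 = max(0, min(render_y - crop_h // 2, h - crop_h))
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# Example: a 128 x 128 target image centred on the model's rendering position.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
target = crop_around_render_position(frame, render_x=900, render_y=400)
```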
Second: determining the target image from an object in the real environment image.
In this embodiment of the disclosure, an object in the real environment image can be identified, and an image containing that object cropped from the real environment image as the target image.
Because of illumination from a light source, the pixel information of a single object varies across the image, while the real environment image may contain many objects. This embodiment therefore analyzes the illumination information using a target image containing a single object, which reduces interference from the pixel information of other objects. In addition, since a single object may have a uniform color, analyzing a target image of a single object can also improve the accuracy of the illumination information.
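This disclosure does not prescribe a particular recognition method; as one hedged illustration, the sketch below isolates the largest object in the frame with Otsu thresholding and contour detection, assuming OpenCV and a BGR numpy frame. A real deployment might substitute any object detector.

```python
# A sketch of the second approach. Otsu thresholding plus the largest contour
# is an illustrative stand-in for object identification; the disclosure does
# not specify a detector. Uses the OpenCV 4.x findContours signature.
import cv2
import numpy as np

def crop_largest_object(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame_bgr  # nothing segmented: fall back to the whole frame
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return frame_bgr[y:y + h, x:x + w]
```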
Through the above processing, embodiments of the present disclosure can determine the illumination information in the real environment image by comparing differences in the pixel information of pixel points in the target image. The pixel information includes color values, for example RGB values in the trichromatic (RGB color model) format, YUV values in the YUV (luminance and chrominance) format, and other quantities computed from the RGB or YUV values. As noted above, illumination makes the surfaces of an object facing toward and away from the light source appear differently, so comparing differences in the pixel information of pixel points in the target image allows the illumination information to be determined; it may include the light source position, the illumination direction, the illumination brightness, and so on.
To increase the processing speed of the virtual model, the pixel information in this embodiment may include a brightness value. When comparing differences in the pixel information of pixel points in the target image, the brightness values of a plurality of pixel points in the target image are obtained and their differences compared. Under illumination, a surface facing and near the light source is brighter, while a surface facing away from and farther from the light source is darker, or the side of an object facing away from the light source casts a shadow; the illumination direction can therefore be determined accurately from brightness values, and the virtual model illuminated according to that direction.
An image is composed of a very large number of pixel points, so to increase the processing speed, this embodiment may select the pixel information of a subset of pixel points in the target image for analysis. Specifically, the brightness values of a plurality of pixel points in the target image are obtained and their differences compared. The selection rule can be configured flexibly as needed: for example, pixel points uniformly distributed across the target image may be selected at a set interval, or several pixel points may be selected at each of the four vertices and at the center of the target image, and so on.
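A minimal sketch of the uniform-grid variant follows, assuming an RGB numpy array and using Rec.601 luma, Y = 0.299R + 0.587G + 0.114B, as the brightness value; the 16-pixel grid spacing is an illustrative choice, not a value from this disclosure.

```python
# Sample a uniform grid of pixel points and compute a brightness value per
# sample. Returns the sample coordinates and their luma values.
import numpy as np

def sample_luminance_grid(target_rgb, step=16):
    ys = np.arange(step // 2, target_rgb.shape[0], step)
    xs = np.arange(step // 2, target_rgb.shape[1], step)
    grid_y, grid_x = np.meshgrid(ys, xs, indexing="ij")
    samples = target_rgb[grid_y, grid_x].astype(np.float32)
    # Rec.601 luma: Y = 0.299 R + 0.587 G + 0.114 B
    luma = samples @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return grid_x, grid_y, luma
```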
As shown in Fig. 1C, a schematic diagram of a target image according to an exemplary embodiment of the disclosure, the solid dots represent a plurality of uniformly distributed pixel points selected from the target image; these selected points can be used to analyze the illumination information. The illumination direction can be determined from the brightness information of the selected points: pixel points with higher brightness values are closer to the light source and those with lower brightness values are farther from it, so the illumination direction points from the positions of the brighter pixel points toward the darker ones.
In an optional implementation, determining the illumination information in the real environment image according to the comparison result may include: obtaining, among the pixel points, the first-type pixel points with the largest brightness values and the second-type pixel points with the smallest brightness values, and determining the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image. Fig. 1D shows the illumination direction analyzed on the basis of the diagram in Fig. 1C, i.e., the direction indicated by the arrowed line.
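The sketch below applies this rule to the sampled grid, pointing from the brightest samples toward the darkest; taking the top and bottom 5% of samples as the two pixel-point types and using their centroids are illustrative assumptions.

```python
# Estimate the illumination direction from the grid produced by
# sample_luminance_grid: first-type (brightest) -> second-type (darkest).
import numpy as np

def estimate_light_direction(grid_x, grid_y, luma, fraction=0.05):
    n = max(1, int(luma.size * fraction))
    order = np.argsort(luma, axis=None)        # darkest ... brightest
    dark_idx, bright_idx = order[:n], order[-n:]
    xs, ys = grid_x.ravel(), grid_y.ravel()
    bright = np.array([xs[bright_idx].mean(), ys[bright_idx].mean()])
    dark = np.array([xs[dark_idx].mean(), ys[dark_idx].mean()])
    d = dark - bright                          # points from bright to dark
    norm = np.linalg.norm(d)
    return d / norm if norm > 0 else d         # unit vector in image coords
```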
Through the above flow, the electronic device can determine the illumination information in the real environment image and, on that basis, illuminate the virtual model to be rendered into it. For example, a virtual model shadow image matching the real environment can be obtained from the illumination direction; or the brightness values of pixel points in the target image can be obtained, the illumination brightness value for the virtual model determined from them, and the model's brightness data processed accordingly, for instance brightening the model surfaces nearer the light source and darkening those farther away. Fig. 1E is a schematic diagram of virtual model processing according to an exemplary embodiment of the disclosure, taking a sphere as the virtual model: after the sphere is illuminated according to the illumination direction, its shadow direction matches the illumination direction and the surface facing the light source is brighter. In practical applications, the illumination processing may use an illumination algorithm provided by an existing AR development tool, adjusting the algorithm's parameters according to the illumination information obtained by this embodiment and then running the adjusted algorithm on the virtual model.
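Since the disclosure defers the actual shading to the algorithms of existing AR tools, the following stand-alone sketch only illustrates how the estimated direction could drive the two effects named above, a shadow offset and a brightness term; the Lambert model and the shadow_length and ambient parameters are simplified, illustrative assumptions.

```python
# Simplified stand-ins for the illumination processing step.
import numpy as np

def shadow_offset(light_dir_2d, shadow_length=50.0):
    # The shadow falls along the bright-to-dark direction, so the shadow
    # anchor is displaced from the model's base along that vector.
    return shadow_length * np.asarray(light_dir_2d, dtype=np.float32)

def lambert_brightness(surface_normal, light_dir, ambient=0.2):
    # Classic Lambert shading: surfaces facing the light are brighter,
    # surfaces facing away fall back to the ambient level.
    n = np.asarray(surface_normal, dtype=np.float32)
    n /= np.linalg.norm(n)
    l = -np.asarray(light_dir, dtype=np.float32)  # toward the light source
    l /= np.linalg.norm(l)
    return ambient + (1.0 - ambient) * max(0.0, float(n @ l))
```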
It should be noted that the electronic device shoots the real environment picture in real time, generating image frames at a set frequency, and the picture may change continuously while shooting. When the scheme of this embodiment is applied to the electronic device, the real environment image can therefore be acquired periodically at a set time interval, the differences in the pixel information of pixel points in the target image compared periodically, and the illumination information in the real environment image determined from each comparison result. Changes in the illumination information can thus be tracked as the captured picture changes, so the illumination effect on the virtual model stays matched to the lighting conditions of the real scene and the model can blend realistically with it.
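A sketch of this timed re-estimation follows, reusing the helper functions sketched above; get_frame and update_light are hypothetical callables supplied by the host AR application, and the 0.5-second interval is an illustrative choice.

```python
# Periodically re-estimate the illumination direction as the captured real
# environment picture changes. Relies on sample_luminance_grid and
# estimate_light_direction from the earlier sketches.
import time

def illumination_update_loop(get_frame, update_light, interval_s=0.5):
    while True:
        target = get_frame()  # latest (cropped) target image, RGB numpy array
        gx, gy, luma = sample_luminance_grid(target)
        update_light(estimate_light_direction(gx, gy, luma))
        time.sleep(interval_s)
```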
Corresponding to the foregoing embodiments of the virtual model processing method, the present disclosure further provides embodiments of the virtual model processing apparatus and an electronic device to which the virtual model processing apparatus is applied.
As shown in Fig. 2, Fig. 2 is a block diagram of a virtual model processing apparatus according to an exemplary embodiment of the present disclosure. The apparatus includes:
an image acquisition module 21 configured to: acquire a currently captured real environment image, and obtain, from the real environment image, a target image for determining illumination information;
a comparison module 22 configured to: compare differences in pixel information of pixel points in the target image, and determine illumination information in the real environment image according to the comparison result;
an illumination processing module 23 configured to: perform illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image.
According to the above embodiment, by comparing differences in the pixel information of pixel points in the target image, the illumination information in the real environment image can be determined from the comparison result, so the virtual model to be rendered into the real environment image can be illuminated accordingly; the illumination effect on the virtual model matches the lighting conditions of the real scene, and the virtual model can blend realistically with it.
As shown in Fig. 3, Fig. 3 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 2, the image acquisition module 21 includes:
a first image acquisition sub-module 211 configured to: acquire a rendering position of the virtual model in the real environment image, and crop the target image out of the real environment image according to the rendering position.
As can be seen from the above embodiment, since the target image for determining the illumination information is determined according to the rendering position of the virtual model, accurate illumination information can be obtained for illuminating the virtual model.
As shown in Fig. 4, Fig. 4 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 2, the image acquisition module 21 includes:
a second image acquisition sub-module 212 configured to: identify an object in the real environment image, and crop an image containing the object from the real environment image as the target image.
As can be seen from the above embodiment, analyzing the illumination information with a target image containing a single object reduces interference from the pixel information of other objects and can improve the accuracy of the determined illumination information.
As shown in Fig. 5, Fig. 5 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 2, the comparison module 22 includes:
a brightness comparison sub-module 221 configured to: obtain brightness values of a plurality of pixel points in the target image, and compare the differences among the brightness values of the plurality of pixel points.
As can be seen from the above embodiment, the brightness values of a plurality of pixel points in the target image can be obtained and their differences compared, which can increase the processing speed of the virtual model.
As shown in Fig. 6, Fig. 6 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 5, the brightness comparison sub-module 221 includes:
a direction determination sub-module 2211 configured to: obtain, among the pixel points, the first-type pixel points with the largest brightness values and the second-type pixel points with the smallest brightness values, and determine the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image.
As can be seen from the above embodiment, obtaining the first-type pixel points with the largest brightness values and the second-type pixel points with the smallest brightness values, and determining the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image, is easy to implement, accurate, and fast.
In an alternative implementation, the plurality of pixel points are uniformly distributed in the target image.
As can be seen from the above, selecting a plurality of uniformly distributed pixel points from the target image and analyzing the illumination information with the selected points can increase the processing speed of the virtual model.
As shown in Fig. 7, Fig. 7 is a block diagram of another virtual model processing apparatus according to an exemplary embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 2, the illumination processing module 23 includes one or both of the following sub-modules:
a first illumination processing sub-module 231 configured to: generate shadow data of the virtual model according to an illumination direction, where the illumination information includes the illumination direction;
a second illumination processing sub-module 232 configured to: process brightness data of the virtual model according to an illumination brightness value, where the illumination information includes the illumination brightness value.
As can be seen from the above embodiments, the virtual model to be rendered into the real environment image can be illuminated according to the illumination information: for example, a virtual model shadow image matching the real environment can be obtained from the illumination direction, or the brightness values of pixel points in the target image can be obtained, the illumination brightness value for the virtual model determined from them, and the model's brightness data processed accordingly, so that the virtual model blends better with the real environment.
Correspondingly, the embodiment of the disclosure also discloses an electronic device, which comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a currently captured real environment image, and obtain, from the real environment image, a target image for determining illumination information;
compare differences in pixel information of pixel points in the target image, and determine illumination information in the real environment image according to the comparison result;
and perform illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image.
Accordingly, the disclosed embodiments also disclose a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the aforementioned virtual model processing method.
The implementation processes of the functions and roles of each module in the above apparatus are described in detail in the implementation of the corresponding steps of the above method, and are not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution, which those of ordinary skill in the art can understand and implement without creative effort.
Fig. 8 is a schematic structural diagram of an apparatus for virtual model processing according to an exemplary embodiment.
As shown in Fig. 8, an apparatus 800 according to an exemplary embodiment may be an AR device such as smart glasses, a smart helmet, or the like.
Referring to fig. 8, apparatus 800 may include one or more of the following components: a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, an input/output (I/O) interface 806, a sensor component 807, and a communication component 808.
The processing component 801 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 801 may include one or more processors 809 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 801 may include one or more modules that facilitate interactions between the processing component 801 and other components. For example, processing component 801 may include multimedia modules to facilitate interactions between multimedia component 804 and processing component 801.
Memory 802 is configured to store various types of data to support operations at apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 802 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 803 provides power to the various components of the apparatus 800. The power components 803 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 804 includes a screen that provides an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 804 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operating mode, such as a photographing mode or a video mode. Each of the front and rear cameras may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 802 or transmitted via the communication component 808. In some embodiments, the audio component 805 further comprises a speaker for outputting audio signals.
The I/O interface 806 provides an interface between the processing component 801 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor assembly 807 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 807 may detect the open/closed state of the apparatus 800 and the relative positioning of components such as its display and keypad; it may also detect a change in position of the apparatus 800 or one of its components, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and changes in its temperature. The sensor assembly 807 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 807 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 807 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 808 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 808 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 808 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 802, including instructions executable by processor 809 of apparatus 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Wherein the instructions in the storage medium, when executed by the processor, enable the apparatus 800 to perform the virtual model processing method described above, comprising:
acquiring a currently captured real environment image, and obtaining, from the real environment image, a target image for determining illumination information;
comparing differences in pixel information of pixel points in the target image, and determining illumination information in the real environment image according to the comparison result;
and performing illumination processing, according to the illumination information, on a virtual model to be rendered into the real environment image.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present disclosure.

Claims (10)

1. A method of virtual model processing, the method comprising:
acquiring a currently captured real environment image, and obtaining a target image for determining illumination information according to a rendering position of a virtual model in the real environment image and/or by identifying an object in the real environment image;
comparing differences in pixel information of pixel points in the target image, and determining illumination information in the real environment image according to the comparison result; wherein the pixel information comprises a brightness value and the illumination information comprises an illumination direction;
and performing illumination processing, according to the illumination information, on the virtual model to be rendered into the real environment image.
2. The method of claim 1, wherein obtaining the target image for determining illumination information from the real environment image comprises one or both of the following:
acquiring a rendering position of the virtual model in the real environment image, and cropping the target image out of the real environment image according to the rendering position; and/or,
identifying an object in the real environment image, and cropping an image containing the object from the real environment image as the target image.
3. The method of claim 1, wherein comparing differences in pixel information of pixel points in the target image comprises:
obtaining brightness values of a plurality of pixel points in the target image, and comparing the differences among the brightness values of the plurality of pixel points.
4. The method according to claim 1 or 3, wherein determining the illumination information in the real environment image according to the comparison result comprises:
obtaining, among the pixel points, first-type pixel points with the largest brightness values and second-type pixel points with the smallest brightness values, and determining the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image.
5. A virtual model processing apparatus, the apparatus comprising:
an image acquisition module configured to: acquire a currently captured real environment image, and obtain a target image for determining illumination information according to a rendering position of a virtual model in the real environment image and/or by identifying an object in the real environment image;
a comparison module configured to: compare differences in pixel information of pixel points in the target image, and determine illumination information in the real environment image according to the comparison result; wherein the pixel information comprises a brightness value and the illumination information comprises an illumination direction;
an illumination processing module configured to: perform illumination processing, according to the illumination information, on the virtual model to be rendered into the real environment image.
6. The apparatus of claim 5, wherein the image acquisition module comprises a first image acquisition sub-module and/or a second image acquisition sub-module;
the first image acquisition sub-module configured to: acquire a rendering position of the virtual model in the real environment image, and crop the target image out of the real environment image according to the rendering position;
the second image acquisition sub-module configured to: identify an object in the real environment image, and crop an image containing the object from the real environment image as the target image.
7. The apparatus of claim 5, wherein the comparison module comprises:
a brightness comparison sub-module configured to: obtain brightness values of a plurality of pixel points in the target image, and compare the differences among the brightness values of the plurality of pixel points.
8. The apparatus of claim 5 or 7, wherein the comparison module further comprises:
a direction determination sub-module configured to: obtain, among the pixel points, first-type pixel points with the largest brightness values and second-type pixel points with the smallest brightness values, and determine the direction pointing from the first-type pixel points to the second-type pixel points as the illumination direction in the real environment image.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a currently captured real environment image, and obtain a target image for determining illumination information according to a rendering position of a virtual model in the real environment image and/or by identifying an object in the real environment image;
compare differences in pixel information of pixel points in the target image, and determine illumination information in the real environment image according to the comparison result; wherein the pixel information comprises a brightness value and the illumination information comprises an illumination direction;
and perform illumination processing, according to the illumination information, on the virtual model to be rendered into the real environment image.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
CN201810616186.1A 2018-06-14 2018-06-14 Virtual model processing method and device, electronic equipment and storage medium Active CN108986199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810616186.1A CN108986199B (en) 2018-06-14 2018-06-14 Virtual model processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108986199A CN108986199A (en) 2018-12-11
CN108986199B (en) 2023-05-16

Family

ID=64540523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810616186.1A Active CN108986199B (en) 2018-06-14 2018-06-14 Virtual model processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108986199B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325798B (en) * 2018-12-13 2023-08-18 浙江宇视科技有限公司 Camera model correction method, device, AR implementation equipment and readable storage medium
CN110021071B (en) * 2018-12-25 2024-03-12 创新先进技术有限公司 Rendering method, device and equipment in augmented reality application
CN110009723B (en) * 2019-03-25 2023-01-31 创新先进技术有限公司 Reconstruction method and device of ambient light source
CN110084891B (en) * 2019-04-16 2023-02-17 淮南师范学院 Color adjusting method of AR glasses and AR glasses
CN110033423B (en) * 2019-04-16 2020-08-28 北京字节跳动网络技术有限公司 Method and apparatus for processing image
CN110310224B (en) * 2019-07-04 2023-05-30 北京字节跳动网络技术有限公司 Light effect rendering method and device
CN110458826B (en) * 2019-08-09 2022-06-03 百度在线网络技术(北京)有限公司 Ambient brightness detection method and device
CN111429543B (en) * 2020-02-28 2020-10-30 苏州叠纸网络科技股份有限公司 Material generation method and device, electronic equipment and medium
CN111273885A (en) * 2020-02-28 2020-06-12 维沃移动通信有限公司 AR image display method and AR equipment
CN111399654B (en) * 2020-03-25 2022-08-12 Oppo广东移动通信有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN111833423A (en) * 2020-06-30 2020-10-27 北京市商汤科技开发有限公司 Presentation method, presentation device, presentation equipment and computer-readable storage medium
CN112785672B (en) * 2021-01-19 2022-07-05 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112967467B (en) * 2021-02-24 2022-07-29 九江学院 Cultural relic anti-theft method, system, mobile terminal and storage medium
CN114979457B (en) * 2021-02-26 2023-04-07 华为技术有限公司 Image processing method and related device
CN113206971B (en) * 2021-04-13 2023-10-24 聚好看科技股份有限公司 Image processing method and display device
CN113487662B (en) * 2021-07-02 2024-06-11 广州博冠信息科技有限公司 Picture display method and device, electronic equipment and storage medium
CN117813630A (en) * 2022-01-29 2024-04-02 华为技术有限公司 Virtual image processing method and device
CN115631291B (en) * 2022-11-18 2023-03-10 如你所视(北京)科技有限公司 Real-time relighting method and apparatus, device, and medium for augmented reality
CN117354438A (en) * 2023-10-31 2024-01-05 神力视界(深圳)文化科技有限公司 Light intensity processing method, light intensity processing device, electronic equipment and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1720131B1 (en) * 2005-05-03 2009-04-08 Seac02 S.r.l. An augmented reality system with real marker object identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018045759A1 (en) * 2016-09-07 2018-03-15 中兴通讯股份有限公司 Method and device for lighting rendering in augmented reality, and mobile terminal
CN106981087A * 2017-04-05 2017-07-25 杭州乐见科技有限公司 Lighting effect rendering method and device
CN107134005A * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, device, storage medium, processor and terminal
CN107909638A * 2017-11-15 2018-04-13 网易(杭州)网络有限公司 Rendering method, medium, system and electronic device for virtual object

Similar Documents

Publication Publication Date Title
CN108986199B (en) Virtual model processing method and device, electronic equipment and storage medium
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
EP3010226A2 (en) Method and apparatus for obtaining photograph
CN108154465B (en) Image processing method and device
CN106131441B (en) Photographing method and device and electronic equipment
US11770497B2 (en) Method and device for processing video, and storage medium
US11310443B2 (en) Video processing method, apparatus and storage medium
CN107015648B (en) Picture processing method and device
CN108154466B (en) Image processing method and device
CN108122195B (en) Picture processing method and device
CN109472738B (en) Image illumination correction method and device, electronic equipment and storage medium
CN112219224A (en) Image processing method and device, electronic equipment and storage medium
CN109167921B (en) Shooting method, shooting device, shooting terminal and storage medium
CN108986803B (en) Scene control method and device, electronic equipment and readable storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
US11252341B2 (en) Method and device for shooting image, and storage medium
EP3848894A1 (en) Method and device for segmenting image, and storage medium
CN113315903B (en) Image acquisition method and device, electronic equipment and storage medium
CN114390189A (en) Image processing method, device, storage medium and mobile terminal
CN112752010B (en) Shooting method, device and medium
CN114051302B (en) Light control method, device, electronic equipment and storage medium
CN110458962B (en) Image processing method and device, electronic equipment and storage medium
CN118102080A (en) Image shooting method, device, terminal and storage medium
CN117616777A (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant