WO2022048373A1 - Image processing method, mobile terminal and storage medium - Google Patents

Image processing method, mobile terminal and storage medium

Info

Publication number
WO2022048373A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
image
mobile terminal
positioning information
ambient light
Prior art date
Application number
PCT/CN2021/110203
Other languages
English (en)
French (fr)
Inventor
邹韬
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21863441.8A priority Critical patent/EP4195664A4/en
Priority to US18/043,445 priority patent/US20230334789A1/en
Publication of WO2022048373A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/257 - Colour aspects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/71 - Circuitry for evaluating the brightness variation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to an image processing method, a mobile terminal, and a storage medium.
  • Augmented Reality (AR) technology superimposes virtual content, such as a 3D model, onto the real environment: the 3D model is loaded into the real environment, and then the real environment (including real people, animals, or objects) and the 3D model are photographed together.
  • The embodiments of the present application provide an image processing method, a mobile terminal, and a storage medium, so as to provide an image processing method that can complete virtual-real combined image shooting while keeping the brightness and/or shadow of the virtual model consistent with the brightness and/or shadow of the real environment, thereby avoiding image distortion.
  • an embodiment of the present application provides an image processing method, including:
  • In response to the detected first operation, a preview interface is displayed; wherein the preview interface includes a real environment picture; specifically, the real environment picture is a picture captured by the current camera.
  • In response to the detected second operation, the first virtual object is determined; specifically, the second operation may include the user's operation of selecting the shooting mode and the operation of selecting the first virtual object; for example, the user may select the virtual-real fusion shooting mode on the preview interface to enter the virtual-real fusion shooting mode; at this time, in the virtual-real fusion shooting mode, the preview interface can display at least one candidate virtual object for the user to select, and the user can select any one of the candidate virtual objects to determine the first virtual object.
  • The positioning information of the first virtual object is acquired, and the first virtual object is displayed in the preview interface based on the positioning information; wherein the positioning information may include size information and angle information of the first virtual object, and position information of the first virtual object in the preview interface; the positioning information may be the default position information, size information, and angle information of the first virtual object, or may be the position information, size information, and angle information adjusted by the user for the first virtual object.
  • Specifically, a virtual object can be adjusted by dragging, zooming, and rotating.
  • The first image and the second image are synthesized based on the positioning information to obtain a third image; wherein the first image includes a real environment picture, the second image includes a second virtual object corresponding to the positioning information, the second virtual object is generated by rendering the first virtual object based on the ambient light information, and the ambient light information corresponds to the real environment picture.
  • The first image captures a picture of the real environment; therefore, the first image only includes the real environment picture and does not include the second virtual object. The second image captures the second virtual object, so the second image does not include the real environment picture; that is, the first image and the second image are separate.
  • the third image may be obtained by compositing (eg, superimposing) the first image and the second image.
  • the third image may be a composite image for viewing by the user.
  • the brightness and/or shadow of the second virtual object is consistent with the brightness and/or shadow of the real environment, thereby improving the viewing experience of the user.
  • the second virtual object may be obtained by rendering the brightness and/or shadow of the first virtual object, and the rendering of the brightness and/or shadow may be performed based on ambient light information.
  • combining the first image and the second image based on the positioning information to obtain the third image includes:
  • a first image is generated; specifically, the first image does not include the second virtual object.
  • Ambient light information corresponding to the first image is obtained based on the first image; specifically, image analysis may be performed on the first image to obtain the ambient light information; wherein the ambient light information may include an illumination angle and an illumination intensity.
  • the first virtual object is rendered based on the ambient light information to obtain the second virtual object; specifically, the rendering may include shading and/or shadow rendering.
  • the second image is generated based on the second virtual object; specifically, the second image does not include a real environment picture.
  • The second image may be generated by taking a snapshot or a screenshot, which is not limited in this embodiment of the present application.
  • the first image and the second image are synthesized based on the positioning information to obtain a third image.
  • combining the first image and the second image based on the positioning information to obtain the third image includes:
  • the first virtual object is rendered based on the ambient light information to obtain a second virtual object.
  • the first image and the second image are generated.
  • the first image and the second image are synthesized based on the positioning information to obtain a third image.
  • acquiring the positioning information of the first virtual object includes:
  • the positioning information of the first virtual object in the preview interface is determined.
  • the positioning information may include size information, angle information of the first virtual object, and coordinate position information of the first virtual object in the preview interface.
  • the positioning information includes default positioning information of the first virtual object.
  • determining the first virtual object includes:
  • At least one candidate virtual object is displayed.
  • a first virtual object is determined among the candidate virtual objects.
  • displaying at least one candidate virtual object includes:
  • In response to the detected second operation, the type of the real environment in the preview interface is identified to obtain the environment type; specifically, the environment type is used to identify the theme of the current real environment; for example, the environment type may include a person type, an animal type, an appliance type, a building type, etc.
  • the virtual object database can be queried based on the environment type, and virtual objects can be recommended for display, avoiding multi-page display caused by too many virtual objects, improving the matching degree between virtual objects and the real environment, and improving user experience.
  • Generating the second virtual object by rendering the first virtual object based on the ambient light information includes:
  • The first virtual object and the ambient light information are input into a preset rendering model, so that the preset rendering model renders the brightness and/or shadow of the first virtual object to obtain the second virtual object, wherein the second virtual object includes brightness and/or shadow.
  • An embodiment of the present application further provides an image processing apparatus, including:
  • a preview module configured to display a preview interface in response to the detected first operation; wherein the preview interface includes a real environment picture;
  • a selection module for determining the first virtual object in response to the detected second operation
  • an acquisition module configured to acquire the positioning information of the first virtual object, and display the first virtual object in the preview interface based on the positioning information
  • a synthesis module for synthesizing the first image and the second image based on the positioning information to obtain a third image; wherein the first image includes a real environment picture, the second image includes a second virtual object corresponding to the positioning information, the second virtual object is generated by rendering the first virtual object based on ambient light information, and the ambient light information corresponds to a real environment picture.
  • the above-mentioned synthesis module includes:
  • a first generating unit for generating a first image in response to the detected third operation
  • an identification unit configured to obtain ambient light information corresponding to the first image based on the first image
  • a rendering unit configured to render the first virtual object based on ambient light information to obtain a second virtual object
  • a second generating unit configured to generate a second image based on the second virtual object
  • the synthesizing unit is used for synthesizing the first image and the second image based on the positioning information to obtain a third image.
  • the above-mentioned synthesis module includes:
  • an acquisition unit used for acquiring ambient light information corresponding to the real environment picture
  • a rendering unit configured to render the first virtual object based on ambient light information to obtain a second virtual object
  • a generating unit for generating a first image and a second image in response to the detected third operation
  • the synthesizing unit is used for synthesizing the first image and the second image based on the positioning information to obtain a third image.
  • the obtaining module is further configured to determine the positioning information of the first virtual object in the preview interface in response to the detected fourth operation.
  • the positioning information includes default positioning information of the first virtual object.
  • the above-mentioned selection module includes:
  • a display unit for displaying at least one candidate virtual object in response to the detected second operation
  • a selection unit configured to determine a first virtual object among the candidate virtual objects in response to the detected fifth operation.
  • the above-mentioned display unit is further configured to identify the type of the real environment in the preview interface in response to the detected second operation to obtain the environment type; and recommend and display candidate virtual objects based on the environment type.
  • the above-mentioned synthesis module is further configured to input the first virtual object and ambient light information into the preset rendering model, so that the preset rendering model renders the brightness and/or shadow of the first virtual object to obtain a second virtual object, wherein the second virtual object includes brightness and/or shadow.
  • an embodiment of the present application provides a mobile terminal, including:
  • a memory, wherein the memory is used to store computer program code, and the computer program code includes instructions;
  • when the mobile terminal reads the instructions from the memory, the mobile terminal performs the following steps:
  • a preview interface is displayed; wherein, the preview interface includes a real environment picture;
  • the first image and the second image are synthesized based on the positioning information to obtain a third image.
  • the positioning information of the first virtual object in the preview interface is determined.
  • the positioning information includes default positioning information of the first virtual object.
  • the first virtual object is determined among candidate virtual objects.
  • when the instructions are executed by the above-mentioned mobile terminal, the step of causing the above-mentioned mobile terminal to display at least one candidate virtual object in response to the detected second operation includes:
  • when the instructions are executed by the above-mentioned mobile terminal, the step of causing the above-mentioned mobile terminal to generate the second virtual object by rendering the first virtual object based on the ambient light information includes:
  • inputting the first virtual object and the ambient light information into a preset rendering model, so that the preset rendering model renders the brightness and/or shadow of the first virtual object to obtain the second virtual object, wherein the second virtual object includes brightness and/or shadow.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, causes the computer to execute the method described in the first aspect.
  • an embodiment of the present application provides a computer program, which is used to execute the method described in the first aspect when the computer program is executed by a computer.
  • the program in the fifth aspect may be stored in whole or in part on a storage medium packaged with the processor, or may be stored in whole or in part in a memory not packaged with the processor.
  • FIG. 1 is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of an embodiment of an image processing method provided by the present application.
  • FIG. 3A is a schematic diagram of a shooting mode selection interface provided by an embodiment of the present application.
  • FIG. 3B is a schematic diagram of an interface for displaying candidate virtual objects provided by an embodiment of the present application.
  • FIG. 3C is a schematic diagram of a virtual object selection interface provided by an embodiment of the present application.
  • FIG. 3D is a schematic diagram of a virtual object rendering effect provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of image synthesis provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of another embodiment of an image processing method provided by the present application.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • A camera is usually included in a current mobile terminal, and a user can capture images at any time through the camera.
  • With AR technology, people can combine real and virtual scenes.
  • With AR technology applied to camera shooting, users can capture real objects and virtual objects in one image.
  • a mobile terminal may also be referred to as terminal equipment, user equipment (UE), access terminal, subscriber unit, subscriber station, mobile station, mobile station, remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent, or user device.
  • FIG. 1 is a schematic structural diagram of a mobile terminal 100 according to an embodiment of the present application.
  • the mobile terminal 100 may include a camera module 110 , a display screen 120 , a processor 130 , an I/O subsystem 140 , a memory 150 and other input devices 160 .
  • the camera module 110 is used for capturing images
  • the display screen 120 is used for displaying images and an operation interface.
  • The camera module 110 includes at least one camera 111; wherein, if the camera module 110 includes only one camera 111, the camera 111 can be front-facing or rear-facing; if the camera module 110 includes multiple cameras 111, the multiple cameras 111 can be located on the same side of the mobile terminal 100, or can be arbitrarily distributed on both sides of the mobile terminal 100. It should be noted that, if there are multiple cameras 111 on any side of the mobile terminal 100, there can be a main camera on that side; when the user starts shooting, the mobile terminal 100 can turn on the main camera, obtain the current environment information through the main camera, and display it in the preview interface of the mobile terminal 100.
  • the camera 111 may be used to obtain ambient light information, such as illumination angle and illumination intensity.
  • the camera 111 can also be used to capture a picture of the current real environment.
  • The touch detection device detects the user's touch orientation and posture, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into information that the processor can process, and then sends it to the processor 130; it can also receive and execute commands sent by the processor 130.
  • the touch panel 122 can be realized by various types of resistive, capacitive, infrared, and surface acoustic waves, and any technology developed in the future can also be used to realize the touch panel 122 .
  • The touch panel 122 can cover the display panel 121, and the user can, according to the content displayed on the display panel 121 (the displayed content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, etc.), perform operations on or near the touch panel 122 covering the display panel 121.
  • When the touch panel 122 detects an operation on or near it, the operation is transmitted to the processor 130 through the I/O subsystem 140 to determine the user input; the processor 130 then provides a corresponding visual output on the display panel 121 through the I/O subsystem 140 according to the user input.
  • Although the touch panel 122 and the display panel 121 are used as two independent components to realize the input and output functions of the terminal device 100, in some embodiments, the touch panel 122 and the display panel 121 may be integrated to realize the input and output functions of the terminal device 100.
  • the display screen 120 may be used to receive user input operations, for example, the user may perform operations such as clicking, sliding, and dragging on the display screen 120 .
  • the display screen 120 can also display a picture of the real environment captured by the camera 111 .
  • The processor 130 is the control center of the mobile terminal 100; it uses various interfaces and lines to connect the various parts of the entire terminal device, and performs the various functions of the mobile terminal 100 and processes data by running or executing the software programs and/or modules stored in the memory 150 and invoking the data stored in the memory 150, so as to perform overall monitoring of the mobile terminal 100.
  • The processor 130 may include one or more processing units; preferably, the processor 130 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 130.
  • The involved processors may include, for example, a central processing unit (Central Processing Unit; hereinafter referred to as: CPU), a digital signal processor (Digital Signal Processor; hereinafter referred to as: DSP), or a microcontroller, and may also include a graphics processor (Graphics Processing Unit; hereinafter referred to as: GPU), an embedded neural network processor (Neural-network Processing Units; hereinafter referred to as: NPU), and an image signal processor (Image Signal Processing; hereinafter referred to as: ISP); the processor may also include a necessary hardware accelerator or logic processing hardware circuit, such as an application-specific integrated circuit (Application Specific Integrated Circuit; hereinafter referred to as: ASIC), or one or more integrated circuits used to control the execution of the programs of the technical solution of the present application, etc.
  • the processor 130 may be used to synthesize the image of the real environment and the virtual object on one image.
  • The I/O subsystem 140 is used to control external input and output devices, and may include other input device controllers 141 and a display controller 142.
  • one or more other input control device controllers 141 receive signals from and/or send signals to other input devices 160, which may include physical buttons (push buttons, rocker buttons, etc.) , dial, slide switch, joystick, click wheel, light mouse (light mouse is a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen).
  • other input control device controllers 141 may be connected to any one or more of the above-mentioned devices.
  • the display controller 142 in the I/O subsystem 140 receives signals from and/or sends signals to the display screen 120 . After the display screen 120 detects the user input, the display controller 142 converts the detected user input into interaction with the user interface objects displayed on the display screen 120, that is, to realize human-computer interaction.
  • Step 101: Start the camera.
  • the startup of the camera can be started by running an application program corresponding to the camera.
  • the user may operate the icon of the camera application on the display interface of the mobile terminal 100 to turn on the camera. It can be understood that, when the user opens the camera application program through an icon operation, it may be a single click, a double click, or a slide, or other methods, which are not limited in this embodiment of the present application.
  • Step 102: The camera captures the real environment picture.
  • As shown in FIG. 3A, the display interface 300 of the mobile terminal 100 includes a shooting operation area 310, a shooting mode selection area 320, and a screen preview area 330; wherein the shooting mode selection area 320 may include multiple shooting mode candidates 321, for example, large aperture, night scene, portrait, virtual-real fusion, video recording, and professional mode; the shooting operation area 310 includes a shooting button 311 and a camera switching button 312; the screen preview area 330 is used to display the picture of the real environment captured by the camera, for example, the street picture shown in FIG. 3A. It should be noted that a mobile terminal 100 with a single camera does not have a camera switching function, so there is no camera switching button 312. When the mobile terminal 100 includes multiple cameras and the cameras are located on both sides of the mobile terminal 100, the mobile terminal 100 has a camera switching function, and may include a camera switching button 312.
  • If the mobile terminal 100 includes multiple cameras and the cameras are located on both sides of the mobile terminal 100, the current camera can be selected by clicking the camera switching button 312 in the shooting operation area 310; for example, the front camera or the rear camera can be selected. If the mobile terminal 100 includes only one camera, there is no need to switch the camera. If the current camera supports the virtual-real fusion shooting function, the virtual-real fusion option will appear among the shooting mode candidates 321 in the shooting mode selection area 320; otherwise, it will not appear.
  • Step 103: In response to the detected operation of the user for selecting the virtual-real fusion shooting mode, display candidate virtual objects.
  • the user can select the shooting mode candidate 321 to determine the current shooting mode. For example, the user can click the virtual-real fusion option to enter the virtual-real fusion shooting mode. At this time, The mobile terminal 100 displays candidate virtual objects.
  • a virtual object candidate area 3210 will pop up in the display interface 300, and the display interface 400 shown in FIG. 3B can be obtained.
  • the virtual object may be a 3D model, for example, a 3D character, a 3D animal, or a 3D object, which is not limited in this embodiment of the present application.
  • the virtual object candidate area 3210 includes at least one virtual object candidate 3211 for the user to select a virtual object.
  • the virtual object candidate 3211 may be a preview image corresponding to the virtual object. It can be understood that the virtual object candidate 3211 may also be an icon or other display form, which is not limited in this embodiment of the present application. Exemplarily, after the user selects the virtual object candidate 3211 , the mobile terminal 100 may load the virtual object corresponding to the selected preview image, thereby displaying the virtual object in the display interface 400 .
  • the virtual object may be pre-stored in the memory 150 of the mobile terminal 100 , and after the user selects the virtual object, it may be directly retrieved and loaded in the memory 150 .
  • the virtual object can also be stored in other devices, for example, in a server. After the user selects the virtual object, the virtual object selected by the user can be downloaded from other devices, and the downloaded virtual object can be loaded.
  • the preset virtual object candidate 3211 may be displayed. For example, only one preset virtual object candidate 3211 is pushed each time virtual-real fusion shooting is performed, that is, only one preset virtual object candidate 3211 is available for the user to select in the virtual object candidate area 3210 . It can be understood that, multiple virtual object candidates 3211 can also be preset. For example, a plurality of preset virtual object candidates 3211 are pushed in the virtual object candidate area 3210. At this time, there are multiple virtual object candidates 3211 in the virtual object candidate area 3210 for the user to select. Wherein, one or more virtual objects corresponding to the preset virtual object candidates 3211 may be pre-stored in the mobile terminal 100, or may be pre-stored in other devices.
  • When the virtual object candidates 3211 are displayed in the virtual object candidate area 3210, the virtual object candidates 3211 may also be pushed according to the real environment picture captured by the current camera.
  • the virtual object candidates 3211 can be classified in advance, for example, it can be a person, an animal, a home appliance, or a building, etc.
  • After the camera captures the real environment picture, the mobile terminal 100 analyzes the picture to determine which category the current real environment picture relates to.
  • the category can be a person category, an animal category, a home appliance category, or a building category.
  • After the mobile terminal 100 determines the category of the current real environment picture, it can match the category of the current real environment picture with the categories of the virtual object candidates 3211 and push the matched virtual object candidates 3211; for example, the virtual object candidates 3211 related to the picture category are displayed in the virtual object candidate area 3210 for the user to select. Thereby, the efficiency of the user's selection of a virtual object from the candidate virtual objects can be improved, thereby improving the user's experience.
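  • As a rough illustration of this category-matching push (not taken from the patent text), the following Python sketch assumes a coarse scene category already produced by some scene classifier and an illustrative local catalogue of candidates; all names and the data layout are hypothetical.

```python
# Hypothetical candidate catalogue; in practice the virtual object database
# queried by environment type could live on the device or on a server.
CANDIDATE_CATALOGUE = [
    {"id": "car_1", "category": "vehicle"},
    {"id": "car_2", "category": "vehicle"},
    {"id": "building_1", "category": "building"},
    {"id": "dog_1", "category": "animal"},
]

def recommend_candidates(scene_category, catalogue=CANDIDATE_CATALOGUE, limit=4):
    """Return virtual object candidates whose category matches the scene.

    scene_category is the environment type obtained by analysing the real
    environment picture (e.g. "person", "animal", "appliance", "building").
    """
    matched = [c for c in catalogue if c["category"] == scene_category]
    # Fall back to the full catalogue so the candidate area 3210 is never empty.
    return (matched or catalogue)[:limit]

# A street scene classified as "building" surfaces the building models first.
print(recommend_candidates("building"))
```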
  • Step 104: In response to the detected operation of the user for selecting the virtual object, determine the first virtual object.
  • If the virtual object corresponding to the selected virtual object candidate 3211 is stored in the mobile terminal 100, the mobile terminal 100 can directly call and load the virtual object, so that the virtual object is displayed in the screen preview area 330; if the virtual object is not stored in the mobile terminal 100, the mobile terminal 100 can download the virtual object according to the link address of the virtual object candidate 3211.
  • the link address can be the server address, and can initiate a request to the server to obtain the virtual object.
  • After the virtual object is acquired through downloading, the virtual object may be loaded, so that the virtual object is displayed in the screen preview area 330.
  • the virtual object may include an initial state, and the initial state may include default size information and default angle information of the virtual object.
  • the size information can be used to identify the size of the virtual object, for example, the virtual object can identify the size information through the length, width, and height information.
  • the angle information may be used to identify the rotation angle of the virtual object.
  • the virtual object may identify the angle information by the horizontal rotation angle and the vertical rotation angle. It can be understood that, after the mobile terminal 100 loads the virtual object, the virtual object may be rendered based on the default size information and the default angle information and displayed in the preview interface.
  • FIG. 3B is taken as an example for description.
  • There are four virtual object candidates 3211 in the virtual object candidate area 3210: the car 1 in the upper left corner, the car 2 in the upper right corner, the building in the lower left corner, and the building in the lower right corner. The user can select the car 1 by operating the preview image of the car 1 in the upper left corner; for example, the user can click the preview image of the car 1 to select the car 1. It is understood that the user can also select the virtual object through other operations, which is not limited in this embodiment of the present application.
  • Step 105: Acquire positioning information of the first virtual object, and display the first virtual object in the preview interface based on the positioning information.
  • the positioning information may include size information and angle information of the virtual object in the foregoing step 104, and in addition, the positioning information may also include position information.
  • the position information is used to identify the coordinate position of the virtual object in the screen preview area 330.
  • The center point of the virtual object may be used as the coordinate position, so that the mobile terminal 100 can record the coordinate position of the center point of the virtual object in the screen preview area 330. It can be understood that the picture displayed in the screen preview area 330 is a real environment picture; therefore, after imaging, for example, after generating an image corresponding to the real environment picture, the coordinates of the virtual object in the screen preview area 330 can be converted into the coordinates of the virtual object in the real environment picture image.
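  • The text does not spell out how this coordinate conversion is done; a minimal sketch, assuming the preview shows the same field of view and aspect ratio as the captured photo so that a proportional mapping suffices, could look like this (names and numbers are illustrative).

```python
def preview_to_image_coords(preview_xy, preview_size, image_size):
    """Map the virtual object's centre point recorded in the screen preview
    area 330 to pixel coordinates in the captured real-environment image.

    Assumes preview and photo share the same field of view and aspect ratio;
    a cropped or letterboxed preview would need an additional offset.
    """
    px, py = preview_xy
    pw, ph = preview_size
    iw, ih = image_size
    return (px * iw / pw, py * ih / ph)

# Example: a centre at (540, 960) in a 1080x1920 preview maps to (1500, 2000)
# in a 3000x4000 photo.
print(preview_to_image_coords((540, 960), (1080, 1920), (3000, 4000)))
```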
  • the positioning information may be the default positioning information of the virtual object, for example, the default size information and angle information of the virtual object, and the default position displayed in the screen preview area 330 .
  • the virtual object may also be displayed in the screen preview area 330 at any coordinate position. For example, a coordinate position is randomly selected to display the virtual object, which is not limited in this embodiment of the present application.
  • After the mobile terminal 100 receives the operation of the user selecting the first virtual object (for example, the car 1), it acquires the virtual object corresponding to the virtual object candidate 3211 selected by the user, loads the first virtual object, and displays the first virtual object in the screen preview area 330 based on the default positioning information of the first virtual object, thereby obtaining the display interface 500 shown in FIG. 3C.
  • If the first virtual object is stored in the mobile terminal 100, the mobile terminal 100 can directly call and load the first virtual object, so that the first virtual object is displayed in the screen preview area 330; if the first virtual object is not stored in the mobile terminal 100, the mobile terminal 100 can download the first virtual object according to the link address of the virtual object candidate 3211. Exemplarily, the link address can be the server address, and the mobile terminal 100 can initiate a request to the server to obtain the first virtual object. After the mobile terminal 100 acquires the first virtual object through downloading, the first virtual object may be loaded, so that the first virtual object is displayed in the screen preview area 330.
  • the user can also change the positioning information of the virtual object in the screen preview area 330 by operating the virtual object.
  • the user can drag the virtual object to change the position information of the virtual object in the screen preview area 330; the user can rotate the virtual object to change the angle information of the virtual object in the screen preview area 330; the user can zoom the virtual object , to change the size information of the virtual object in the screen preview area 330 .
  • the mobile terminal 100 may record the updated positioning information of the virtual object.
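  • A minimal sketch of the positioning information kept for the selected virtual object and of how drag, zoom, and rotate operations would update it; the field names and value ranges are illustrative assumptions, since the text only names the kinds of information (position, size, horizontal and vertical angle).

```python
from dataclasses import dataclass

@dataclass
class Positioning:
    x: float = 0.5          # centre of the object in the preview area (normalised)
    y: float = 0.5
    scale: float = 1.0      # size information
    yaw_deg: float = 0.0    # horizontal rotation angle
    pitch_deg: float = 0.0  # vertical rotation angle

    def drag(self, dx, dy):            # dragging updates the position information
        self.x, self.y = self.x + dx, self.y + dy

    def zoom(self, factor):            # pinching updates the size information
        self.scale *= factor

    def rotate(self, dyaw, dpitch):    # rotating updates the angle information
        self.yaw_deg = (self.yaw_deg + dyaw) % 360
        self.pitch_deg = (self.pitch_deg + dpitch) % 360

pose = Positioning()                   # default positioning information
pose.drag(0.1, -0.05); pose.zoom(1.5); pose.rotate(30, 0)
print(pose)
```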
  • Step 106: Obtain current ambient light information according to the real environment picture captured by the camera.
  • the current ambient light information is obtained according to the real environment image captured by the camera.
  • the ambient light information may include illumination angle and illumination intensity, the illumination angle may be used to represent the direction of the current light source illumination, and the illumination intensity may be used to represent the intensity of illumination and the amount of illumination on the surface area of the object.
  • the acquisition of ambient light information may be based on an image analysis method.
  • The processor 130 in the mobile terminal 100 may call the image analysis method to analyze the real environment picture captured by the camera, and thereby obtain the ambient light information.
  • the brightest spot in the real environment picture can be detected, and then the direction of the light source illumination, that is, the illumination angle, can be estimated according to the brightest spot; the light intensity can be estimated by the brightness difference and distance between the brightest spot and the second bright spot.
  • the image analysis method belongs to the prior art and will not be repeated here.
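  • As a toy illustration of the brightest-spot heuristic mentioned above (real image-analysis methods are more robust and would typically smooth the frame first), a NumPy sketch might look like this; the formulas are assumptions for illustration, not the patent's algorithm.

```python
import numpy as np

def estimate_ambient_light(gray):
    """Rough ambient-light estimate from a grayscale preview frame (H x W).

    Illumination angle: direction from the image centre towards the brightest
    spot. Illumination intensity: brightness gap between the brightest and the
    second-brightest spot, scaled by the distance between them.
    """
    h, w = gray.shape
    flat = gray.astype(float).reshape(-1)
    order = np.argsort(flat)                      # ascending brightness
    y1, x1 = np.unravel_index(order[-1], (h, w))  # brightest pixel
    y2, x2 = np.unravel_index(order[-2], (h, w))  # second-brightest pixel
    angle = np.degrees(np.arctan2(float(y1) - h / 2, float(x1) - w / 2))
    distance = np.hypot(float(x1) - x2, float(y1) - y2) + 1e-6
    intensity = (flat[order[-1]] - flat[order[-2]]) / distance
    return {"angle_deg": float(angle), "intensity": float(intensity)}

frame = np.random.default_rng(0).integers(0, 256, (480, 640))
print(estimate_ambient_light(frame))
```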
  • the camera moves with the movement of the mobile terminal 100 , and therefore, the relative position of the camera and the light source changes, thereby causing the ambient light information to change with the movement of the mobile terminal 100 .
  • the camera can acquire ambient light information in real time. For example, when the user moves the mobile terminal 100, the camera will also move accordingly, the real environment picture displayed in the picture preview area 330 will change, and accordingly, the ambient light information will also change.
  • Step 107: Render the first virtual object in the preview area of the current screen according to the ambient light information to obtain a second virtual object.
  • the mobile terminal 100 may render the first virtual object according to the current ambient light information, thereby obtaining the second virtual object.
  • the rendering may include updating the brightness and/or shadow of the first virtual object, that is, the second virtual object updates the brightness and/or shadow on the basis of the first virtual object.
  • the brightness can be used to represent the brightness and darkness of each part of the virtual object. According to the illumination angle and light intensity of the light source, each part of the virtual object will display the corresponding brightness. Exemplarily, when the light source illuminates from the top of the head of the virtual object, the head of the virtual object is brighter, and the feet of the virtual object are darker.
  • the shadow is used to represent the shadow part corresponding to the virtual object.
  • the virtual object According to the illumination angle and illumination intensity of the light source, the virtual object generates the corresponding shadow area and shadow brightness.
  • The original virtual object includes an initial brightness, and the initial brightness corresponds to an initial light source; therefore, after acquiring the current ambient light information, the mobile terminal 100 can update the brightness of the virtual object based on the current ambient light information. The original virtual object does not contain shadows; therefore, after acquiring the current ambient light information, the mobile terminal 100 can add shadows to the virtual object based on the current ambient light information.
  • The shadows are generated based on the current ambient light information; however, for some light sources no shadow is generated, for example, when the light source is projected vertically from directly above the virtual object's head, or in some cloudy conditions where the ambient light is not enough to generate shadows.
  • the rendering of the brightness and/or the shadow of the virtual object may be implemented by a preset rendering model (eg, an existing 3D engine).
  • the mobile terminal 100 can call a preset 3D engine, and input the ambient light information and the first virtual object to be rendered into the preset 3D engine, thereby completing the rendering of the first virtual object and obtaining the second virtual object.
  • the second virtual object can be kept consistent with the object in the real environment picture, for example, the brightness of the second virtual object is consistent with the brightness of the object in the real environment picture, and/or the brightness of the second virtual object Shadows are consistent with the shadows of objects in the real-world environment.
  • the 3D engine belongs to the prior art, and details are not repeated here.
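  • The patent relies on an existing 3D engine as the preset rendering model; purely as a stand-in to show how the estimated illumination angle and intensity would drive brightness and shadow, a toy Lambertian shading sketch is given below. It is not the engine's actual interface, and every name and constant is an assumption.

```python
import numpy as np

def shade_and_shadow(normals, albedo, light_dir, light_intensity, ambient=0.2):
    """Toy stand-in for the preset rendering model.

    normals: (N, 3) unit normals of the virtual object's surface points.
    albedo:  (N, 3) base colours in [0, 1].
    light_dir: unit vector pointing from the surface towards the light source,
               derived from the estimated illumination angle.
    Returns per-point shaded colours and the ground direction of the cast shadow.
    """
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    lambert = np.clip(normals @ light_dir, 0.0, 1.0)            # per-point brightness
    shaded = np.clip(albedo * (ambient + light_intensity * lambert[:, None]), 0.0, 1.0)
    shadow_dir = -light_dir[:2]                                  # shadow falls away from the light
    return shaded, shadow_dir

# Light from the upper left: left/up-facing points brighten, and the shadow
# direction points towards the lower right, matching the tree shadow in FIG. 3C/3D.
normals = np.array([[0.0, 0.0, 1.0], [-0.7, 0.0, 0.7]])
albedo = np.full((2, 3), 0.6)
print(shade_and_shadow(normals, albedo, light_dir=(-0.5, -0.5, 0.7), light_intensity=0.8))
```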
  • FIG. 3C is taken as an example for illustration.
  • Assuming the illumination direction of the light source is from the upper left, the shadow of the tree in the upper left corner of the real environment is on the right side of the tree, but the virtual object (eg, the car 1) has no shadow.
  • the processor 130 of the mobile terminal 100 invokes a preset 3D engine to render the virtual object, thereby obtaining the display interface 600 as shown in FIG. 3D .
  • In the display interface 600, the brightness of the virtual object (for example, the car 1) is consistent with the brightness of the real environment, and the virtual object is given a shadow that is consistent with the shadows of the trees.
  • the user can also transform the virtual object.
  • transforming the virtual object may include performing position transformation, size transformation and/or angle transformation on the virtual object.
  • the user may drag the virtual object to change the coordinate position of the virtual object in the screen preview area 330; and/or the user may zoom the virtual object to change the size of the virtual object in the screen preview area 330, and/or the user may Rotate the virtual object to change the angle of the virtual object in the screen preview area 330, wherein the rotation may include rotation in the horizontal direction and rotation in the vertical direction, thereby changing the horizontal angle and vertical angle.
  • the mobile terminal 100 may re-render the transformed virtual object.
  • the angles of various parts of the virtual object also change, so re-rendering needs to be performed to update the shading and/or shadow of the virtual object.
  • Step 108: Generate a third image in response to the detected operation of the user for shooting, wherein the third image includes a first image corresponding to the real environment picture and a second image corresponding to the second virtual object.
  • the user may photograph the picture of the real environment through a shooting operation, and combine the picture of the real environment and the image of the virtual object into one image.
  • the user may press the shooting button 311 in the shooting operation area 310 to realize the shooting function, and thus the third image may be generated in the mobile terminal 100 .
  • the third image is obtained by synthesizing the first image and the second image, the first image may be a current real environment picture, and the second image may be an image corresponding to the second virtual object.
  • After receiving the user's photographing operation, the mobile terminal 100 generates the first image and the second image. Next, the mobile terminal 100 acquires the coordinate position of the second image in the first image.
  • The step of the mobile terminal 100 acquiring the coordinate position of the second image in the first image may be performed simultaneously with the step of generating the first image and the second image, or may be performed before the step of generating the first image and the second image, which is not particularly limited in this embodiment of the present application.
  • the mobile terminal 100 synthesizes the first image and the second image. For example, the mobile terminal 100 may superimpose the second image on the first image, thereby obtaining a synthesized third image.
  • The second image corresponding to the second virtual object may be acquired through a snapshot or other image-capture methods, which are not limited in this embodiment of the present application.
  • Exemplarily, the combination of images can be achieved by layer merging: the first image can be used as the first layer, the second image can be used as the second layer, and by superimposing the second layer on the first layer at the acquired coordinate position, the synthesized third image can be obtained. Image synthesis may also be performed in other manners, which are not particularly limited in this embodiment of the present application.
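  • A minimal layer-merge sketch using Pillow, assuming the second image is the rendered second virtual object saved with a transparent background and that the paste position comes from the converted coordinate position; the file names and coordinates are placeholders.

```python
from PIL import Image

def merge_layers(first_image_path, second_image_path, top_left_xy, out_path):
    """Superimpose the second image (second virtual object, RGBA) onto the
    first image (real environment picture) at the given top-left position."""
    background = Image.open(first_image_path).convert("RGBA")  # first layer
    overlay = Image.open(second_image_path).convert("RGBA")    # second layer
    background.paste(overlay, top_left_xy, mask=overlay)       # alpha-aware paste
    background.convert("RGB").save(out_path)                   # third image

# Placeholder usage; the top-left corner would be derived from the recorded
# centre coordinates and the overlay's size.
# merge_layers("real_env.jpg", "virtual_obj.png", (1500, 2000), "composite.jpg")
```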
  • As shown in FIG. 4, the image 710 is the first image corresponding to the real environment picture captured by the camera, and the image 720 is the planar image of the second virtual object obtained after brightness and/or shadow rendering; by synthesizing the image 710 and the image 720, an image 730 can be obtained, and the image 730 is the final synthesized third image.
  • the brightness and/or shadow of the real environment picture in the image 710 is consistent with the brightness and/or shadow of the second virtual object in the image 720 .
  • the third image can also be displayed, and the third image can be saved in the album of the mobile terminal 100 for the user to browse. It can be understood that, the third image may also be stored in the cloud or a server, which is not limited in this embodiment of the present application.
  • In this embodiment, the current ambient light information is acquired in real time in the process of capturing the real environment picture by the mobile terminal, and the brightness and/or shadow of the virtual object is rendered in real time based on the ambient light information, thereby ensuring that the brightness and/or shadow of the rendered virtual object is consistent with the brightness and/or shadow of the real environment, so as to avoid image distortion caused by inconsistency between the two, thereby improving the user's viewing experience.
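  • Putting the steps of this embodiment together, the per-frame ordering (light estimation and rendering on every preview frame, compositing only on the shutter press) can be outlined as below; the five callables are injected stand-ins for the steps above, not APIs defined by the patent.

```python
def preview_loop(get_frame, estimate_light, render_object, composite, shutter_pressed):
    """Outline of FIG. 2: ambient light is re-estimated and the virtual object
    re-rendered for every preview frame, so the on-screen object tracks the
    current lighting; the shutter press only composites the two images."""
    while True:
        frame = get_frame()                    # step 102: current real environment picture
        light = estimate_light(frame)          # step 106: illumination angle / intensity
        rendered = render_object(light)        # step 107: second virtual object
        if shutter_pressed():
            return composite(frame, rendered)  # step 108: synthesized third image
```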
  • FIG. 5 is a flowchart of another embodiment of the image processing method of the present application. The method can be applied to the above-mentioned mobile terminal 100, including:
  • Step 201: Start the camera.
  • Step 201 is the same as step 101, and details are not repeated here.
  • Step 202: The camera captures the real environment picture.
  • Step 202 is the same as step 102, and details are not repeated here.
  • Step 203: In response to the detected operation of the user for selecting the virtual-real fusion shooting mode, display candidate virtual objects.
  • Step 203 is the same as step 103, and details are not repeated here.
  • Step 204: In response to the detected operation of the user for selecting the virtual object, determine the first virtual object.
  • Step 204 is the same as step 104, and details are not repeated here.
  • Step 205: Acquire positioning information of the first virtual object, and display the first virtual object in the preview interface based on the positioning information.
  • Step 205 is the same as step 105, and details are not repeated here.
  • Step 206: Generate a first image in response to the detected operation of the user for photographing.
  • the user may photograph the picture of the real environment through a photographing operation, so as to obtain the first image corresponding to the picture of the real environment.
  • the user may press the shooting button 311 in the shooting operation area 310 .
  • the mobile terminal 100 may generate a first image, and the first image may be a picture of a real scene captured by a camera.
  • Step 207: Obtain ambient light information according to the first image.
  • image analysis is performed according to the first image to obtain ambient light information corresponding to the first image.
  • the image analysis method in step 106 may be used to perform image analysis on the first image, whereby corresponding ambient light information, such as illumination angle and illumination intensity, may be obtained.
  • Step 208: Render the first virtual object according to the ambient light information to obtain a second virtual object.
  • the first virtual object is rendered according to the ambient light information, for example, the first virtual object may be rendered by invoking the 3D engine in step 107, thereby obtaining the second virtual object.
  • the second virtual object is a virtual object whose brightness and/or shadow has been adjusted.
  • the brightness and/or shadow of the second virtual object can be kept consistent with the real environment.
  • Step 209: Acquire a second image corresponding to the second virtual object, and synthesize the second image and the first image to obtain a third image.
  • The second image and the coordinate position of the second image in the first image are acquired; for the specific acquisition method, reference may be made to step 108, which will not be repeated here.
  • the second image and the first image are synthesized according to the coordinate position, thereby obtaining the third image.
  • For the method of image synthesis, reference may be made to step 108, which will not be repeated here.
  • In this embodiment, after the first image is generated, the ambient light information corresponding to the real environment picture is acquired, and the virtual object is subjected to brightness and/or shadow rendering based on the ambient light information; this can ensure that the brightness and/or shadow of the virtual object is consistent with the brightness and/or shadow of the real environment, and can also reduce the resource consumption caused by real-time rendering and avoid excessive load on the mobile terminal, thereby improving the processing efficiency of the mobile terminal.
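  • For contrast with the real-time flow of FIG. 2, the deferred ordering of this embodiment can be sketched as a single handler that runs once per shutter press; again the callables are hypothetical stand-ins for steps 206 to 209, not APIs defined by the patent.

```python
def on_shutter_pressed(capture_photo, estimate_light, render_object, composite):
    """Outline of FIG. 5: light analysis and virtual-object rendering happen
    exactly once per photo, which is the resource saving described above."""
    first = capture_photo()            # step 206: first image (real environment picture)
    light = estimate_light(first)      # step 207: ambient light from the first image
    second = render_object(light)      # step 208: second virtual object -> second image
    return composite(first, second)    # step 209: synthesized third image
```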
  • FIG. 6 is a schematic structural diagram of an embodiment of an image processing apparatus of the present application.
  • the above-mentioned image processing apparatus 60 may include: a preview module 61 , a selection module 62 , an acquisition module 63 and a synthesis module 64 ;
  • the preview module 61 is configured to display a preview interface in response to the detected first operation; wherein, the preview interface includes a real environment picture;
  • a selection module 62 configured to determine the first virtual object in response to the detected second operation
  • an obtaining module 63 configured to obtain the positioning information of the first virtual object, and display the first virtual object in the preview interface based on the positioning information;
  • the synthesizing module 64 is configured to synthesize the first image and the second image based on the positioning information to obtain a third image; wherein the first image includes a real environment picture, the second image includes a second virtual object corresponding to the positioning information, and the first image includes a second virtual object corresponding to the positioning information.
  • the second virtual object is generated after the first virtual object is rendered based on ambient light information, and the ambient light information corresponds to the real environment picture.
  • the above-mentioned synthesis module 64 includes: a first generation unit 641, an identification unit 642, a rendering unit 643, a second generation unit 644, and a synthesis unit 645;
  • a first generating unit 641, configured to generate a first image in response to the detected third operation
  • an identification unit 642 configured to obtain ambient light information corresponding to the first image based on the first image
  • a rendering unit 643, configured to render the first virtual object based on the ambient light information to obtain a second virtual object
  • a second generating unit 644 configured to generate a second image based on the second virtual object
  • the combining unit 645 is configured to combine the first image and the second image based on the positioning information to obtain a third image.
  • the above-mentioned synthesizing module 64 includes: an acquiring unit 646, a rendering unit 647, a generating unit 648, and a synthesizing unit 649;
  • an obtaining unit 646, configured to obtain ambient light information corresponding to the real environment picture
  • a rendering unit 647 configured to render the first virtual object based on the ambient light information to obtain a second virtual object
  • a generating unit 648 configured to generate a first image and a second image in response to the detected third operation
  • the combining unit 649 is configured to combine the first image and the second image based on the positioning information to obtain a third image.
  • the obtaining module 63 is further configured to determine the positioning information of the first virtual object in the preview interface in response to the detected fourth operation.
  • the foregoing positioning information includes default positioning information of the first virtual object.
  • the above selection module 62 includes: a display unit 621 and a selection unit 622;
  • a display unit 621 configured to display at least one candidate virtual object in response to the detected second operation
  • the selecting unit 622 is configured to, in response to the detected fifth operation, determine a first virtual object among the candidate virtual objects.
  • the above-mentioned synthesis module 64 is further configured to input the first virtual object and ambient light information into the preset rendering model, so that the preset rendering model renders the brightness and/or shadow of the first virtual object to obtain a second virtual object, wherein the second virtual object includes brightness and/or shadow.
  • each module of the image processing apparatus shown in FIG. 6 above is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • these modules may all be implemented in the form of software invoked by a processing element, may all be implemented in hardware, or some of the modules may be implemented in the form of software invoked by a processing element while the others are implemented in hardware.
  • the detection module may be a separately established processing element, or may be integrated in a certain chip of the electronic device.
  • the implementation of other modules is similar.
  • all or part of these modules can be integrated together, and can also be implemented independently.
  • in an implementation process, each step of the above-mentioned method or each of the above-mentioned modules can be completed by an integrated hardware logic circuit in the processor element or by instructions in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more application-specific integrated circuits (Application Specific Integrated Circuit; hereinafter referred to as: ASIC), or, one or more digital signal processors (Digital Signal Processor; hereinafter referred to as: DSP), or, one or more field-programmable gate arrays (Field Programmable Gate Array; hereinafter referred to as: FPGA), etc.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (System-On-a-Chip; hereinafter referred to as: SOC).
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the mobile terminal 100 .
  • in some other embodiments of this application, the mobile terminal 100 may alternatively use an interface connection manner different from that in the foregoing embodiment, or a combination of multiple interface connection manners.
  • the above-mentioned mobile terminal includes corresponding hardware structures and/or software modules for executing each function.
  • the embodiments of the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present invention.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiment of the present invention is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • a computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or the processor 130 to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
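Purely as an illustrative sketch (and not as part of the disclosed apparatus), the cooperation of the modules described above, from the preview module 61 through the synthesis module 64, might be expressed in Python as follows; the data classes Positioning and AmbientLight, the function synthesize_third_image, and the injected callables are assumptions introduced only for readability.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Positioning:
    """Positioning information: position in the preview, size and rotation angles."""
    x: int
    y: int
    scale: float = 1.0
    yaw_deg: float = 0.0
    pitch_deg: float = 0.0

@dataclass
class AmbientLight:
    """Ambient light information: illumination angle and illumination intensity."""
    angle_deg: float
    intensity: float

def synthesize_third_image(
    capture_first_image: Callable[[], Any],               # real environment picture only
    estimate_light: Callable[[Any], AmbientLight],        # derive light info from the frame
    render_object: Callable[[Any, AmbientLight], Any],    # preset rendering model
    rasterize_object: Callable[[Any, Positioning], Any],  # second virtual object -> second image
    compose: Callable[[Any, Any, Positioning], Any],      # overlay second image on first image
    first_virtual_object: Any,
    positioning: Positioning,
) -> Any:
    """One pass through modules 61-64: capture, estimate light, render, rasterize, compose."""
    first_image = capture_first_image()
    light = estimate_light(first_image)
    second_virtual_object = render_object(first_virtual_object, light)
    second_image = rasterize_object(second_virtual_object, positioning)
    return compose(first_image, second_image, positioning)  # the third image
```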

Abstract

Embodiments of this application provide an image processing method, a mobile terminal, and a storage medium, relating to the field of communication technologies. The method includes: displaying a preview interface in response to a detected first operation; determining a first virtual object in response to a detected second operation; obtaining positioning information of the first virtual object, and displaying the first virtual object in the preview interface based on the positioning information; and synthesizing a first image and a second image based on the positioning information to obtain a third image, where the first image includes the real environment picture, the second image includes a second virtual object corresponding to the positioning information, the second virtual object is generated by rendering the first virtual object based on ambient light information, and the ambient light information corresponds to the real environment picture. The image processing method provided in the embodiments of this application can keep the shading and/or shadow of the virtual object consistent with the real environment, thereby avoiding image distortion.

Description

Image processing method, mobile terminal, and storage medium
This application claims priority to Chinese Patent Application No. 202010902601.7, filed with the Chinese Patent Office on September 1, 2020 and entitled "Image Processing Method, Mobile Terminal, and Storage Medium", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of communication technologies, and in particular, to an image processing method, a mobile terminal, and a storage medium.
Background
With the continuous development of photographing technologies and the widespread use of mobile terminals, the photographing function of mobile terminals is increasingly favored by users. In addition, with the rapid development of augmented reality (Augmented Reality, AR) technology, virtual content (for example, a 3D model) can be loaded into a real scene. On this basis, AR shooting applications have been developed, in which a 3D model can be loaded into the real environment, and then the real environment (including real persons, animals, or objects) and the 3D model are photographed together.
Summary
Embodiments of this application provide an image processing method, a mobile terminal, and a storage medium, to provide a manner of image processing that can complete shooting of an image combining virtual and real content, keep the shading and/or shadow of the real environment consistent with the shading and/or shadow of the virtual model, and avoid image distortion.
第一方面,本申请实施例提供了一种图像处理方法,包括:
响应于检测到的第一操作,显示预览界面;其中,预览界面包括现实环境画面;具体地,该现实环境画面为当前摄像头捕获的画面。
响应于检测到的第二操作,确定第一虚拟对象;具体地,该第二操作可以包括用户选取拍摄模式的操作以及选取第一虚拟对象的操作;例如,用户可以在预览界面上选取虚实融合拍摄模式,以进入虚实融合拍摄模式;此时,在虚实融合拍摄模式下,预览界面可以显示至少一个候选虚拟对象供用户选择,用户可以在候选虚拟对象中任意选取一个,以确定第一虚拟对象。
获取第一虚拟对象的定位信息,基于定位信息在预览界面中显示第一虚拟对象;具体地,定位信息可以包括第一虚拟对象的尺寸信息、角度信息以及第一虚拟对象在预览界面中的位置信息;其中,定位信息可以是第一虚拟对象默认的位置信息、尺寸信息和角度信息,也可以是用户对第一虚拟对象调整后的位置信息、尺寸信息及角度信息,例如,用户可以对第一虚拟对象进行拖拽、缩放及旋转等调整。
基于定位信息将第一图像与第二图像进行合成,得到第三图像;其中,第一图像 包括现实环境画面,第二图像包括与定位信息对应的第二虚拟对象,第二虚拟对象由第一虚拟对象基于环境光信息进行渲染后生成,环境光信息与现实环境画面对应。具体地,第一图像拍摄的是现实环境画面,因此,第一图像仅包含现实场景画面,不包括第二虚拟对象;第二图像拍摄的是第二虚拟对象,因此第二图像不包括现实环境画面,也就是说,第一图像和第二图像是分离的。通过将第一图像和第二图像进行合成(例如,叠加在一起),可以得到第三图像。该第三图像可以是给用户观看的合成图像。在该第三图像中,第二虚拟对象的明暗度和/或阴影与现实环境的明暗度/或阴影保持一致,由此可以提高用户的观看体验。其中,第二虚拟对象可以通过对第一虚拟对象进行明暗度和/或阴影的渲染后得到,明暗度和/或阴影的渲染可以基于环境光信息进行。
其中一种可能的实现方式中,基于定位信息将第一图像与第二图像进行合成,得到第三图像包括:
响应于检测到的第三操作,生成第一图像;具体地,第一图像不包括第二虚拟对象。
基于第一图像获取与第一图像对应的环境光信息;具体地,可以对第一图像进行图像识别,以获得环境光信息;其中,环境光信息可以包括光照角度和光照强度。
基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;具体地,渲染可以包括明暗度和/或阴影渲染。
基于第二虚拟对象生成第二图像;具体地,第二图像不包括现实环境画面。第二图像的生成可以通过闪照或截屏等方式,本申请实施例对此不作限定。
基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,基于定位信息将第一图像与第二图像进行合成,得到第三图像包括:
获取与现实环境画面对应的环境光信息;具体地,可以根据当前摄像头捕获的现实环境画面进行图像识别,以得到对应的环境光信息。
基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象。
响应于检测到的第三操作,生成第一图像及第二图像。
基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,获取第一虚拟对象的定位信息包括:
响应于检测到的第四操作,确定第一虚拟对象在预览界面中的定位信息。
具体地,定位信息可以包括第一虚拟对象的尺寸信息、角度信息以及第一虚拟对象在预览界面中的坐标位置信息。
其中一种可能的实现方式中,定位信息包括第一虚拟对象的默认定位信息。
其中一种可能的实现方式中,响应于检测到的第二操作,确定第一虚拟对象包括:
响应于检测到的第二操作,显示至少一个候选虚拟对象。
响应于检测到的第五操作,在候选虚拟对象中确定第一虚拟对象。
其中一种可能的实现方式中,响应于检测到的第二操作,显示至少一个候选虚拟对象包括:
响应于检测到的第二操作,对预览界面中现实环境的类型进行识别,得到环境类 型;具体地,环境类型用于识别当前现实环境的主题,例如,环境类型可以包括人物类型、动物类型、家电类型、建筑类型等。
基于环境类型推荐显示候选虚拟对象。具体地,可以基于环境类型在虚拟对象的数据库中进行查询,由此可以推荐显示虚拟对象,避免虚拟对象过多导致的多页显示,提高虚拟对象与现实环境的匹配度,提高用户的体验。
其中一种可能的实现方式中,第二虚拟对象由第一虚拟对象基于环境光信息进行渲染后生成包括:
将第一虚拟对象及环境光信息输入预置渲染模型,使得预置渲染模型对第一虚拟对象的明暗度和/或阴影进行渲染,得到第二虚拟对象,其中,第二虚拟对象包括明暗度和/或阴影。
第二方面,本申请实施例提供一种图像处理装置,包括:
预览模块,用于响应于检测到的第一操作,显示预览界面;其中,预览界面包括现实环境画面;
选取模块,用于响应于检测到的第二操作,确定第一虚拟对象;
获取模块,用于获取第一虚拟对象的定位信息,基于定位信息在预览界面中显示第一虚拟对象;
合成模块,用于基于定位信息将第一图像与第二图像进行合成,得到第三图像;其中,第一图像包括现实环境画面,第二图像包括与定位信息对应的第二虚拟对象,第二虚拟对象由第一虚拟对象基于环境光信息进行渲染后生成,环境光信息与现实环境画面对应。
其中一种可能的实现方式中,上述合成模块包括:
第一生成单元,用于响应于检测到的第三操作,生成第一图像;
识别单元,用于基于第一图像获取与第一图像对应的环境光信息;
渲染单元,用于基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;
第二生成单元,用于基于第二虚拟对象生成第二图像;
合成单元,用于基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,上述合成模块包括:
获取单元,用于获取与现实环境画面对应的环境光信息;
渲染单元,用于基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;
生成单元,用于响应于检测到的第三操作,生成第一图像及第二图像;
合成单元,用于基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,上述获取模块还用于响应于检测到的第四操作,确定第一虚拟对象在预览界面中的定位信息。
其中一种可能的实现方式中,定位信息包括第一虚拟对象的默认定位信息。
其中一种可能的实现方式中,上述选取模块包括:
显示单元,用于响应于检测到的第二操作,显示至少一个候选虚拟对象;
选取单元,用于响应于检测到的第五操作,在候选虚拟对象中确定第一虚拟对象。
其中一种可能的实现方式中,上述显示单元还用于响应于检测到的第二操作,对预览界面中现实环境的类型进行识别,得到环境类型;基于环境类型推荐显示候选虚 拟对象。
其中一种可能的实现方式中,上述合成模块还用于将第一虚拟对象及环境光信息输入预置渲染模型,使得预置渲染模型对第一虚拟对象的明暗度和/或阴影进行渲染,得到第二虚拟对象,其中,第二虚拟对象包括明暗度和/或阴影。
第三方面,本申请实施例提供一种移动终端,包括:
存储器,上述存储器用于存储计算机程序代码,上述计算机程序代码包括指令,当上述移动终端从上述存储器中读取上述指令,以使得上述移动终端执行以下步骤:
响应于检测到的第一操作,显示预览界面;其中,预览界面包括现实环境画面;
响应于检测到的第二操作,确定第一虚拟对象;
获取第一虚拟对象的定位信息,基于定位信息在预览界面中显示第一虚拟对象;
基于定位信息将第一图像与第二图像进行合成,得到第三图像;其中,第一图像包括现实环境画面,第二图像包括与定位信息对应的第二虚拟对象,第二虚拟对象由第一虚拟对象基于环境光信息进行渲染后生成,环境光信息与现实环境画面对应。
其中一种可能的实现方式中,上述指令被上述移动终端执行时,使得上述移动终端执行基于定位信息将第一图像与第二图像进行合成,得到第三图像的步骤包括:
响应于检测到的第三操作,生成第一图像;
基于第一图像获取与第一图像对应的环境光信息;
基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;
基于第二虚拟对象生成第二图像;
基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,上述指令被上述移动终端执行时,使得上述移动终端执行基于定位信息将第一图像与第二图像进行合成,得到第三图像的步骤包括:
获取与现实环境画面对应的环境光信息;
基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;
响应于检测到的第三操作,生成第一图像及第二图像;
基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,上述指令被上述移动终端执行时,使得上述移动终端执行获取第一虚拟对象的定位信息的步骤包括:
响应于检测到的第四操作,确定第一虚拟对象在预览界面中的定位信息。
其中一种可能的实现方式中,定位信息包括第一虚拟对象的默认定位信息。
其中一种可能的实现方式中,上述指令被上述移动终端执行时,使得上述移动终端执行响应于检测到的第二操作,确定第一虚拟对象的步骤包括:
响应于检测到的第二操作,显示至少一个候选虚拟对象;
响应于检测到的第五操作,在候选虚拟对象中确定所述第一虚拟对象。
其中一种可能的实现方式中,上述指令被上述移动终端执行时,使得上述移动终端执行响应于检测到的第二操作,显示至少一个候选虚拟对象的步骤包括:
响应于检测到的第二操作,对预览界面中现实环境的类型进行识别,得到环境类型;
基于环境类型推荐显示候选虚拟对象。
其中一种可能的实现方式中,上述指令被上述移动终端执行时,使得上述移动终端执行第二虚拟对象由第一虚拟对象基于环境光信息进行渲染后生成的步骤包括:
将第一虚拟对象及环境光信息输入预置渲染模型,使得预置渲染模型对第一虚拟对象的明暗度和/或阴影进行渲染,得到第二虚拟对象,其中,第二虚拟对象包括明暗度和/或阴影。
第四方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行如第一方面所述的方法。
第五方面,本申请实施例提供一种计算机程序,当上述计算机程序被计算机执行时,用于执行第一方面所述的方法。
在一种可能的设计中,第五方面中的程序可以全部或者部分存储在与处理器封装在一起的存储介质上,也可以部分或者全部存储在不与处理器封装在一起的存储器上。
Brief Description of Drawings
FIG. 1 is a schematic structural diagram of a mobile terminal according to an embodiment of this application;
FIG. 2 is a flowchart of an embodiment of the image processing method provided by this application;
FIG. 3A is a schematic diagram of a shooting mode selection interface according to an embodiment of this application;
FIG. 3B is a schematic diagram of a candidate virtual object display interface according to an embodiment of this application;
FIG. 3C is a schematic diagram of a virtual object selection interface according to an embodiment of this application;
FIG. 3D is a schematic diagram of a virtual object rendering effect according to an embodiment of this application;
FIG. 4 is a schematic diagram of image synthesis according to an embodiment of this application;
FIG. 5 is a flowchart of another embodiment of the image processing method provided by this application;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application.
Detailed Description of Embodiments
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
目前的移动终端中通常都包含摄像头,通过该摄像头用户可以任意的拍摄图像。随着AR技术的不断发展,人们可以将现实场景和虚拟场景相结合。随着AR技术应用到相机拍摄中,用户可以将现实对象与虚拟对象拍摄在一张图像上。
通常,虚拟对象是一个3D的模型。然而,为了方便用户调用或加载,3D模型都是预先生成的,也就是说,3D模型的各项特征(例如,明暗度)都是预先设定的。此外, 3D模型通常是不包含阴影的。而在实际拍摄过程中,现实环境的光源都是不可控的,例如,光源的角度不同,对象在照片中的明暗度是不同的。且通常现实环境的对象在照片中会产生阴影部分,这时,如果一个不带阴影的3D模型和一张带阴影的照片进行合成,由于3D模型的明暗度和阴影与照片中显示对象的明暗度与阴影不一致,会导致合成后的图像会显的失真。
基于上述问题,本申请实施例提出了一种图像处理方法,应用于移动终端,可以保证现实环境的明暗度和/或阴影与虚拟模型的明暗度和/或阴影一致,避免图像失真。移动终端也可以称为终端设备、用户设备(User Equipment,UE)、接入终端、用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、终端、无线通信设备、用户代理或用户装置。移动终端个人数字处理(Personal Digital Assistant,PDA)设备、具有无线通信功能的手持设备、手持式通信设备、手持式计算设备、本申请实施例对执行该技术方案的移动终端的具体形式不做特殊限制。
如图1所示,为本申请实施例提供的一种移动终端100的结构示意图。移动终端100可以包括摄像模组110、显示屏120、处理器130、I/O子系统140、存储器150及其它输入设备160。其中,摄像模组110用于采集图像,显示屏120用于显示图像及操作界面。
摄像模组110包含有至少一个摄像头111,其中,若摄像模组110仅包含一个摄像头111,则该摄像头111可前置也可后置;若摄像模组110包含多个摄像头111,则该多个摄像头111可以位于移动终端100的同一侧,也可以任意分布在移动终端100的两侧;需要说明的是,若移动终端100的任意一侧有多个摄像头111,则该侧可以有一个主摄像头,当用户启动拍摄时,该移动终端100可以开启该主摄像头,并可以通过该主摄像头获取当前的环境信息,在移动终端100的预览界面中进行显示。在本申请中,摄像头111可以用于获取环境光信息,例如,光照角度和光照强度。摄像头111也可以用于捕获当前的现实环境的画面。
显示屏120可用于显示由用户输入的信息或提供给用户的信息以及终端设备100的各种菜单,还可以接受用户输入。具体的,显示屏120可包括显示面板121,以及触控面板122。其中,显示面板121可以采用液晶显示器(LCD,Liquid Crystal Display)、有机发光二极管(OLED,Organic Light-Emitting Diode)等形式来配置显示面板121。触控面板122,也称为触摸屏、触敏屏等,可收集用户在其上或附近的接触或者非接触操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板122上或在触控面板122附近的操作,也可以包括体感操作;该操作包括单点控制操作、多点控制操作等操作类型),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板122可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位、姿势,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成处理器能够处理的信息,再送给处理器130,并能接收处理器130发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板122,也可以采用未来发展的任何技术实现触控面板122。进一步的,触控面板122可覆盖显示面板121,用户可以根据显示面板121显示的内容(该显示内容包括但不限于,软键盘、虚拟鼠标、虚拟按键、 图标等等),在显示面板121上覆盖的触控面板122上或者附近进行操作,触控面板122检测到在其上或附近的操作后,通过I/O子系统140传送给处理器130以确定用户输入,随后,处理器130根据用户输入通过I/O子系统140在显示面板121上提供相应的视觉输出。虽然在图1中,触控面板122与显示面板121是作为两个独立的部件来实现终端设备100的输入和输入功能,但是在某些实施例中,可以将触控面板122与显示面板121集成而实现终端设备100的输入和输出功能。在本申请中,显示屏120可以用于接收用户的输入操作,例如,用户可以在显示屏120上进行点击、滑动以及拖拽等操作。显示屏120还可以显示摄像头111捕获的现实环境的画面。
处理器130是移动终端100的控制中心,利用各种接口和线路连接整个终端设备的各个部分,通过运行或执行存储在存储器150内的软件程序和/或模块,以及调用存储在存储器150内的数据,执行移动终端100的各种功能和处理数据,从而对移动终端100进行整体监控。可选的,处理器130可包括一个或多个处理单元;优选的,处理器130可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器130中。其中,涉及的处理器可以例如包括中央处理器(Central Processing Unit;以下简称:CPU)、数字信号处理器(Digital Singnal Processor;以下简称:DSP)或微控制器,还可包括图形处理器(Graphics Processing Unit;以下简称:GPU)、嵌入式神经网络处理器(Neural-network Process Units;以下简称:NPU)和图像信号处理器(Image Signal Processing;以下简称:ISP),该处理器还可包括必要的硬件加速器或逻辑处理硬件电路,如特定集成电路(Application Specific Integrated Circuit;以下简称:ASIC),或一个或多个用于控制本申请技术方案程序执行的集成电路等。在本申请中,处理器130可以用于将现实环境的图像与虚拟对象合成在一张图像上。
I/O子系统140用来控制输入输出的外部设备,可以包括其他设备输入控制器141、显示控制器142。可选的,一个或多个其他输入控制设备控制器141从其他输入设备160接收信号和/或者向其他输入设备160发送信号,其他输入设备160可以包括物理按钮(按压按钮、摇臂按钮等)、拨号盘、滑动开关、操纵杆、点击滚轮、光鼠(光鼠是不显示可视输出的触摸敏感表面,或者是由触摸屏形成的触摸敏感表面的延伸)。值得说明的是,其他输入控制设备控制器141可以与任一个或者多个上述设备连接。所述I/O子系统140中的显示控制器142从显示屏120接收信号和/或者向显示屏120发送信号。显示屏120检测到用户输入后,显示控制器142将检测到的用户输入转换为与显示在显示屏120上的用户界面对象的交互,即实现人机交互。
存储器150可用于存储软件程序以及模块,处理器130通过运行存储在存储器150的软件程序以及模块,从而执行移动终端100的各种功能应用以及数据处理。存储器150可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据移动终端100的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器150可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。在本申请中,存储器150 可以用于存储拍摄的图像和虚拟对象。
其他输入设备160可用于接收输入的数字或字符信息,以及产生与移动终端100的用户设置以及功能控制有关的键信号输入。具体地,其他输入设备160可包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆、光鼠(光鼠是不显示可视输出的触摸敏感表面,或者是由触摸屏形成的触摸敏感表面的延伸)等中的一种或多种。其他输入设备160与I/O子系统140的其他输入设备控制器141相连接,在其他设备输入控制器141的控制下与处理器130进行信号交互。
现结合图2-图5对本申请实施例提供的图像处理方法进行说明,如图2所示为本申请图像处理方法一个实施例的流程图,该方法可以应用于上述移动终端100中,包括:
步骤101,启动摄像头。
具体地,摄像头的启动可以通过运行与摄像头对应的应用程序启动。例如,用户可以在移动终端100的显示界面上操作摄像头应用程序的图标,以开启摄像头。可以理解的是,用户通过图标操作打开摄像头应用程序时,可以是单击、双击或滑动,也可以是其它方式,本申请实施例对此不作限定。
可选地,摄像头的启动也可以在一个应用程序中调用摄像头的应用程序,以启动摄像头。例如,用户在通过聊天应用程序与另一用户进行对话时,可以在聊天应用程序中调用摄像头应用程序,由此可以在聊天应用程序中拍摄照片后向对方发送。
步骤102,摄像头捕获现实环境画面。
具体地,在用户启动该摄像头后,摄像头可以捕获现实环境的画面,将现实环境的画面显示在移动终端100的显示屏120上。示例性的,移动终端100的显示界面可以用于显示如图3A所示的现实环境画面。参考图3A,移动终端100的显示界面300包括拍摄操作区域310、拍摄模式选择区域320及画面预览区域330;其中,拍摄模式选择区域320可以包含多种拍摄模式的候选项321,例如,大光圈、夜景、人像、虚实融合、录像以及专业等模式;拍摄操作区域310包含拍摄按钮311及摄像头切换按钮312;画面预览区域330用于显示摄像头捕获的现实环境的画面,例如,如图3A所示的街道画面。需要注意的是,对于单个摄像头的移动终端100,不具有摄像头切换功能,因此没有摄像头切换按钮312。在移动终端100包含多个摄像头且摄像头位于移动终端100的两侧时,该移动终端100具有摄像头切换功能,可以包含摄像头切换按钮312。
其中,若移动终端100包含多个摄像头,且摄像头位于移动终端100的两侧,通过点击拍摄操作区域310中的摄像头切换按钮312,可以选择当前的摄像头,例如,可以选择前置摄像头还是后置摄像头,若移动终端100只包含一个摄像头,这时无需进行摄像头的切换。若当前的摄像头支持虚实融合拍摄功能,则拍摄模式选择区域320中的候选项321中就会出现多虚实融合选项,否则不出现。
步骤103,响应于检测到的用户用于选取虚实融合拍摄模式的操作,显示候选虚拟对象。
具体地,在如图3A所示的显示界面300上,用户可以选择拍摄模式候选项321,以确定当前的拍摄模式,例如,用户可以点击虚实融合选项,以进入虚实融合拍摄模 式,此时,移动终端100显示候选虚拟对象。
现以图3A为例进行说明,当用户通过点击该虚实融合选项321之后,显示界面300中就会弹出虚拟对象候选区域3210,由此可以得到如图3B所示的显示界面400。其中,该虚拟对象可以是一个3D的模型,例如,一个3D的人物,一个3D的动物,或者一个3D的物体,本申请实施例对此不作限定。虚拟对象候选区域3210包括至少一个虚拟对象候选项3211,以供用户对虚拟对象进行选取。
其中,虚拟对象候选项3211可以是与虚拟对象对应的预览图像,可以理解的是,虚拟对象候选项3211也可以是图标或其他显示形式,本申请实施例对此不作限定。示例性的,当用户选取虚拟对象候选项3211后,移动终端100可以加载与选中的预览图像对应的虚拟对象,由此可以将虚拟对象显示在显示界面400中。
虚拟对象可以预先存储在移动终端100的存储器150中,当用户选中虚拟对象后,可以直接在存储器150中调取并进行加载。虚拟对象也可以存储在其他设备中,例如,存储在服务器中,当用户选中虚拟对象后,可以从其他设备上下载用户选中的虚拟对象,并将下载完成的虚拟对象进行加载。
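To make the load-or-download decision described in the preceding paragraph concrete, a minimal Python sketch might look as follows; the function name load_virtual_object, the .glb file extension, and the use of a plain URL download are assumptions for illustration only, not the mechanism used by the mobile terminal 100.

```python
from pathlib import Path
from typing import Optional
from urllib.request import urlretrieve

def load_virtual_object(name: str, local_dir: Path, remote_url: Optional[str] = None) -> Path:
    """Return a local path to the selected virtual object's 3D model.

    If the model is already stored on the terminal it is used directly; otherwise,
    when a download address (e.g. a server URL) is known, it is fetched first.
    """
    local_path = local_dir / f"{name}.glb"          # assumed model file format
    if local_path.exists():
        return local_path
    if remote_url is None:
        raise FileNotFoundError(f"{name} is neither stored locally nor downloadable")
    local_dir.mkdir(parents=True, exist_ok=True)    # make sure the target folder exists
    urlretrieve(remote_url, str(local_path))        # download from the server, then load
    return local_path
```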
进一步地,在虚拟对象候选区域3210中对虚拟对象候选项3211进行显示时,可以将预先设置的虚拟对象候选项3211进行显示。例如,每次进行虚实融合拍摄时只推送一个预先设置的虚拟对象候选项3211,也就是说,虚拟对象候选区域3210中只有一个预先设置的虚拟对象候选项3211可供用户选取。可以理解的是,也可以预先设置多个虚拟对象候选项3211。例如,在虚拟对象候选区域3210中推送多个预先设置的虚拟对象候选项3211,这时,虚拟对象候选区域3210中有多个虚拟对象候选项3211可供用户选取。其中,一个或多个与预先设置的虚拟对象候选项3211对应的虚拟对象可以预先存储在移动终端100中,也可以预先存储在其他设备中。
可选地,在虚拟对象候选区域3210中对虚拟对象候选项3211进行显示时,还可以根据当前摄像头捕获的现实环境画面对虚拟对象候选项3211进行推送。示例性的,可以预先给虚拟对象候选项3211进行分类,例如,可以是人物类、动物类、家电类或建筑类等,当摄像头捕获到现实环境画面后,移动终端100可以对当前的现实环境画面进行分析,以确定当前的现实环境画面与什么类别相关,例如,该类别可以是人物类、动物类、家电类或建筑类等。当移动终端100确定当前现实环境画面的类别后,可以将当前现实环境画面的类别与虚拟对象候选项3211的类别进行匹配,将匹配的虚拟对象候选项3211进行推送,例如,将与当前现实环境画面类别相关的虚拟对象候选项3211显示在虚拟对象候选区域3210中,以供用户选取。由此可以提高用户在候选虚拟对象中对虚拟对象进行选取的效率,进而提高用户的体验。
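As a minimal sketch of the recommendation step described above (pre-classifying candidate virtual objects and pushing only those whose category matches the scene recognized in the current frame), one might write the following; the CANDIDATE_LIBRARY contents, the category labels, and the function recommend_candidates are illustrative assumptions.

```python
from typing import Dict, List

# Hypothetical candidate library: each candidate virtual object is pre-tagged with a category.
CANDIDATE_LIBRARY: Dict[str, str] = {
    "car_1": "vehicle",
    "big_truck": "vehicle",
    "building": "building",
    "car_2": "vehicle",
}

def recommend_candidates(scene_category: str, library: Dict[str, str]) -> List[str]:
    """Return the candidate virtual objects whose category matches the scene category."""
    matches = [name for name, category in library.items() if category == scene_category]
    # Fall back to the full library so the candidate area 3210 is never empty.
    return matches or list(library)

# Example: if the current frame is classified as a vehicle/street scene,
# recommend_candidates("vehicle", CANDIDATE_LIBRARY) returns ["car_1", "big_truck", "car_2"].
```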
步骤104,响应于检测到的用户用于选取虚拟对象的操作,确定第一虚拟对象。
具体地,用户可以在虚拟对象候选区域3210中对虚拟对象候选项3211进行操作,以选取虚拟对象。例如,用户可以在虚拟对象候选区域3210中点击或拖拽虚拟对象候选项3211,以选定虚拟对象。可以理解的是,用户也可以通过其他操作选定虚拟对象,本申请实施例对此不作限定。移动终端100接收到用户选取虚拟对象的操作后,加载与用户选取的虚拟对象候选项3211对应的虚拟对象。其中,若虚拟对象已存储在移动终端100内,则移动终端100可以直接调用该虚拟对象并进行加载,使得该虚拟对象 显示在画面预览区域330中;若虚拟对象未存储在移动终端100内,则移动终端100可以根据虚拟对象候选项3211的链接地址下载虚拟对象,例如,该链接地址可以是服务器地址,则可以向服务器发起请求,以获取该虚拟对象。当移动终端100通过下载获取到虚拟对象后,可以对该虚拟对象进行加载,使得该虚拟对象显示在画面预览区域330中。其中,虚拟对象可以包括初始的状态,该初始状态可以包括虚拟对象默认的尺寸信息和默认的角度信息。尺寸信息可以用于标识虚拟对象的大小,例如,该虚拟对象可以通过长宽高等信息来标识尺寸信息。角度信息可以用于标识该虚拟对象的旋转角度,示例性的,该虚拟对象可以通过横向旋转角度和纵向旋转角度来标识角度信息。可以理解的是,在移动终端100对虚拟对象进行加载后,可以基于默认的尺寸信息和默认的角度信息对虚拟对象进行渲染后显示在预览界面中。
现以图3B为例进行说明,参考图3B,虚拟对象候选区域3210中有4个虚拟对象候选项3211,分别是左上角的小车1、右上角的大车、左下角的建筑及右下角的小车2。用户可以通过对左上角小车1的预览图像的操作,选取该小车1,例如,用户可以点击小车1的预览图像,选取小车1,可以理解的是,用户也可以通过其他操作选定虚拟对象,本申请实施例对此不作限定。
步骤105,获取第一虚拟对象的定位信息,基于该定位信息在预览界面中显示第一虚拟对象。
具体地,该定位信息可以包括上述步骤104中虚拟对象的尺寸信息和角度信息,此外,该定位信息还可以包括位置信息。位置信息用于标识虚拟对象在画面预览区域330中的坐标位置,例如,可以以虚拟对象的中心点为坐标位置,由此移动终端100可以记录下虚拟对象的中心点在画面预览区域330中的坐标位置。可以理解的是,画面预览区域330中显示的画面是现实环境画面,因此,成像之后,例如,生成与现实环境画面对应的图像之后,虚拟对象在画面预览区域330中的坐标可以转换为虚拟对象在显示环境画面图像中的坐标。
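The coordinate conversion mentioned above, from a position recorded in the picture preview area 330 to a position in the captured real environment image, can be approximated by a simple proportional mapping such as the sketch below; it assumes the preview and the captured image share the same field of view, which a real implementation would need to verify.

```python
from typing import Tuple

def preview_to_image_coords(px: int, py: int,
                            preview_size: Tuple[int, int],
                            image_size: Tuple[int, int]) -> Tuple[int, int]:
    """Map a point recorded in the preview area to the captured first image.

    Simple proportional mapping; cropping, rotation and sensor orientation are ignored.
    """
    preview_w, preview_h = preview_size
    image_w, image_h = image_size
    return round(px * image_w / preview_w), round(py * image_h / preview_h)

# Example: the centre point of the virtual object recorded at (540, 960) in a
# 1080x1920 preview maps to (1080, 1920) in a 2160x3840 capture.
```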
其中,该定位信息可以是虚拟对象默认的定位信息,例如,以虚拟对象默认的尺寸信息及角度信息,以及以默认的位置显示在画面预览区域330中。也可以以任意的坐标位置在画面预览区域330中对虚拟对象进行显示,例如,随机选取一个坐标位置显示该虚拟对象,本申请实施例对此不作限定。
以图3B为例,移动终端100接收到用户选取第一虚拟对象(例如,小车1)的操作后,获取与用户选取的虚拟对象候选项3211对应的虚拟对象,加载第一虚拟对象,并基于第一虚拟对象默认的定位信息将该第一虚拟对象显示在画面预览区域330中,由此可以得到如图3C所示的显示界面500。其中,若第一虚拟对象已存储在移动终端100内,则移动终端100可以直接调用该第一虚拟对象并进行加载,使得该第一虚拟对象显示在画面预览区域330中;若第一虚拟对象未存储在移动终端100内,则移动终端100可以根据虚拟对象候选项3211的链接地址下载第一虚拟对象,例如,该链接地址可以是服务器地址,则可以向服务器发起请求,以获取该第一虚拟对象。当移动终端100通过下载获取到第一虚拟对象后,可以对该第一虚拟对象进行加载,使得该第一虚拟对象显示在画面预览区域330中。
可选地,用户还可以通过对虚拟对象的操作,来改变虚拟对象在画面预览区域330 中的定位信息。例如,用户可以拖拽虚拟对象,以改变虚拟对象在画面预览区域330中的位置信息;用户可以旋转虚拟对象,以改变虚拟对象在画面预览区域330中的角度信息;用户可以对虚拟对象进行缩放,以改变虚拟对象在画面预览区域330中的尺寸信息等。响应于用户改变虚拟对象定位信息的操作,移动终端100可以记录下虚拟对象更新后的定位信息。
步骤106,根据摄像头捕获的现实环境画面获得当前环境光信息。
具体地,当画面预览区域330中显示虚拟对象后,根据摄像头捕获的现实环境画面获得当前环境光信息。其中,环境光信息可以包括光照角度和光照强度,光照角度可以用于表征当前光源照射的方向,光照强度可以用于表征光照的强弱和对象表面积被照明程度的量。
需要说明的是,环境光信息的获取可以基于图像分析的方法,例如,当摄像头捕获现实环境画面后,移动终端100中的处理器130可以调用图像分析方法,由此可以对摄像头捕获的显示环境画面进行分析,进而可以得到环境光信息。示例性的,可以检测出现实环境画面中的最亮点,然后可以根据最亮点估计出光源照射的方向,也就是光照角度;通过最亮点与次亮点之间的亮度差异与距离可以估计出光照强度。图像分析方法属于现有技术,在此不再赘述。
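As a rough sketch only of the brightest-point heuristic described above, and not the image analysis method actually used, the illumination angle and intensity could be estimated from a grayscale frame as follows; the intensity term here compares the brightest pixel with the frame average, a simplification of the brightest/second-brightest comparison mentioned in the text.

```python
from typing import Tuple
import numpy as np

def estimate_ambient_light(gray: np.ndarray) -> Tuple[float, float]:
    """Estimate (illumination angle in degrees, relative intensity) from a grayscale frame.

    The angle is taken as the direction from the image centre towards the brightest
    pixel; the intensity is the ratio of the brightest value to the frame average.
    """
    height, width = gray.shape
    row, col = np.unravel_index(np.argmax(gray), gray.shape)
    angle_deg = float(np.degrees(np.arctan2(height / 2.0 - row, col - width / 2.0)))
    intensity = float(gray[row, col] / (gray.mean() + 1e-6))
    return angle_deg, intensity
```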
可以理解的是,摄像头会随着移动终端100的移动而移动,因此,摄像头与光源的相对位置会发生变化,由此导致环境光信息会随着移动终端100的移动而改变。摄像头在移动终端100的移动过程中,可以实时地获取环境光信息。例如,当用户移动移动终端100时,摄像头也会随之移动,画面预览区域330中显示的现实环境画面会发生变化,相应地,环境光信息也发生了变化。
步骤107,根据环境光信息对当前画面预览区域中的第一虚拟对象进行渲染,得到第二虚拟对象。
具体地,当第一虚拟对象显示在移动终端100的画面预览区域330中后,移动终端100可以根据当前的环境光信息对第一虚拟对象进行渲染,由此可以得到第二虚拟对象。其中,渲染可以包括对第一虚拟对象更新明暗度和/或阴影,也就是说,第二虚拟对象在第一虚拟对象的基础上对明暗度和/或阴影进行更新。明暗度可以用于表征虚拟对象身上各部位的明暗区分,根据光源的光照角度和光照强度,虚拟对象的各部位会显示对应的明暗度。示例性的,当光源从虚拟对象的头顶照射下来时,则虚拟对象的头部比较亮,脚部则比较暗。阴影用于表征与虚拟对象对应的阴影部分,根据光源的光照角度和光照强度,虚拟对象产生对应的的阴影区域和阴影亮度。示例性的,当光源从虚拟对象的正前方照射过来时,背后会产生阴影。可以理解的是,原始的虚拟对象包含初始的明暗度,该初始的明暗度和初始的光源对应,因此,当获取到当前的环境光信息后,移动终端100可以基于当前的环境光信息对虚拟对象的明暗度进行更新;而原始的虚拟对象是不包含阴影的,因此,当获取到当前的环境光信息后,移动终端100可以基于当前的环境光信息对虚拟对象添加阴影。需要说明的是,阴影的生成是基于当前的环境光信息,然而,对于某些光源,是没有阴影生成的,例如,光源从虚拟对象的头部进行垂直投射,或者在某些阴天条件,环境光不足以生成阴影。
需要说明的是,虚拟对象明暗度的渲染和/或阴影的渲染可以通过预置的渲染模型 (例如,现有的3D引擎)实现。示例性的,移动终端100可以调用预置的3D引擎,将环境光信息及待渲染的第一虚拟对象输入预置的3D引擎,由此可以完成对第一虚拟对象的渲染,得到第二虚拟对象,由此可以使得第二虚拟对象与现实环境画面中的对象保持一致,例如,第二虚拟对象的明暗度和现实环境画面中的对象的明暗度保持一致,和/或第二虚拟对象的阴影和现实环境画面中的对象的阴影保持一致。其中,3D引擎属于现有技术,在此不再赘述。
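The shading and shadow computation is delegated to a preset rendering model (an existing 3D engine). Purely to make the idea concrete, the toy Lambertian diffuse term below shows how a light direction derived from the estimated illumination angle brightens surfaces facing the light and darkens those facing away; it stands in for, and is not, the 3D engine's renderer, and shadow projection is omitted.

```python
import numpy as np

def shade_vertices(normals: np.ndarray, light_dir: np.ndarray,
                   light_intensity: float, ambient: float = 0.2) -> np.ndarray:
    """Toy Lambertian shading for the vertices of a virtual object.

    normals: (N, 3) unit surface normals; light_dir: unit vector pointing from the
    surface towards the light source. Returns per-vertex brightness in [0, 1].
    """
    diffuse = np.clip(normals @ light_dir, 0.0, None) * light_intensity
    return np.clip(ambient + diffuse, 0.0, 1.0)

# A vertex whose normal points towards the light gets brightness ambient + intensity
# (clipped to 1), while a vertex facing away keeps only the ambient term, matching the
# example above (light from overhead: the top of the object is bright, the bottom dark).
```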
现以图3C为例进行说明,参考图3C,光源照射方向在左上方,现实环境中左上角树木的阴影在树木的右侧,然而虚拟对象(例如,小车1)没有阴影。当移动终端100的处理器130获取到环境光信息后,调用预置的3D引擎对虚拟对象进行渲染,由此可以得到如图3D所示的显示界面600。参考图3D,虚拟对象(例如,小车1)的明暗度发生了变化,与树木的明暗度保持一致;且虚拟对象添加了阴影,与树木的阴影保持一致。
进一步地,在画面预览区域330中用户还可以对虚拟对象进行变换。其中,对虚拟对象进行变换可以包括对虚拟对象进行位置变换、尺寸变换和/或角度变换。例如,用户可以拖动虚拟对象,以改变虚拟对象在画面预览区域330中的坐标位置;和/或用户可以缩放虚拟对象,以改变虚拟对象在画面预览区域330中的尺寸,和/或用户可以对虚拟对象进行旋转,以改变虚拟对象在画面预览区域330中的角度,其中,旋转可以包括水平方向的旋转和垂直方向的旋转,由此可以改变虚拟对象在画面预览区域330中的水平角度和垂直角度。
可以理解的是,当虚拟对象进行变换之后,移动终端100可以对变换后的虚拟对象进行重新渲染。例如,虚拟对象的角度变换之后,虚拟对象的各部位的角度也发生变化,因此,需要进行重新渲染,对虚拟对象的明暗度和/或阴影进行更新。
步骤108,响应于检测到的用户用于拍摄的操作,生成第三图像,其中,第三图像包括与现实环境画面对应的第一图像及与第二虚拟对象对应的第二图像。
具体地,用户可以通过拍摄操作将现实环境的画面拍摄下来,将现实环境的画面与虚拟对象的图像合成在一张图像中。示例性的,用户可以在拍摄操作区域310中按下拍摄按钮311,以实现拍摄功能,由此可以在移动终端100中生成第三图像。其中,第三图像通过第一图像与第二图像合成获得,第一图像可以是当前的现实环境画面,第二图像可以是与第二虚拟对象对应的图像。移动终端100接收到用户的拍摄操作后,生成第一图像及第二图像。接着,移动终端100获取第二图像在第一图像中的坐标位置。可以理解的是,移动终端100获取第二图像在第一图像中的坐标位置的步骤可以与生成第一图像及第二图像的步骤同时执行,也可以在生成第一图像及第二图像的步骤之前执行,本申请实施例对此不作特殊限定。基于上述坐标位置,移动终端100将第一图像与第二图像进行合成,例如,移动终端100可以将第二图像叠加在第一图像上,由此可以得到合成后的第三图像。其中,与第二虚拟对象对应的的第二图像的获取可以通过闪照的方式,也可以通过其他图像截取的方式,本申请实施例对此不作限定。
可以理解的是,图像的合成可以是通过图层合并的方式,例如,可以将第一图像作为第一图层,将第二图像作为第二图层,通过将第二图层叠加在第一图层上,由此 可以得到合成后的第三图像。也可以通过其他方式进行图像合成,本申请实施例对此不作特殊限定。
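The layer merging described above, using the first image as the bottom layer and overlaying the second image at the recorded coordinates to obtain the third image, corresponds to ordinary alpha compositing. A minimal sketch with Pillow, assuming the second image carries an alpha channel that is transparent everywhere outside the second virtual object and its shadow, is:

```python
from PIL import Image

def compose_layers(first_image: Image.Image, second_image: Image.Image,
                   top_left: tuple) -> Image.Image:
    """Overlay the rendered virtual-object layer on the real-environment layer.

    top_left is the paste position derived from the recorded positioning information.
    """
    third_image = first_image.convert("RGBA").copy()
    overlay = second_image.convert("RGBA")
    third_image.paste(overlay, top_left, mask=overlay)  # alpha channel used as the mask
    return third_image

# Usage: compose_layers(Image.open("first.jpg"), Image.open("second.png"), (320, 540))
```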
现结合图4进行说明,如图所示,图像710为与摄像头捕获的现实环境画面对应的第一图像,图像720为经过明暗度和/或阴影渲染后得到的第二虚拟对象的平面图像。通过将图像710和图像720进行合成,可以得到图像730,图像730为最终合成的第三图像。其中,图像710中的现实环境画面的明暗度和/或阴影与图像720中第二虚拟对象的明暗度和/或阴影一致。
进一步地,当移动终端100生成第三图像之后,还可以将该第三图像进行显示,并可以将第三图像保存在移动终端100的相册中,以便用户进行浏览。可以理解的是,第三图像也可以存储在云端或服务器中,本申请实施例对此不作限定。
本实施例中,通过移动终端在捕获现实环境画面的过程中实时获取当前的环境光信息,并基于该环境光信息对虚拟对象的明暗度和/或阴影进行实时渲染,由此可以保证渲染后的虚拟对象的明暗度和/或阴影与现实环境的明暗度和/或阴影保持一致,避免由于两者的不一致导致图像失真,进而可以提高用户的观看体验。
如图5所示为本申请图像处理方法另一个实施例的流程图,该方法可以应用于上述移动终端100中,包括:
步骤201,启动摄像头。
具体地,该步骤201与步骤101相同,在此不再赘述。
步骤202,摄像头捕获现实环境画面。
具体地,该步骤202与步骤102相同,在此不再赘述。
步骤203,响应于检测到的用户用于选取虚实融合拍摄模式的操作,显示候选虚拟对象。
具体地,该步骤203与步骤103相同,在此不再赘述。
步骤204,响应于检测到的用户用于选取虚拟对象的操作,确定第一虚拟对象。
具体地,该步骤204与步骤104相同,在此不再赘述。
步骤205,获取第一虚拟对象的定位信息,基于该定位信息在预览界面中显示第一虚拟对象。
具体地,该步骤205与步骤105相同,在此不再赘述。
步骤206,响应于检测到的用户用于拍摄的操作,生成第一图像。
具体地,用户可以通过拍摄操作将现实环境的画面拍摄下来,以得到与现实环境画面对应的第一图像。例如,用户可以在拍摄操作区域310中按下拍摄按钮311。移动终端100收到用户的拍摄操作后,可以生成第一图像,第一图像可以是摄像头捕获的现实场景的画面。
步骤207,根据第一图像获得环境光信息。
具体地,根据第一图像进行图像分析,获得与该第一图像对应的环境光信息。其中,对第一图像进行图像分析可以通过步骤106中的图像分析方法,由此可以得到对应的环境光信息,例如,光照角度和光照强度。
步骤208,根据环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象。
具体地,根据环境光信息对第一虚拟对象进行渲染,例如,可以通过步骤107中 调用3D引擎的方式对第一虚拟对象进行渲染,由此可以得到第二虚拟对象。其中,第二虚拟对象是经过明暗度和/或阴影调整后的虚拟对象。由此,可以使得第二虚拟对象的明暗度和/或阴影与现实环境保持一致。
步骤209,获取与第二虚拟对象对应的第二图像,将第二图像与第一图像进行合成,得到第三图像。
具体地,获取第二图像以及第二图像在第一图像中的坐标位置,具体获取的方式可以参考步骤108,在此不再赘述。根据坐标位置将第二图像与第一图像进行合成,由此可以得到第三图像,图像合成的方式可以参考步骤108,在此不再赘述。
本实施例中,通过移动终端在执行拍摄操作,生成与现实环境画面对应的图像后,获取与该现实环境图像对应的环境光信息,并基于该环境光信息对虚拟对象进行一次明暗度和/或阴影的渲染,可以保证虚拟对象的明暗度和/或阴影与现实环境的明暗度和/或阴影一致之外,还可以降低实时渲染导致的资源消耗,避免给移动终端带来过大的负载,由此可以提高移动终端的处理效率。
图6为本申请图像处理装置一个实施例的结构示意图,如图6所示,上述图像处理装置60可以包括:预览模块61、选取模块62、获取模块63及合成模块64;
预览模块61,用于响应于检测到的第一操作,显示预览界面;其中,预览界面包括现实环境画面;
选取模块62,用于响应于检测到的第二操作,确定第一虚拟对象;
获取模块63,用于获取第一虚拟对象的定位信息,基于定位信息在预览界面中显示第一虚拟对象;
合成模块64,用于基于定位信息将第一图像与第二图像进行合成,得到第三图像;其中,第一图像包括现实环境画面,第二图像包括与定位信息对应的第二虚拟对象,第二虚拟对象由第一虚拟对象基于环境光信息进行渲染后生成,环境光信息与现实环境画面对应。
其中一种可能的实现方式中,上述合成模块64包括:第一生成单元641、识别单元642、渲染单元643、第二生成单元644及合成单元645;
第一生成单元641,用于响应于检测到的第三操作,生成第一图像;
识别单元642,用于基于第一图像获取与第一图像对应的环境光信息;
渲染单元643,用于基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;
第二生成单元644,用于基于第二虚拟对象生成第二图像;
合成单元645,用于基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,上述合成模块64包括:获取单元646、渲染单元647、生成单元648及合成单元649;
获取单元646,用于获取与现实环境画面对应的环境光信息;
渲染单元647,用于基于环境光信息对第一虚拟对象进行渲染,得到第二虚拟对象;
生成单元648,用于响应于检测到的第三操作,生成第一图像及第二图像;
合成单元649,用于基于定位信息将第一图像和第二图像进行合成,得到第三图像。
其中一种可能的实现方式中,上述获取模块63还用于响应于检测到的第四操作,确定第一虚拟对象在预览界面中的定位信息。
其中一种可能的实现方式中,上述定位信息包括第一虚拟对象的默认定位信息。
其中一种可能的实现方式中,上述选取模块62包括:显示单元621及选取单元622;
显示单元621,用于响应于检测到的第二操作,显示至少一个候选虚拟对象;
选取单元622,用于响应于检测到的第五操作,在候选虚拟对象中确定第一虚拟对象。
其中一种可能的实现方式中,上述显示单元621还用于响应于检测到的第二操作,对预览界面中现实环境的类型进行识别,得到环境类型;基于环境类型推荐显示候选虚拟对象。
其中一种可能的实现方式中,上述合成模块64还用于将第一虚拟对象及环境光信息输入预置渲染模型,使得预置渲染模型对第一虚拟对象的明暗度和/或阴影进行渲染,得到第二虚拟对象,其中,第二虚拟对象包括明暗度和/或阴影。
图6所示实施例提供的图像处理装置可用于执行本申请图2-图5所示方法实施例的技术方案,其实现原理和技术效果可以进一步参考方法实施例中的相关描述。
应理解以上图6所示的图像处理装置的各个模块的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。且这些模块可以全部以软件通过处理元件调用的形式实现;也可以全部以硬件的形式实现;还可以部分模块以软件通过处理元件调用的形式实现,部分模块通过硬件的形式实现。例如,检测模块可以为单独设立的处理元件,也可以集成在电子设备的某一个芯片中实现。其它模块的实现与之类似。此外这些模块全部或部分可以集成在一起,也可以独立实现。在实现过程中,上述方法的各步骤或以上各个模块可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的指令完成。
例如,以上这些模块可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(Application Specific Integrated Circuit;以下简称:ASIC),或,一个或多个微处理器(Digital Singnal Processor;以下简称:DSP),或,一个或者多个现场可编程门阵列(Field Programmable Gate Array;以下简称:FPGA)等。再如,这些模块可以集成在一起,以片上系统(System-On-a-Chip;以下简称:SOC)的形式实现。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对移动终端100的结构限定。在本申请另一些实施例中,移动终端100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
可以理解的是,上述移动终端为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取 决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明实施例的范围。
本申请实施例可以根据上述方法示例对上述移动终端等进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本发明实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器130执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (18)

  1. 一种图像处理方法,应用于移动终端,所述移动终端上设置有至少一个摄像头,所述方法包括:
    响应于检测到的第一操作,显示预览界面;其中,所述预览界面包括现实环境画面;
    确定第一虚拟对象;
    获取所述第一虚拟对象的定位信息,基于所述定位信息在所述预览界面中显示所述第一虚拟对象;
    基于所述定位信息将第一图像与第二图像进行合成,得到第三图像;其中,所述第一图像包括所述现实环境画面,所述第二图像包括与所述定位信息对应的第二虚拟对象,所述第二虚拟对象由所述第一虚拟对象基于环境光信息进行渲染后生成,所述环境光信息与所述现实环境画面对应。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述定位信息将第一图像与第二图像进行合成,得到第三图像包括:
    响应于检测到的第二操作,生成所述第一图像;
    基于所述第一图像获取与所述第一图像对应的环境光信息;
    基于所述环境光信息对所述第一虚拟对象进行渲染,得到所述第二虚拟对象;
    基于所述第二虚拟对象生成所述第二图像;
    基于所述定位信息将所述第一图像和所述第二图像进行合成,得到所述第三图像。
  3. 根据权利要求1所述的方法,其特征在于,所述基于所述定位信息将第一图像与第二图像进行合成,得到第三图像包括:
    获取与所述现实环境画面对应的环境光信息;
    基于所述环境光信息对所述第一虚拟对象进行渲染,得到所述第二虚拟对象;
    响应于检测到的第三操作,生成所述第一图像及所述第二图像;基于所述定位信息将所述第一图像和所述第二图像进行合成,得到所述第三图像。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述获取所述第一虚拟对象的定位信息包括:
    响应于检测到的第四操作,确定所述第一虚拟对象在所述预览界面中的定位信息。
  5. 根据权利要求1-3任一项所述的方法,其特征在于,所述定位信息包括所述第一虚拟对象的默认定位信息。
  6. 根据权利要求1-3任一项所述的方法,其特征在于,所述确定第一虚拟对象包括:
    显示至少一个候选虚拟对象;
    响应于检测到的所述第五操作,在所述候选虚拟对象中确定所述第一虚拟对象。
  7. 根据权利要求6所述的方法,其特征在于,所述显示至少一个候选虚拟对象包括:
    识别所述预览界面中现实环境的类型,得到环境类型;
    基于所述环境类型推荐显示所述候选虚拟对象。
  8. 根据权利要求1所述的方法,其特征在于,所述第二虚拟对象由所述第一虚拟 对象基于环境光信息进行渲染后生成包括:
    将所述第一虚拟对象及所述环境光信息输入预置渲染模型,使得所述预置渲染模型对所述第一虚拟对象的明暗度和/或阴影进行渲染,得到所述第二虚拟对象,其中,所述第二虚拟对象包括明暗度和/或阴影。
  9. 一种移动终端,其特征在于,包括:显示器、处理器、存储器,所述存储器用于存储计算机程序代码,所述计算机程序代码包括指令,其中,所述指令被所述处理器运行而使得所述移动终端执行以下步骤:
    响应于检测到的第一操作,显示预览界面;其中,所述预览界面包括现实环境画面;
    确定第一虚拟对象;
    获取所述第一虚拟对象的定位信息,基于所述定位信息在所述预览界面中显示所述第一虚拟对象;
    基于所述定位信息将第一图像与第二图像进行合成,得到第三图像;其中,所述第一图像包括所述现实环境画面,所述第二图像包括与所述定位信息对应的第二虚拟对象,所述第二虚拟对象由所述第一虚拟对象基于环境光信息进行渲染后生成,所述环境光信息与所述现实环境画面对应。
  10. 根据权利要求9所述的移动终端,其特征在于,当执行所述基于所述定位信息将第一图像与第二图像进行合成,得到第三图像时,所述指令被所述处理器运行而使得所述移动终端具体执行以下步骤:
    响应于检测到的第二操作,生成所述第一图像;
    基于所述第一图像获取与所述第一图像对应的环境光信息;
    基于所述环境光信息对所述第一虚拟对象进行渲染,得到所述第二虚拟对象;
    基于所述第二虚拟对象生成所述第二图像;
    基于所述定位信息将所述第一图像和所述第二图像进行合成,得到所述第三图像。
  11. 根据权利要求9所述的移动终端,其特征在于,当执行所述基于所述定位信息将第一图像与第二图像进行合成,得到第三图像时,所述指令被所述处理器运行而使得所述移动终端具体执行以下步骤:
    获取与所述现实环境画面对应的环境光信息;
    基于所述环境光信息对所述第一虚拟对象进行渲染,得到所述第二虚拟对象;
    响应于检测到的第三操作,生成所述第一图像及所述第二图像;
    基于所述定位信息将所述第一图像和所述第二图像进行合成,得到所述第三图像。
  12. 根据权利要求9-11任一项所述的移动终端,其特征在于,当执行所述获取所述第一虚拟对象的定位信息时,所述指令被所述处理器运行而使得所述移动终端具体执行以下步骤:
    响应于检测到的第四操作,确定所述第一虚拟对象在所述预览界面中的定位信息。
  13. 根据权利要求9-11任一项所述的移动终端,其特征在于,所述定位信息包括所述第一虚拟对象的默认定位信息。
  14. 根据权利要求9-11任一项所述的移动终端,其特征在于,当执行所述确定第一虚拟对象时,所述指令被所述处理器运行而使得所述移动终端具体执行以下步骤:
    显示至少一个候选虚拟对象;
    响应于检测到的第五操作,在所述候选虚拟对象中确定所述第一虚拟对象。
  15. 根据权利要求14所述的移动终端,其特征在于,当执行所述显示至少一个候选虚拟对象时,所述指令被所述处理器运行而使得所述移动终端具体执行以下步骤:
    识别所述预览界面中现实环境的类型,得到环境类型;
    基于所述环境类型推荐显示所述候选虚拟对象。
  16. 根据权利要求9所述的移动终端,其特征在于,当执行所述第二虚拟对象由所述第一虚拟对象基于环境光信息进行渲染后生成时,所述指令被所述处理器运行而使得所述移动终端具体执行以下步骤:
    将所述第一虚拟对象及所述环境光信息输入预置渲染模型,使得所述预置渲染模型对所述第一虚拟对象的明暗度和/或阴影进行渲染,得到所述第二虚拟对象,其中,所述第二虚拟对象包括明暗度和/或阴影。
  17. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在所述移动终端上运行时,使得所述移动终端执行如权利要求1-8中任一项所述图像处理的方法。
  18. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-8中任一项所述图像处理的方法。
PCT/CN2021/110203 2020-09-01 2021-08-03 图像处理方法、移动终端及存储介质 WO2022048373A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21863441.8A EP4195664A4 (en) 2020-09-01 2021-08-03 IMAGE PROCESSING METHOD, MOBILE TERMINAL AND STORAGE MEDIUM
US18/043,445 US20230334789A1 (en) 2020-09-01 2021-08-03 Image Processing Method, Mobile Terminal, and Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010902601.7 2020-09-01
CN202010902601.7A CN114125421A (zh) 2020-09-01 2020-09-01 图像处理方法、移动终端及存储介质

Publications (1)

Publication Number Publication Date
WO2022048373A1 true WO2022048373A1 (zh) 2022-03-10

Family

ID=74855955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110203 WO2022048373A1 (zh) 2020-09-01 2021-08-03 图像处理方法、移动终端及存储介质

Country Status (4)

Country Link
US (1) US20230334789A1 (zh)
EP (1) EP4195664A4 (zh)
CN (2) CN114125421A (zh)
WO (1) WO2022048373A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640838A (zh) * 2022-03-15 2022-06-17 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125421A (zh) * 2020-09-01 2022-03-01 华为技术有限公司 图像处理方法、移动终端及存储介质
CN113596572A (zh) * 2021-07-28 2021-11-02 Oppo广东移动通信有限公司 一种语音识别方法、装置、存储介质及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103207728A (zh) * 2012-01-12 2013-07-17 三星电子株式会社 提供增强现实的方法和支持该方法的终端
CN105681684A (zh) * 2016-03-09 2016-06-15 北京奇虎科技有限公司 基于移动终端的图像实时处理方法及装置
CN108509887A (zh) * 2018-03-26 2018-09-07 深圳超多维科技有限公司 一种获取环境光照信息方法、装置和电子设备
CN110365907A (zh) * 2019-07-26 2019-10-22 维沃移动通信有限公司 一种拍照方法、装置及电子设备
CN112422945A (zh) * 2020-09-01 2021-02-26 华为技术有限公司 图像处理方法、移动终端及存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9552674B1 (en) * 2014-03-26 2017-01-24 A9.Com, Inc. Advertisement relevance
JP2016139199A (ja) * 2015-01-26 2016-08-04 株式会社リコー 画像処理装置、画像処理方法、およびプログラム
CN106157363A (zh) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 一种基于增强现实的拍照方法、装置和移动终端
CN108182730B (zh) * 2018-01-12 2022-08-12 北京小米移动软件有限公司 虚实对象合成方法及装置
CN108765542B (zh) * 2018-05-31 2022-09-09 Oppo广东移动通信有限公司 图像渲染方法、电子设备和计算机可读存储介质
CN108958475B (zh) * 2018-06-06 2023-05-02 创新先进技术有限公司 虚拟对象控制方法、装置及设备
CN110021071B (zh) * 2018-12-25 2024-03-12 创新先进技术有限公司 一种增强现实应用中的渲染方法、装置及设备
US10692277B1 (en) * 2019-03-21 2020-06-23 Adobe Inc. Dynamically estimating lighting parameters for positions within augmented-reality scenes using a neural network
CN111260769B (zh) * 2020-01-09 2021-04-13 北京中科深智科技有限公司 一种基于动态光照变化的实时渲染方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103207728A (zh) * 2012-01-12 2013-07-17 三星电子株式会社 提供增强现实的方法和支持该方法的终端
CN105681684A (zh) * 2016-03-09 2016-06-15 北京奇虎科技有限公司 基于移动终端的图像实时处理方法及装置
CN108509887A (zh) * 2018-03-26 2018-09-07 深圳超多维科技有限公司 一种获取环境光照信息方法、装置和电子设备
CN110365907A (zh) * 2019-07-26 2019-10-22 维沃移动通信有限公司 一种拍照方法、装置及电子设备
CN112422945A (zh) * 2020-09-01 2021-02-26 华为技术有限公司 图像处理方法、移动终端及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4195664A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640838A (zh) * 2022-03-15 2022-06-17 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质
CN114640838B (zh) * 2022-03-15 2023-08-25 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质

Also Published As

Publication number Publication date
US20230334789A1 (en) 2023-10-19
EP4195664A1 (en) 2023-06-14
CN114125421A (zh) 2022-03-01
CN112422945A (zh) 2021-02-26
EP4195664A4 (en) 2024-02-21

Similar Documents

Publication Publication Date Title
WO2022048373A1 (zh) 图像处理方法、移动终端及存储介质
US10210664B1 (en) Capture and apply light information for augmented reality
JP6627861B2 (ja) 画像処理システムおよび画像処理方法、並びにプログラム
KR101737725B1 (ko) 컨텐츠 생성 툴
US9324305B2 (en) Method of synthesizing images photographed by portable terminal, machine-readable storage medium, and portable terminal
US20150185825A1 (en) Assigning a virtual user interface to a physical object
JP5807686B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP2022537614A (ja) マルチ仮想キャラクターの制御方法、装置、およびコンピュータプログラム
US20150187137A1 (en) Physical object discovery
US20230377189A1 (en) Mirror-based augmented reality experience
TW201346640A (zh) 影像處理裝置及電腦程式產品
TWI721466B (zh) 基於擴增實境的互動方法及裝置
CN111159449B (zh) 一种图像显示方法及电子设备
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
US10504264B1 (en) Method and system for combining images
WO2021039856A1 (ja) 情報処理装置、表示制御方法および表示制御プログラム
WO2017147909A1 (zh) 目标设备的控制方法和装置
WO2022048372A1 (zh) 图像处理方法、移动终端及存储介质
CN115439171A (zh) 商品信息展示方法、装置及电子设备
GB2598452A (en) 3D object model reconstruction from 2D images
CN112887601A (zh) 拍摄方法、装置及电子设备
WO2023076909A1 (en) Point and clean
JP6304305B2 (ja) 画像処理装置、画像処理方法及びプログラム
CN116126133A (zh) 虚拟对象的交互方法、装置、设备、存储介质和程序产品
CN117930978A (zh) 一种博物馆文物ar交互方法、系统、设备与存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863441

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021863441

Country of ref document: EP

Effective date: 20230309

NENP Non-entry into the national phase

Ref country code: DE