WO2024022349A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2024022349A1
WO2024022349A1 · PCT/CN2023/109158 · CN2023109158W
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
preview image
focus
target object
target
Prior art date
Application number
PCT/CN2023/109158
Other languages
English (en)
French (fr)
Inventor
周燎
郭浩龙
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2024022349A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • The present application belongs to the field of image processing technology, and specifically relates to an image processing method, an apparatus, an electronic device, and a storage medium.
  • Augmented Reality (AR) technology combines real-world information with virtual-world information, that is, it places virtual objects in real scenes, which can make users' work and life more engaging and improve the user experience. With the continuous development of AR technology, its use in the shooting applications of electronic devices is becoming increasingly common.
  • When using the AR shooting function of a camera application, an electronic device can first capture images of real objects; when virtual objects such as people or animals are added to the captured images, the electronic device can composite the captured images of real objects with the virtual objects to obtain a picture containing both an image of a real object and an image of a virtual object.
  • The purpose of the embodiments of the present application is to provide an image processing method, an apparatus, an electronic device, and a storage medium, which can solve the problem of poor image effects captured by electronic devices in AR shooting scenes.
  • Embodiments of the present application provide an image processing method.
  • the image processing method includes: displaying a shooting preview image.
  • the shooting preview image includes a first object and a second object.
  • The first object is a virtual shooting object, and the second object is an object in the shooting scene corresponding to the shooting preview image; the shooting preview image is updated with the target object as the focus object, where the target object includes one of the following: the first object or the second object.
  • Embodiments of the present application provide an image processing device, which includes a display module and an update module.
  • a display module is used to display a shooting preview image.
  • the shooting preview image includes a first object and a second object.
  • the first object is a virtual shooting object
  • the second object is an object in the shooting scene corresponding to the shooting preview image.
  • The update module is used to update the shooting preview image with the target object displayed by the display module as the focus object; wherein the target object includes one of the following: the first object or the second object.
  • Embodiments of the present application provide an electronic device.
  • the electronic device includes a processor and a memory.
  • the memory stores programs or instructions that can be run on the processor.
  • When the programs or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
  • embodiments of the present application provide a readable storage medium.
  • Programs or instructions are stored on the readable storage medium, and when the programs or instructions are executed by a processor, the steps of the method described in the first aspect are implemented.
  • Embodiments of the present application provide a chip.
  • the chip includes a processor and a communication interface.
  • the communication interface is coupled to the processor.
  • The processor is used to run programs or instructions to implement the method described in the first aspect.
  • embodiments of the present application provide a computer program product, the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the method as described in the first aspect.
  • the electronic device determines the target object from the first object and the second object, performs focus processing on the target object, and performs blur processing on other image areas.
  • In the embodiments of the present application, after the first object is added to the shooting scene, the electronic device can re-select the focus subject in the shooting scene based on the first object and the preset shooting subject, and re-trigger the focus and blur strategy to generate a better photo or video effect. That is, after the first object is added and the composition changes, the electronic device performs refocusing and blurring according to the current composition, ensuring that the resulting image meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the quality of images captured by electronic devices in AR shooting scenes.
  • Figure 1 is one of the flow charts of an image processing method provided by an embodiment of the present application.
  • Figure 2 is the second flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of an example of a preview interface and virtual objects in the default focus mode provided by an embodiment of the present application
  • Figure 4 is the third flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of an interface example of an actual focus subject shot in an indoor shooting scene provided by an embodiment of the present application
  • Figure 6 is a schematic diagram of an interface example for determining a virtual object as a focus subject provided by an embodiment of the present application
  • Figure 7 is a schematic diagram of an interface example for determining the original actual focus subject as the focus subject provided by the embodiment of the present application.
  • Figure 8 is one of the schematic diagrams of an interface example of the before and after refocusing and blurring effects provided by the embodiment of the present application;
  • Figure 9 is a second schematic diagram of an interface example of refocusing and blurring effects provided by an embodiment of the present application.
  • Figure 10 is the third schematic diagram of an interface example of the before and after refocusing and blurring effects provided by the embodiment of the present application.
  • Figure 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • Figure 12 is one of the schematic diagrams of the hardware structure of an electronic device provided by an embodiment of the present application.
  • Figure 13 is a second schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are usually of one type, and the number of such objects is not limited; for example, the first object may be one object or multiple objects.
  • "And/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the related objects are in an "or" relationship.
  • In the embodiments of the present application, the electronic device can select the focus subject in the shooting scene based on the first object and the preset shooting subject, and re-trigger the focus and blur strategy to generate a better photo or video effect. That is, after the first object is added and the composition changes, the electronic device performs refocusing and blurring based on the current composition, ensuring that the resulting image meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the quality of images captured by electronic devices in AR shooting scenes.
  • FIG. 1 shows a flow chart of an image processing method provided by an embodiment of the present application. The method can be applied to electronic devices. As shown in Figure 1, the image processing method provided by the embodiment of the present application may include the following steps 201 and 202.
  • Step 201 The electronic device displays the shooting preview image.
  • The above-mentioned shooting preview image includes a first object and a second object.
  • the first object is a virtual shooting object
  • the second object is an object in the shooting scene corresponding to the shooting preview image.
  • After the electronic device turns on the camera function, it can display the shooting preview interface, start collecting the shooting preview image through the camera, and display the shooting preview image in the shooting preview interface; at this time the shooting preview image includes the second object. After the user adds the first object, the electronic device can display the first object in the shooting preview interface, and the shooting preview image then includes the first object and the second object.
  • the above-mentioned first object and second object may include but are not limited to at least one of the following: human objects, animal objects, other categories of objects (such as buildings, flowers, trees, etc.), etc., specifically can be determined according to actual usage requirements, and is not limited in the embodiments of this application.
  • The above-mentioned second object can be understood as: an object in the actual shooting scene captured by the camera of the electronic device.
  • The above-mentioned first object can be understood as: one or more virtual objects projected into the actual/real scene through AR technology.
  • Step 202 The electronic device uses the target object as the focus object and updates the shooting preview image.
  • The target object includes one of the following: the first object or the second object.
  • The electronic device can compare the saliency of all objects in the shooting preview image, that is, select the most salient object from the first object and the second object as the focus object, and use the focus object as the main object to be focused on.
  • The above-mentioned saliency may be either of the following: the visual prominence of each object in the shooting preview image, or the weight of each object's characteristic information (such as category, position, etc.).
  • In this way, after the first object is added to the shooting scene, the electronic device can select the focus subject in the shooting scene based on the first object and the preset shooting subject, and re-trigger the focus and blur strategy to generate a better photo or video effect. That is, after the first object is added and the composition changes, the electronic device performs refocusing and blurring based on the current composition, ensuring that the resulting image meets the shooting needs of the current scene, which improves the focus and blur effect of the overall image and thus the quality of images captured in AR shooting scenes.
  • Optionally, the image processing method provided by the embodiment of the present application also includes the following step 203, and the above step 202 can be specifically implemented through the following step 202a or step 202b.
  • Step 203 The electronic device determines whether the second object includes a preset shooting subject.
  • Step 202a When the second object does not include the preset shooting subject, the electronic device uses the first object as the focus object and updates the shooting preview image.
  • In this case, the electronic device can determine the first object as the target object, that is, the main object to be focused on. When the saliency of every photographed object in the second object is less than a threshold, or the saliency differences among the photographed objects in the second object are less than a threshold, the electronic device determines that there is no focus object in the second object.
  • The above-mentioned preset shooting subjects may include but are not limited to at least one of the following: human objects, animal objects, and other categories of objects (such as buildings, flowers, trees, etc.); the specific objects can be determined according to actual usage needs, and the embodiments of this application do not limit this.
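The decision in steps 203, 202a, and 202b can be sketched as follows. This is a hedged illustration only, not the claimed implementation: the category set, the object dictionaries, and the weight function are all assumptions made for the example.

```python
# Hypothetical sketch of steps 203 / 202a / 202b: if the real scene contains
# no preset shooting subject, the virtual (first) object becomes the focus
# object; otherwise the target is chosen among the preset subjects and the
# first object by comparing feature weights.
PRESET_CATEGORIES = {"person", "animal", "building"}  # assumed category set

def choose_focus(first_object, second_objects, weight_fn):
    presets = [o for o in second_objects if o["category"] in PRESET_CATEGORIES]
    if not presets:                        # step 202a: no preset subject found
        return first_object
    candidates = presets + [first_object]  # step 202b: compare feature weights
    return max(candidates, key=weight_fn)

# Illustration with an assumed per-object weight (larger = more salient).
weight = lambda o: o["weight"]
first = {"name": "virtual_elephant", "category": "virtual", "weight": 0.8}
scene = [{"name": "toy_bear", "category": "animal", "weight": 0.6}]
print(choose_focus(first, scene, weight)["name"])  # -> virtual_elephant
```

When the scene contains no preset subject at all, `choose_focus` returns the first object directly, mirroring step 202a.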
  • the electronic device in the default focus mode, when the second object does not include the preset shooting subject, the electronic device can use the first object as the focus object and update the shooting preview image.
  • the default focus mode can be understood as: the system default shooting mode of the electronic device after the camera function is turned on. In this shooting mode, the electronic device can automatically focus and blur according to the subject.
  • For example, the electronic device displays a preview interface 10, which includes an image collected by a camera of the electronic device, and the image includes the second object. The electronic device detects that there is no actual focus object in the second object, so the electronic device can determine the newly added first object shown in (B) of Figure 3 as the target object, so that the target object is focused on.
  • Step 202b If the second object includes at least one preset shooting subject, the electronic device determines the target object among the at least one preset shooting subject and the first object, uses the target object as the focus object, and updates the shooting preview image.
  • When the second object includes at least one preset shooting subject, the electronic device can determine the object with the largest feature weight among the at least one preset shooting subject and the first object as the focus object, that is, the main object to be focused on.
  • Optionally, among the at least one preset shooting subject and the first object, the electronic device may determine the object whose saliency is greater than a threshold, or whose saliency difference from the other objects is greater than a threshold, as the target object.
  • For example, the electronic device is in an indoor shooting scene, and the preset shooting subject in the shooting scene is the toy bear 12. After adding the first object, the electronic device can compare the category, position in the shooting scene, size, and distance from the camera of the first object and the toy bear 12 to determine the new focus object.
  • In this way, in the default focus mode, the electronic device can determine the focus object by judging whether there is a preset shooting subject in the second object: when there is no preset shooting subject, the first object is determined as the focus object; when there is, the focus object is determined by the feature weights of the preset shooting subject and the first object. This improves the accuracy of determining the focus object and ensures that the determined focus object is more consistent with the current shooting scene.
  • step 202b can be specifically implemented through the following steps 202b1 to 202b3.
  • Step 202b1 The electronic device obtains the first feature weight of the first object in the preview image and at least one second feature weight of at least one preset shooting subject in the preview image.
  • the feature weights (such as the first feature weight, the second feature weight, the target feature weight, etc.) described in the embodiments of the present application are used to indicate the shooting saliency of the photographed object in the preview image.
  • In the default focus mode, the electronic device can calculate the preset shooting subject of the current shooting scene based on the current shooting scene information and complete the initial focusing; in addition, the electronic device can record the second characteristic information of the preset shooting subject in the shooting scene at this time.
  • The above-mentioned second characteristic information includes at least one of the following: the position information of the preset shooting subject in the shooting scene, the screen size of the preset shooting subject in the shooting scene, the distance between the preset shooting subject and the camera, and the category information of the preset shooting subject.
  • In the default focus mode, the electronic device can use a virtual object generation algorithm and 3D measurement algorithms, such as simultaneous localization and mapping (SLAM) and time of flight (TOF), and can also use methods such as phase detection auto focus (PDAF) or contrast detection auto focus (CDAF), to obtain the second characteristic information of the preset shooting subject in the current shooting scene.
  • Optionally, the first characteristic information of the first object may include at least one of the following: the screen size occupied by the first object in the shooting scene, the category information of the first object, the position information of the first object in the shooting scene, and the distance between the first object and the camera.
  • Among them, the screen size occupied by the first object in the shooting scene and the category information of the first object are obtained by the electronic device using a virtual object generation algorithm; the position information of the first object in the shooting scene and the distance between the first object and the camera are obtained by the electronic device using a 3D measurement algorithm.
  • The electronic device may assign at least one weight value to the first object based on at least one item of the first characteristic information, that is, the category information of the first object, its position information in the shooting scene, its screen size in the shooting scene, and the distance between the first object and the camera; each weight value corresponds to one item of the first characteristic information, thereby obtaining the first final weight, that is, the first feature weight.
  • the method of determining the second feature weight for each of the at least one preset shooting subject is similar to the method of determining the first feature weight, and will not be described again here.
  • the electronic device may perform variance calculation on at least one weight value to obtain the first feature weight.
  • the electronic device may use the weight value of the information with the highest importance as the first feature weight according to the importance of each piece of information in the first feature information.
  • Optionally, the electronic device can calculate the feature weights of the at least one preset shooting subject and the first object according to the following principle, that is, obtain the distance weight of each of the at least one preset shooting subject and the first object: the closer to the camera, the greater the weight; the further away from the camera, the smaller the weight.
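The distance principle above, together with the screen-size and position items mentioned earlier, can be illustrated with a small sketch. The normalization and the plain-sum combination are assumptions for the example; the embodiment only states the monotonic principle (closer to the camera, larger weight).

```python
# Hypothetical feature-weight sketch: combine per-attribute weights for
# distance (closer -> larger), screen size (larger -> larger), and position
# (closer to the frame centre -> larger). The combination is a plain sum;
# the patent does not fix a specific formula.
def feature_weight(obj, frame_w=1000, frame_h=1000, max_dist=10.0):
    w_dist = 1.0 - min(obj["distance_m"], max_dist) / max_dist
    w_size = obj["area_px"] / (frame_w * frame_h)
    cx, cy = obj["center"]
    off = ((cx - frame_w / 2) ** 2 + (cy - frame_h / 2) ** 2) ** 0.5
    half_diag = ((frame_w / 2) ** 2 + (frame_h / 2) ** 2) ** 0.5
    w_pos = 1.0 - off / half_diag
    return w_dist + w_size + w_pos

# Assumed example objects for illustration only.
bear = {"distance_m": 2.0, "area_px": 90_000, "center": (500, 520)}
elephant = {"distance_m": 1.0, "area_px": 200_000, "center": (480, 500)}
target = max([("toy_bear", bear), ("elephant", elephant)],
             key=lambda kv: feature_weight(kv[1]))[0]
print(target)  # -> elephant
```

Here the elephant wins because it is closer to the camera and occupies more of the frame; any monotonic per-attribute weighting combined differently would fit the same principle.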
  • Step 202b2 The electronic device determines the target feature weight among the first feature weight and at least one second feature weight.
  • The above-mentioned target feature weight is not less than the first feature weight or any of the at least one second feature weight; that is, the target feature weight is the maximum value among the first feature weight and the at least one second feature weight.
  • Step 202b3 The electronic device determines the shooting object corresponding to the target feature weight as the target object, uses the target object as the focus object, and updates the shooting preview image.
  • For example, when the first object is located in an area close to the middle of the picture, the electronic device can compare the first object with the at least one preset shooting subject (such as the toy bear 12) in terms of position, screen size, and distance from the camera to determine the new focus object; in this case, the object near the middle of the screen, that is, the first object, is determined as the focus object.
  • For another example, when the first object is located at the lower left corner of the picture and its proportion of the picture is small relative to the at least one preset shooting subject, the electronic device can determine, by comparing the position, size, and distance from the camera of the first object and the at least one preset shooting subject, that the focus object remains the original preset shooting subject, the toy bear 12, unchanged.
  • In this way, the electronic device can determine the feature weights of the objects based on the characteristic information of the at least one preset shooting subject and the first object, so as to decide whether to keep the original preset shooting subject as the focus object or to use the first object as the new focus object, ensuring the accuracy of the determined focus object and thereby ensuring that it is more consistent with the current shooting scene.
  • step 202 can be specifically implemented through the following step 301.
  • Step 301 The electronic device uses the target object as the focus object, performs image processing on the shot preview image, and obtains a processed shot preview image.
  • In the default focus mode, the electronic device can determine the target object based on the comparison result of the first feature weight and the second feature weight, and re-trigger the focus algorithm and blur algorithm: the position coordinate information of the target object is used for focusing processing, and then the foreground and background depth of field are blurred, so as to highlight the focus object and improve the picture effect.
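The blur stage can be sketched in miniature as below. This is a simplification under stated assumptions: the focus region is a rectangle, the blur is a 3x3 box blur, and the image is a small grey-level grid; a real pipeline would apply depth-dependent bokeh on full frames.

```python
# Hypothetical sketch of the blur stage: keep the focus region sharp and
# blur everything else. Images are small grey-level grids for clarity.
def blur_outside_focus(img, focus):
    """img: 2D list of grey values; focus: (top, left, bottom, right) kept sharp."""
    h, w = len(img), len(img[0])
    t, l, b, r = focus
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if t <= y < b and l <= x < r:
                continue  # inside the focus region: leave untouched
            # 3x3 box blur: mean of the valid neighbours around (x, y)
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

img = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
out = blur_outside_focus(img, (1, 1, 3, 3))
print(out[1][1], out[0][0])  # focus pixel unchanged (9); corner softened (2.25)
```

The focus rectangle stays at its original values while surrounding pixels are averaged with their neighbours, the same highlight-the-subject effect the step describes.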
  • For example, the electronic device displays the preview interface 13 in the default focus mode; the preview interface 13 includes the second object, but there is no preset shooting subject in the second object. After adding the first object, the electronic device can display the first object and the second object in the preview interface 13. As shown in (B) of Figure 8, the electronic device determines that there is no preset shooting subject among the shooting objects in the preview interface 13, that is, the second object in the preview interface 13 is not a preset shooting subject, so the electronic device can determine the first object as the target object and re-trigger the focus and blur algorithms to perform focus processing on the target object and then blur the image area in the preview interface 13 other than the target object.
  • It can be understood that (A) in Figure 8 is the image effect before refocusing and blurring, and (B) in Figure 8 is the image effect after refocusing and blurring; after refocusing and blurring, the salient objects in the shooting scene can be highlighted.
  • For another example, the electronic device displays the preview interface 14 in the default focus mode; the preview interface 14 includes the second object, and there is a preset shooting subject in the second object. After adding the first object, the electronic device can display the first object and the second object in the preview interface 14. As shown in (B) of Figure 9, the electronic device determines that there is a preset shooting subject, such as the toy bear 12, among the shooting objects in the preview interface 14. At this time, by comparing the positions of the preset shooting subject and the first object, their screen sizes, and their distances from the camera, the electronic device can determine the object closer to the camera as the target object; that is, the electronic device can determine the elephant 11 in the preview interface 14 as the target object, and re-trigger the focus and blur algorithms to perform focus processing on the target object and then blur the image area in the preview interface 14 other than the target object. It can be understood that (A) in Figure 9 is the image effect before refocusing and blurring, and (B) in Figure 9 is the image effect after refocusing and blurring; after refocusing and blurring, the saliency of the first object in the shooting scene can be clearly highlighted.
  • For another example, the electronic device displays the preview interface 15 in the default focus mode; the preview interface 15 includes the second object, and there is a preset shooting subject in the second object. After adding the first object, the electronic device can display the first object and the second object in the preview interface 15. As shown in (B) of Figure 10, the electronic device determines that there is a preset shooting subject, such as the toy bear 12, among the shooting objects in the preview interface 15. By comparing the positions of the preset shooting subject and the first object, their screen sizes, and their distances from the camera, the electronic device can determine the object that occupies a larger part of the screen and is located near the middle as the target object; that is, the electronic device can determine the toy bear 12 in the preview interface 15 as the target object, and re-trigger the focus and blur algorithms to perform focus processing on the target object and then blur the image area in the preview interface 15 other than the target object.
  • In the virtual object motion focus mode, the electronic device can determine the saliency of the first object based on the number of first objects and their characteristic information, use the most salient first object as the focus object, and perform refocusing and blurring by tracking the movement and position changes of the first object in real time, highlighting the saliency of the first object in the shooting scene and improving the focus or blur effect of the image.
  • Embodiments of the present application provide an image processing method. After the first object is added to the shooting scene, the electronic device can select the focus subject in the shooting scene based on the first object and the preset shooting subject, and re-trigger the focus and blur strategy to generate a better photo or video effect. That is, after the first object is added and the composition changes, the electronic device performs refocusing and blurring based on the current composition, ensuring that the resulting image meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the quality of images shot by electronic devices in AR shooting scenes.
  • step 301 can be specifically implemented through the following step 301a.
  • Step 301a The electronic device, according to the spatial position information of the target object, uses the target object as the focus object, performs first image processing on the shooting preview image, and obtains a processed shooting preview image.
  • In the virtual object motion focus mode, the electronic device can use 3D measurement technology to obtain the spatial position information of the target object, such as its spatial coordinates, and control the motor to implement the first image processing on the target object; in addition, the electronic device can record the characteristic information of the target object in the shooting scene at this time.
  • Optionally, the characteristic information of the target object in the shooting scene includes at least one of the following: the size of the target object, the color of the target object, and the brightness of the target object.
  • In this way, the electronic device can obtain and record the spatial position information of the target object to determine the target object as the focus object, ensuring the accuracy of the determined focus object, and then perform image processing on the shooting preview image to ensure that the determined focus object is more consistent with the current shooting preview image.
  • Optionally, the above step 301 can be specifically implemented through the following steps 301b and 301c.
  • Step 301b When the target object moves, the electronic device determines the plane position of the target object based on the characteristic information of the target object in the shooting scene and the characteristic information of the shooting object in the shooting preview image.
  • the above-mentioned photographing object may be the second object.
  • In the virtual object motion focus tracking mode, the electronic device can, according to the characteristic information of the target object in the shooting scene, that is, according to at least one of the size, color, and brightness of the target object, use the Scale-Invariant Feature Transform (SIFT) matching algorithm to match all detected objects and determine the plane position of the target object. Among them, the SIFT feature matching algorithm can keep an object's features matched after operations such as rotation, scaling, and brightness changes.
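In practice the SIFT keypoints and descriptors would come from a vision library (for example OpenCV's `SIFT_create`); the sketch below shows only the matching idea, nearest-neighbour descriptor matching with Lowe's ratio test, on made-up low-dimensional vectors, since real 128-dimensional SIFT descriptors are out of scope here.

```python
# Hypothetical sketch of the matching stage used to locate the target object:
# nearest-neighbour matching of feature descriptors with a ratio test.
# Descriptors here are made-up 3-D vectors; real SIFT descriptors are 128-D.
def match_descriptors(query, candidates, ratio=0.75):
    """Return index of best candidate if it passes Lowe's ratio test, else None."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(candidates)), key=lambda i: dist(query, candidates[i]))
    best, second = ranked[0], ranked[1]
    # Accept only when the best match is clearly better than the runner-up.
    if dist(query, candidates[best]) < ratio * dist(query, candidates[second]):
        return best
    return None

target_desc = [0.9, 0.1, 0.4]
detected = [[0.1, 0.8, 0.9],    # some other detected object
            [0.88, 0.12, 0.41]]  # close match to the target
print(match_descriptors(target_desc, detected))  # -> 1
```

The ratio test is what gives the matching its robustness: ambiguous matches (two candidates at similar distances) are rejected rather than guessed.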
  • Step 301c The electronic device performs second image processing on the captured preview image based on the target object's plane position and the first distance information, with the target object as the focus object, to obtain a processed captured preview image.
  • the above-mentioned first distance information is the distance between the spatial position corresponding to the target object and the camera.
  • the electronic device can obtain the first distance information through 3D measurement technology, and control the motor to perform second image processing on the captured preview image.
  • the electronic device obtains the position of the first object through real-time tracking and refocuses on the target object, highlighting the target object's saliency in the shooting scene and improving the image shooting effect of the electronic device in AR shooting scenarios.
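As a rough illustration of such distance-driven refocusing and blurring, the sketch below derives a per-pixel blur strength from a depth map and the focus object's depth. The function name, the tolerance band, and the normalization are illustrative assumptions; the embodiment does not specify a formula.

```python
import numpy as np

def depth_blur_weights(depth_map, focus_depth, tolerance=0.5):
    # Blur strength per pixel: zero inside a band around the focus
    # depth, growing with distance from it (normalized to [0, 1]).
    diff = np.abs(depth_map - focus_depth)
    return np.clip((diff - tolerance) / (diff.max() + 1e-6), 0.0, 1.0)

# Example: a toy 1-D "depth map" with the focus object at depth 2.0
depth = np.array([1.0, 2.0, 2.2, 5.0, 8.0])
w = depth_blur_weights(depth, focus_depth=2.0)
```

Pixels within the tolerance band around the focus depth receive no blur, while foreground and background pixels are blurred more strongly the further they are from the focal plane.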
  • the above target object is the first object.
  • the above step 202 can be specifically implemented through the following step 202c.
  • Step 202c: When the first object moves, the electronic device performs motion focus tracking on the first object and updates the shooting preview image.
  • in the virtual object motion focus-tracking mode, when the first object moves, the electronic device can perform motion focus tracking on the first object and update the shooting preview image.
  • the virtual object motion focus-tracking mode can be understood as follows: the electronic device takes the virtual object as the focus subject object, and performs focus-tracking and blurring operations according to changes in the virtual object's position as it moves.
  • the above-mentioned target object is a first object
  • the first object includes a first virtual object and a second virtual object.
  • the above step 202 can be specifically implemented through the following step 202d.
  • Step 202d When the characteristic weight of the first virtual object is greater than the characteristic weight of the second virtual object, the electronic device performs motion tracking on the first virtual object and updates the shooting preview image.
  • the above-mentioned feature weight is used to indicate the shooting saliency of the photographed object in the preview image.
  • if the first object includes a first virtual object and a second virtual object, the electronic device can obtain the characteristic information of the first virtual object and the second virtual object through target/semantic detection and 3D measurement (e.g., SLAM, TOF) technology, and, based on that characteristic information, select the most salient virtual object, that is, the virtual object with the largest feature weight, as the target object for motion focus tracking.
  • the above feature information includes at least one of category information, the size of the screen occupied in the shooting scene, position information in the shooting scene, and the distance from the camera.
  • the image processing method provided by the embodiment of the present application further includes the following steps 401 and 402.
  • Step 401: When the time the target object has been outside the shooting scene is less than or equal to a preset time and the target object moves back into the shooting scene, the electronic device uses the target object as the focus object and updates the shooting preview image.
  • Step 402: When the time the target object has been outside the shooting scene exceeds the preset time, the electronic device uses a third object as the focus object and updates the shooting preview image, where the third object is a shooting object in the shooting preview image.
  • for the method by which the electronic device re-determines the focus object and updates the shooting preview image, refer to step 202 and its related solutions in the above embodiment, which are not repeated here.
  • the electronic device can determine how long the target object has been outside the shooting scene to decide which object to process next, so that in motion mode the electronic device can update the shooting preview image according to the movement of the target object and adapt to compositional changes in the shooting scene in real time, thereby improving the image shooting effect of the electronic device in AR shooting scenarios.
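The timeout behavior of steps 401 and 402 can be sketched as a small state machine. The class name, the concrete preset time, and the string representation of objects are illustrative assumptions, not part of the disclosure.

```python
class FocusTracker:
    # Sketch of steps 401/402: keep the target as the focus object if
    # it re-enters the scene within PRESET_TIMEOUT seconds; otherwise
    # fall back to a subject found in the current preview (the "third
    # object").
    PRESET_TIMEOUT = 3.0  # assumed preset time, in seconds

    def __init__(self, target):
        self.focus = target
        self._left_at = None

    def update(self, visible_objects, fallback, now):
        if self.focus in visible_objects:
            self._left_at = None              # target is inside the scene
        elif self._left_at is None:
            self._left_at = now               # target just left the scene
        elif now - self._left_at > self.PRESET_TIMEOUT:
            self.focus = fallback             # step 402: refocus on third object
        return self.focus
```

Calling `update` once per preview frame keeps the target as the focus object across brief exits from the frame, and only hands focus to the fallback subject once the preset time is exceeded.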
  • the execution subject may be an image processing device, an electronic device, or a functional module or entity in the electronic device.
  • an image processing device executing the image processing method is taken as an example to describe the image processing device provided by the embodiments of this application.
  • FIG. 11 shows a possible structural schematic diagram of the image processing device involved in the embodiment of the present application.
  • the image processing device 70 may include: a display module 71 and an update module 72 .
  • the display module 71 is used to display a shooting preview image.
  • the shooting preview image includes a first object and a second object.
  • the first object is a virtual shooting object
  • the second object is an object in the shooting scene corresponding to the shooting preview image.
  • the update module 72 is used to update the shooting preview image with the target object as the focus object.
  • the target object includes one of the following: a first object and a second object.
  • the above-mentioned update module 72 is specifically configured to: when the second object does not include a preset shooting subject, update the shooting preview image with the first object as the focus object; or, when the second object includes at least one preset shooting subject, determine the target object from among the at least one preset shooting subject and the first object, and update the shooting preview image with the target object as the focus object.
  • the above-mentioned update module 72 is specifically configured to: obtain a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image; determine a target feature weight from among the first feature weight and the at least one second feature weight, where the target feature weight is not less than the first feature weight or any of the at least one second feature weight; and determine the shooting object corresponding to the target feature weight as the target object, where a feature weight is used to indicate the shooting saliency of a shooting object in the preview image.
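A minimal sketch of this selection rule follows; the function and object names and the weight callback are illustrative assumptions, since the embodiment only requires that the target feature weight be not less than any candidate's weight.

```python
def pick_focus_object(first_object, preset_subjects, weight_of):
    # The target object is the candidate whose feature weight is not
    # less than the first feature weight or any second feature weight.
    candidates = [first_object] + list(preset_subjects)
    return max(candidates, key=weight_of)

# Hypothetical weights for a virtual elephant added next to a toy bear
weights = {"virtual_elephant": 0.8, "toy_bear": 0.6, "plant": 0.2}
focus = pick_focus_object("virtual_elephant", ["toy_bear", "plant"], weights.get)
```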
  • the target object is the first object; the update module 72 is specifically configured to perform motion focus tracking on the first object and update the shooting preview image when the first object moves.
  • the above-mentioned target object is a first object
  • the first object includes a first virtual object and a second virtual object
  • the above-mentioned update module 72 is specifically configured to: when the feature weight of the first virtual object is greater than the feature weight of the second virtual object, perform motion focus tracking on the first virtual object and update the shooting preview image, where the feature weight is used to indicate the shooting saliency of a shooting object in the preview image.
  • the above-mentioned update module 72 is specifically configured to use the target object as the focus object, perform image processing on the captured preview image, and obtain a processed captured preview image.
  • the above-mentioned update module 72 is specifically configured to perform first image processing on the shot preview image according to the spatial position information of the target object, with the target object as the focus object, to obtain a processed shot preview image.
  • the above-mentioned update module 72 is specifically configured to: when the target object moves, determine the plane position of the target object based on the characteristic information of the target object in the shooting scene and the characteristic information of the shooting objects in the shooting preview image; and, based on the plane position of the target object and first distance information, perform second image processing on the shooting preview image with the target object as the focus object to obtain a processed shooting preview image, where the first distance information is the distance between the spatial position corresponding to the target object and the camera.
  • the above-mentioned update module 72 is further configured to: after the shooting preview image is updated with the target object as the focus object, when the time the target object has been outside the shooting scene is less than or equal to a preset time and the target object moves back into the shooting scene, use the target object as the focus object and update the shooting preview image; or, when the time the target object has been outside the shooting scene exceeds the preset time, use a third object as the focus object and update the shooting preview image, where the third object is a shooting object in the shooting preview image.
  • Embodiments of the present application provide an image processing device. After a first object is added to a shooting scene, the image processing device can select the focus subject in the shooting scene based on the first object and a preset shooting subject so as to re-trigger the focusing and blurring strategy and generate a better photo or video effect. That is, after adding the first object changes the composition, the image processing device performs refocusing and blurring operations according to the current composition, ensuring that the image after these operations meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the effect of images captured by the image processing device in AR shooting scenarios.
  • the image processing device in the embodiment of the present application may be a device, or may be a component, integrated circuit, or chip in an electronic device.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device can be a mobile phone, tablet computer, notebook computer, handheld computer, vehicle-mounted electronic device, mobile Internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), etc.
  • the non-mobile electronic device can be a network attached storage (NAS), personal computer (PC), television (TV), teller machine, or self-service machine, etc., which is not specifically limited in the embodiments of this application.
  • the image processing device in the embodiment of the present application may be a device with an operating system.
  • the operating system can be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
  • the image processing device provided by the embodiments of the present application can implement various processes implemented by the above method embodiments. To avoid duplication, they will not be described again here.
  • this embodiment of the present application also provides an electronic device 90, including a processor 91 and a memory 92.
  • the memory 92 stores programs or instructions that can be run on the processor 91.
  • when the program or instructions are executed by the processor 91, each step of the above image processing method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 13 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 100 includes, but is not limited to, a radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and other components.
  • the electronic device 100 may also include a power supply (such as a battery) that supplies power to various components.
  • the power supply may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.
  • the structure of the electronic device shown in Figure 13 does not constitute a limitation of the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange components differently, which will not be described again here.
  • the display unit 106 is used to display a shooting preview image.
  • the shooting preview image includes a first object and a second object.
  • the first object is a virtual shooting object
  • the second object is an object in the shooting scene corresponding to the shooting preview image.
  • the processor 110 is used to update the shooting preview image with the target object as the focus object.
  • Embodiments of the present application provide an electronic device. After a first object is added to a shooting scene, the electronic device can select the focus subject in the shooting scene based on the first object and a preset shooting subject so as to re-trigger the focusing and blurring strategy and generate a better photo or video effect. That is, after adding the first object changes the composition, the electronic device performs refocusing and blurring operations according to the current composition, ensuring that the image after these operations meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the effect of images captured by the electronic device in AR shooting scenarios.
  • the processor 110 is specifically configured to: when the second object does not include a preset shooting subject, use the first object as the focus object and update the shooting preview image; or, when the second object includes at least one preset shooting subject, determine the target object from among the at least one preset shooting subject and the first object, and update the shooting preview image with the target object as the focus object.
  • the processor 110 is specifically configured to: obtain a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image; determine a target feature weight from among the first feature weight and the at least one second feature weight, where the target feature weight is not less than the first feature weight or any of the at least one second feature weight; and determine the shooting object corresponding to the target feature weight as the target object, where a feature weight is used to indicate the shooting saliency of a shooting object in the preview image.
  • the above target object is the first object.
  • the processor 110 is specifically configured to perform motion focus tracking on the first object when the first object moves, and update the shooting preview image.
  • the above-mentioned target object is a first object
  • the first object includes a first virtual object and a second virtual object.
  • the processor 110 is specifically configured to perform motion focus tracking on the first virtual object when the feature weight of the first virtual object is greater than the feature weight of the second virtual object, and update the shooting preview image, where the feature weight is used to indicate the shooting saliency of a shooting object in the preview image.
  • the processor 110 is specifically configured to use the target object as the focus object, perform image processing on the captured preview image, and obtain a processed captured preview image.
  • the processor 110 is specifically configured to use the target object as the focus object according to the spatial position information of the target object, perform first image processing on the shot preview image, and obtain a processed shot preview image.
  • the processor 110 is specifically configured to: when the target object moves, determine the plane position of the target object according to the characteristic information of the target object in the shooting scene and the characteristic information of the shooting objects in the shooting preview image; and, according to the plane position of the target object and first distance information, perform second image processing on the shooting preview image with the target object as the focus object to obtain a processed shooting preview image, where the first distance information is the distance between the spatial position corresponding to the target object and the camera.
  • the processor 110 is further configured to: after the shooting preview image is updated with the target object as the focus object, when the time the target object has been outside the shooting scene is less than or equal to a preset time and the target object moves back into the shooting scene, update the shooting preview image with the target object as the focus object; or, when the time the target object has been outside the shooting scene exceeds the preset time, update the shooting preview image with a third object as the focus object, where the third object is a shooting object in the shooting preview image.
  • the electronic device provided by the embodiments of the present application can implement each process implemented by the above method embodiments, and can achieve the same technical effect. To avoid duplication, the details will not be described here.
  • the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042.
  • the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 107 includes a touch panel 1071 and at least one of other input devices 1072 .
  • Touch panel 1071 is also called a touch screen.
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be described in detail here.
  • Memory 109 may be used to store software programs as well as various data.
  • the memory 109 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system and application programs or instructions required for at least one function (such as a sound playback function or an image playback function).
  • memory 109 may include volatile memory or nonvolatile memory, or memory 109 may include both volatile and nonvolatile memory.
  • non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • Volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), or direct Rambus random access memory (Direct Rambus RAM, DRRAM).
  • the processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor and a modem processor, where the application processor mainly handles operations related to the operating system, user interface, application programs, etc., Modem processors mainly process wireless communication signals, such as baseband processors. It can be understood that the above modem processor may not be integrated into the processor 110 .
  • Embodiments of the present application also provide a readable storage medium.
  • Programs or instructions are stored on the readable storage medium.
  • when the program or instructions are executed by a processor, each process of the above method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiment.
  • the readable storage medium includes computer-readable storage media, such as a computer read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disk.
  • An embodiment of the present application further provides a chip.
  • the chip includes a processor and a communication interface.
  • the communication interface is coupled to the processor.
  • the processor is used to run programs or instructions to implement each process of the above method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of this application may also be called a system-on-chip, chip system, or system-on-a-chip, etc.
  • Embodiments of the present application provide a computer program product.
  • the program product is stored in a storage medium.
  • the program product is executed by at least one processor to implement each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; they can of course also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, computer, server, or network device, etc.) to execute the methods described in the embodiments of this application.


Abstract

This application discloses an image processing method and apparatus, an electronic device, and a storage medium, belonging to the technical field of image processing. The method includes: displaying a shooting preview image, where the shooting preview image includes a first object and a second object, the first object is a virtual shooting object, and the second object is an object in the shooting scene corresponding to the shooting preview image; and updating the shooting preview image with a target object as the focus object, where the target object includes one of the following: the first object, the second object.

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202210908193.5 filed in China on July 29, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the technical field of image processing, and specifically relates to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Augmented Reality (AR) technology combines real-world information with virtual-world information, that is, it places virtual objects into real scenes, which can make users' work and life more interesting and improve the user experience. With the continuous development of AR technology, its use in shooting applications on electronic devices has become increasingly common.
Typically, when using the AR shooting function of a camera application, an electronic device first captures an image of real objects; after a virtual object such as a person or an animal is added to the captured image, the electronic device composites the captured image of the real objects with the virtual object to obtain a picture containing both the image of the real objects and the virtual object.
However, in the above method, after the virtual object is added during image capture, the scene composition of the whole image changes, yet the existing method directly composites the captured image of the real objects with the virtual object, so the images captured by the electronic device in AR shooting scenarios have a poor effect.
Summary
The purpose of the embodiments of this application is to provide an image processing method and apparatus, an electronic device, and a storage medium, which can solve the problem that images captured by an electronic device in AR shooting scenarios have a poor effect.
In a first aspect, embodiments of this application provide an image processing method, including: displaying a shooting preview image, where the shooting preview image includes a first object and a second object, the first object is a virtual shooting object, and the second object is an object in the shooting scene corresponding to the shooting preview image; and updating the shooting preview image with a target object as the focus object, where the target object includes one of the following: the first object, the second object.
In a second aspect, embodiments of this application provide an image processing apparatus, including a display module and an update module. The display module is configured to display a shooting preview image, where the shooting preview image includes a first object and a second object, the first object is a virtual shooting object, and the second object is an object in the shooting scene corresponding to the shooting preview image. The update module is configured to update the shooting preview image with a target object as the focus object, where the target object includes one of the following: the first object, the second object.
In a third aspect, embodiments of this application provide an electronic device, including a processor and a memory, where the memory stores a program or instructions runnable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method described in the first aspect.
In a fourth aspect, embodiments of this application provide a readable storage medium storing a program or instructions, which, when executed by a processor, implement the steps of the method described in the first aspect.
In a fifth aspect, embodiments of this application provide a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instructions to implement the method described in the first aspect.
In a sixth aspect, embodiments of this application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the method described in the first aspect.
In the embodiments of this application, the electronic device determines a target object from among the first object and the second object, performs focusing processing on the target object, and performs blurring processing on the other image regions. In this solution, after the first object is added to the shooting scene, the electronic device can select the focus subject in the shooting scene based on the first object and a preset shooting subject so as to re-trigger the focusing and blurring strategy and generate a better photo or video effect. That is, after adding the first object changes the composition, the electronic device performs refocusing and blurring operations according to the current composition, ensuring that the image after these operations meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the effect of images captured by the electronic device in AR shooting scenarios.
Brief Description of the Drawings
FIG. 1 is a first flowchart of an image processing method provided by an embodiment of this application;
FIG. 2 is a second flowchart of an image processing method provided by an embodiment of this application;
FIG. 3 is a schematic diagram of an example preview interface and virtual object in the default focus mode provided by an embodiment of this application;
FIG. 4 is a third flowchart of an image processing method provided by an embodiment of this application;
FIG. 5 is a schematic diagram of an example interface showing the actual focus subject captured in an indoor shooting scene provided by an embodiment of this application;
FIG. 6 is a schematic diagram of an example interface in which a virtual object is determined as the focus subject provided by an embodiment of this application;
FIG. 7 is a schematic diagram of an example interface in which the original actual focus subject is kept as the focus subject provided by an embodiment of this application;
FIG. 8 is a first schematic diagram of an example interface showing the effect before and after refocusing and blurring provided by an embodiment of this application;
FIG. 9 is a second schematic diagram of an example interface showing the effect before and after refocusing and blurring provided by an embodiment of this application;
FIG. 10 is a third schematic diagram of an example interface showing the effect before and after refocusing and blurring provided by an embodiment of this application;
FIG. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of this application;
FIG. 12 is a first schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
FIG. 13 is a second schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings of the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects rather than to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", and the like are usually of one category, and the number of objects is not limited; for example, there may be one first object or multiple first objects. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiments of this application is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
Placing virtual objects in real scenes is applied to make work and life more interesting and to improve the user experience. When the camera's AR function is working, after virtual objects such as people or animals are added, the original strategy is maintained: no refocusing or blurring is performed for the scene composition change caused by the virtual objects, and the resulting scene effect is not ideal.
After the first object is added to the shooting scene, the electronic device can select the focus subject in the shooting scene based on the first object and a preset shooting subject so as to re-trigger the focusing and blurring strategy and generate a better photo or video effect. That is, after adding the first object changes the composition, the electronic device performs refocusing and blurring operations according to the current composition, ensuring that the image after these operations meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the effect of images captured by the electronic device in AR shooting scenarios.
An embodiment of this application provides an image processing method. FIG. 1 shows a flowchart of an image processing method provided by an embodiment of this application, and the method can be applied to an electronic device. As shown in FIG. 1, the image processing method provided by the embodiments of this application may include the following steps 201 and 202.
Step 201: The electronic device displays a shooting preview image.
In the embodiments of this application, the shooting preview image includes a first object and a second object. The first object is a virtual shooting object, and the second object is an object in the shooting scene corresponding to the shooting preview image.
In the embodiments of this application, after the camera function is enabled, the electronic device can display a shooting preview interface, start capturing the shooting preview image through the camera, and display the shooting preview image in the shooting preview interface; at this time the shooting preview image includes the second object. After the user adds the first object, the electronic device can display the first object in the shooting preview interface, and the shooting preview image then includes the first object and the second object.
Optionally, in the embodiments of this application, the first object and the second object may include, but are not limited to, at least one of the following: person objects, animal objects, and objects of other categories (such as buildings, flowers, and trees), which can be determined according to actual use requirements and are not limited by the embodiments of this application.
It should be noted that the second object can be understood as an object in the actual shooting scene captured by the camera of the electronic device, and the first object can be understood as one or more virtual objects projected into the actual/real scene through AR technology.
Step 202: The electronic device updates the shooting preview image with a target object as the focus object.
In the embodiments of this application, the target object includes one of the following: the first object, the second object.
In the embodiments of this application, the electronic device can compare the saliency of all objects in the shooting preview image so as to select the most salient object, that is, select the most salient of the first object and the second object as the focus object, take that object as the subject object to be focused on, and then focus on that subject object.
Optionally, in the embodiments of this application, the saliency may be either of the following: how prominent each object is in the shooting preview image, or the weight of each object's characteristic information (such as category and position).
In the embodiments of this application, after the first object is added to the shooting scene, the electronic device can select the focus subject in the shooting scene based on the first object and a preset shooting subject so as to re-trigger the focusing and blurring strategy and generate a better photo or video effect. That is, after adding the first object changes the composition, the electronic device performs refocusing and blurring operations according to the current composition, ensuring that the image after these operations meets the shooting needs of the current scene and improving the focus and blur effect of the overall image, thereby improving the effect of images captured by the electronic device in AR shooting scenarios.
Optionally, in the embodiments of this application, with reference to FIG. 1 and as shown in FIG. 2, before "the electronic device takes the target object as the focus object" in step 202, the image processing method provided by the embodiments of this application further includes the following step 203, and step 202 can be specifically implemented through the following step 202a or step 202b.
Step 203: The electronic device determines whether the second object includes a preset shooting subject.
Step 202a: When the second object does not include a preset shooting subject, the electronic device uses the first object as the focus object and updates the shooting preview image.
It can be understood that when the second object does not include a preset shooting subject, the electronic device can determine the first object as the target object, that is, the subject object to be focused on. When the prominence of all shooting objects in the second object is less than a threshold, or the differences among all shooting objects in the second object are less than a threshold, the electronic device determines that no focus object exists in the second object.
Optionally, in the embodiments of this application, the preset shooting subject is not limited to at least one of the following: person objects, animal objects, and objects of other categories (such as buildings, flowers, and trees), which can be determined according to actual use requirements and are not limited by the embodiments of this application.
Optionally, in the embodiments of this application, in the default focus mode, when the second object does not include a preset shooting subject, the electronic device can use the first object as the focus object and update the shooting preview image.
It should be noted that the default focus mode can be understood as the system-default shooting mode after the electronic device enables the camera function, in which the electronic device can perform automatic focusing and blurring according to the shooting objects.
Illustratively, in the default focus mode, as shown in (A) of FIG. 3, the electronic device displays a preview interface 10, which includes an image captured by the camera of the electronic device, and the image includes the second object. The electronic device detects that no actual focus object exists in the second object, so the electronic device can determine the newly added first object shown in (B) of FIG. 3 as the target object, and then perform focusing processing on the target object.
Step 202b: When the second object includes at least one preset shooting subject, the electronic device determines the target object from among the at least one preset shooting subject and the first object, and updates the shooting preview image with the target object as the focus object.
In the embodiments of this application, when the second object includes at least one preset shooting subject, the electronic device can determine the object with the largest feature weight among the at least one preset shooting subject and the first object as the focus object, that is, the subject object to be focused on.
Optionally, in the embodiments of this application, the electronic device can determine, among the at least one preset shooting subject and the first object, an object whose prominence is greater than a threshold, or whose prominence differs from that of other objects by more than a threshold, as the target object.
Illustratively, as shown in FIG. 5, the electronic device is in an indoor shooting scene in which the preset shooting subject is a toy bear 12. After the first object is added, the electronic device can determine a new focus object by comparing the category, position in the shooting scene, size, distance from the camera, etc. of the first object and the toy bear 12.
In the embodiments of this application, in the default focus mode, the electronic device can determine the focus object by judging whether a preset shooting subject exists in the second object: when no preset shooting subject exists, the first object is determined as the focus object for focusing; when at least one preset shooting subject exists, the focus object is decided by the feature weights of the preset shooting subjects and the first object. This improves the accuracy of determining the focus object and ensures that the determined focus object better matches the current shooting scene.
Optionally, in the embodiments of this application, with reference to FIG. 2 and as shown in FIG. 4, step 202b can be specifically implemented through the following steps 202b1 to 202b3.
Step 202b1: The electronic device obtains a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image.
The feature weights described in the embodiments of this application (such as the first feature weight, the second feature weight, and the target feature weight) are used to indicate the shooting saliency of a shooting object in the preview image.
Optionally, in the embodiments of this application, in the default focus mode, the electronic device can calculate the preset shooting subject of the current shooting scene according to the current shooting scene information and complete initial focusing; moreover, the electronic device can record the second characteristic information of the preset shooting subject in the shooting scene at this time.
Optionally, in the embodiments of this application, the second characteristic information includes at least one of the following: the position information of the preset shooting subject in the shooting scene, the proportion of the frame occupied by the preset shooting subject in the shooting scene, the distance between the preset shooting subject and the camera, and the category information of the preset shooting subject.
Optionally, in the embodiments of this application, in the default focus mode, the electronic device can obtain the second characteristic information of the preset shooting subject in the current shooting scene by using virtual object generation algorithms and 3D measurement algorithms, such as Simultaneous Localization and Mapping (SLAM) and Time of Flight (TOF), or by using methods such as Phase Detection Auto Focus (PDAF) or Contrast Detection Auto Focus (CDAF).
Optionally, in the embodiments of this application, the characteristic information of the first object (hereinafter referred to as the first characteristic information) may include at least one of the following: the proportion of the frame occupied by the first object in the shooting scene, the category information of the first object, the position information of the first object in the shooting scene, and the distance between the first object and the camera. The frame proportion and category information of the first object are obtained by the electronic device using a virtual object generation algorithm; the position information of the first object in the shooting scene and the distance between the first object and the camera are obtained by the electronic device using a 3D measurement algorithm.
In the embodiments of this application, the electronic device can assign at least one weight value to the first object according to the first characteristic information, that is, according to at least one of the first object's category information, position information in the shooting scene, frame proportion in the shooting scene, and distance from the camera, with each weight value corresponding to one item of the first characteristic information, so as to obtain the final weight of the first object, namely the first feature weight. Similarly, the method of determining the second feature weight of each of the at least one preset shooting subject is similar to that of the first feature weight and is not repeated here.
Optionally, in the embodiments of this application, the electronic device can perform a variance calculation on the at least one weight value to obtain the first feature weight; alternatively, according to the importance of each item of the first characteristic information, the electronic device can use the weight value of the most important item as the first feature weight.
Optionally, in the embodiments of this application, after the first object is added, the electronic device can calculate the feature weights of the at least one preset shooting subject and the first object according to the following principles, that is, obtain the final weights of the at least one preset shooting subject and the first object respectively, and take the object with the largest weight as the target object:
1) Category weight: person figures > animal figures > other categories (buildings, flowers, trees, etc., with equal weight).
2) Position weight in the shooting scene: judged by the distance from the scene center; the closer to the center, the larger the weight; the farther from the center, the smaller the weight.
3) Frame-proportion weight in the shooting scene: the larger the proportion of the frame occupied, the larger the weight; the smaller the proportion, the smaller the weight.
4) Distance-from-camera weight: the closer to the camera, the larger the weight; the farther from the camera, the smaller the weight.
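One possible way to combine the four principles above into a single feature weight is sketched below. The concrete category values, the normalizations, and the plain averaging are illustrative assumptions, since the embodiment does not fix a formula.

```python
# Illustrative category weights for principle 1 (persons > animals > other)
CATEGORY_WEIGHT = {"person": 1.0, "animal": 0.8, "other": 0.5}

def feature_weight(category, center_dist, area_ratio, camera_dist,
                   frame_diag=1.0, max_camera_dist=10.0):
    w_cat = CATEGORY_WEIGHT.get(category, CATEGORY_WEIGHT["other"])
    w_pos = 1.0 - min(center_dist / frame_diag, 1.0)        # principle 2
    w_area = min(area_ratio, 1.0)                           # principle 3
    w_dist = 1.0 - min(camera_dist / max_camera_dist, 1.0)  # principle 4
    return (w_cat + w_pos + w_area + w_dist) / 4.0

# An animal near the centre, large in frame, and close to the camera
w_near = feature_weight("animal", center_dist=0.1, area_ratio=0.4, camera_dist=2.0)
# The same category far from the centre, small, and distant
w_far = feature_weight("animal", center_dist=0.8, area_ratio=0.1, camera_dist=9.0)
```

With any such monotone combination, the object chosen as the target is simply the candidate with the largest resulting weight.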
Step 202b2: The electronic device determines a target feature weight from among the first feature weight and the at least one second feature weight.
In the embodiments of this application, the target feature weight is not less than the first feature weight or any of the at least one second feature weight.
Optionally, in the embodiments of this application, the target feature weight is the maximum weight value among the first feature weight and the at least one second feature weight.
Step 202b3: The electronic device determines the shooting object corresponding to the target feature weight as the target object, and updates the shooting preview image with the target object as the focus object.
Illustratively, with reference to FIG. 5 and as shown in FIG. 6, after the first object, an elephant 11, is added, the first object is located in a region near the middle of the frame. By comparing the position, size, distance from the camera, etc. of the first object and the at least one preset shooting subject (such as the toy bear 12), the electronic device can determine that the new focus object is the object near the middle of the frame, namely the first object.
In another example, with reference to FIG. 5 and as shown in FIG. 7, after the first object, the elephant 11, is added, the first object is located in the lower-left corner of the frame and occupies a relatively small proportion of the frame compared with the at least one preset shooting subject. By comparing the position, size, distance from the camera, etc. of the first object and the at least one preset shooting subject, the electronic device can determine that the focus object remains the original at least one preset shooting subject, the toy bear 12.
In the embodiments of this application, the electronic device can determine the feature weights of the at least one preset shooting subject and the first object according to their characteristic information, so as to decide whether to keep the original at least one preset shooting subject unchanged or to take the first object as the new focus object, ensuring the accuracy of the determined focus object and thus that the determined focus object better matches the current shooting scene.
可选地,本申请实施例中,上述步骤202具体可以通过下述的步骤301实现。
步骤301、电子设备以目标对象为对焦对象,对拍摄预览图像执行图像处理,得到处理后的拍摄预览图像。
可选地,本申请实施例中,在默认对焦模式下,电子设备可以根据第一特征权重和第二特征权重的对比结果确定目标对象,并重新触发对焦算法和虚化算法:电子设备可以根据目标对象的位置坐标信息,进行对焦处理,再对前后景深进行虚化处理,从而达到突出对焦对象的效果,提高了画面效果。
示例性地,如图8中的(A)所示,电子设备显示默认对焦模式下的预览界面13,该预览界面13中包括第二对象,但该第二对象中不存在预设拍摄主体,在加入第一对象大象11之后,电子设备可以在预览界面13中显示第一对象和第二对象;如图8中的(B)所示,电子设备确定预览界面13中的拍摄对象中不存在预设拍摄主体,即预览界面13中的第二对象不是预设拍摄对象,此时电子设备可以将第一对象确定为目标对象,并重新触发对焦和虚化算法,以对目标对象执行对焦处理,然后对预览界面13中除目标对象之外的图像区域执行虚化处理。可以理解,图8中的(A)为重对焦与虚化之前的图像效果,图8中的(B)为重对焦与虚化之后的图像效果,重新对焦与虚化后,能够突出拍摄场景中的显著对象。
又示例性地,如图9中的(A)所示,电子设备显示默认对焦模式下的预览界面14,该预览界面14中包括第二对象,并且该第二对象中存在预设拍摄主体,在加入第一对象之后,电子设备可以在预览界面14中显示第一对象和第二对象;如图9中的(B)所示,电子设备确定预览界面14中的拍摄对象中存在预设拍摄主体,例如玩偶熊12,此时电子设备可以通过对比预设拍摄主体与第一对象的位置、所占画面的大小、与摄像头之间的距离,将摄像头之间的距离较近的对象确定为目标对象,即电子设备可以将预览界面14中的大象11确定为目标对象,并重新触发对焦和虚化算法,以对目标对象执行对焦处理,然后对预览界面14中除目标对象之外的图像区域执行虚化处理。可以理解,图9中的(A)为重对焦与虚化之前的图像效果,图9中的(B)为重对焦 与虚化之后的图像效果,重新对焦与虚化后,能够明显突出拍摄场景中第一对象的显著性。
又示例性地,如图10中的(A)所示,电子设备显示默认对焦模式下的预览界面15,该预览界面15中包括第二对象,并且该第二对象中存在预设拍摄主体,在加入第一对象之后,电子设备可以在预览界面15中显示第一对象和第二对象;如图10中的(B)所示,电子设备确定预览界面15中的拍摄对象中存在预设拍摄主体,例如玩偶熊12,此时电子设备可以通过对比预设拍摄主体与第一对象的位置、所占画面的大小、与摄像头之间的距离,将所占画面较大、且所在位置与中间位置较近的对象确定为目标对象,即电子设备可以将预览界面15中的玩偶熊12确定为目标对象,并重新触发对焦和虚化算法,以对目标对象执行对焦处理,然后对预览界面15中除目标对象之外的图像区域执行虚化处理。可以理解,图10中的(A)为重对焦与虚化之前的图像效果,图10中的(B)为重对焦与虚化之后的图像效果,重新对焦与虚化后,能够保持对焦对象的显著性,从而明显突出拍摄场景的对焦对象。
Optionally, in the embodiments of this application, in the virtual-object motion-tracking focus mode, the electronic device can judge the degree of saliency of the first objects from their number and feature information, take the most salient first object as the focus object, and refocus and blur by tracking the changes in that object's position in real time. This highlights the first object in the shooting scene and improves the focusing and blurring of the image.
The embodiments of this application provide an image processing method. After a first object is added to the shooting scene, the electronic device can select the focus subject of the scene from the first object and the preset shooting subjects, re-trigger the focusing and blurring strategies, and produce a better photo or video. That is, after adding the first object changes the composition, the electronic device refocuses and re-blurs according to the current composition, so that the resulting image meets the shooting needs of the current scene. This improves the focusing and blurring of the image as a whole and thus the quality of images shot by the electronic device in AR shooting scenes.
Optionally, in the embodiments of this application, step 301 above can be implemented by step 301a below.
Step 301a: the electronic device, according to the spatial position information of the target object and taking the target object as the focus object, performs first image processing on the shooting preview image to obtain a processed shooting preview image.
In the embodiments of this application, in the virtual-object motion-tracking focus mode, the electronic device can use 3D measurement technology to obtain the spatial position information of the target object, such as its spatial coordinates, and drive the motor to apply the first image processing to the target object. The electronic device can also record the feature information of the target object in the shooting scene at this moment.
Optionally, in the embodiments of this application, the feature information of the target object in the shooting scene includes at least one of the following: the size, colour and brightness of the target object.
In the embodiments of this application, the electronic device can obtain and record the spatial position information of the target object to determine it as the focus object, ensuring that the focus object is determined accurately, and then process the shooting preview image so that the focus object better matches the current shooting preview image.
Optionally, in the embodiments of this application, step 301 above can be implemented by steps 301b and 301c below.
Step 301b: when the target object moves, the electronic device determines the planar position of the target object from the feature information of the target object in the shooting scene and the feature information of the shooting subjects in the shooting preview image.
In the embodiments of this application, the shooting subject here may be the second object.
In the embodiments of this application, in the virtual-object motion-tracking focus mode, the electronic device can use the feature information of the target object in the shooting scene, i.e. at least one of its size, colour and brightness, and match it against all detected objects with the Scale-Invariant Feature Transform (SIFT) matching algorithm to determine the planar position of the target object. SIFT features remain invariant when an object is rotated, scaled, or changes in brightness.
Step 301c: the electronic device, according to the planar position of the target object and first distance information, takes the target object as the focus object and performs second image processing on the shooting preview image to obtain a processed shooting preview image.
In the embodiments of this application, the first distance information is the distance between the spatial position of the target object and the camera.
Optionally, in the embodiments of this application, the electronic device can obtain the first distance information through 3D measurement technology and drive the motor to apply the second image processing to the shooting preview image. By tracking the position of the first object in real time and refocusing on the target object, the electronic device highlights the salient object in the shooting scene and improves its image quality in AR shooting scenes.
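The matching idea in step 301b — re-locate the tracked object by comparing its recorded features against those of every detected object — can be sketched with a nearest-neighbour rule over toy feature vectors. Real SIFT descriptors and matching (e.g. OpenCV's `cv2.SIFT_create`) are far richer; the three-component vectors and the Euclidean metric below are stand-in assumptions.

```python
def match_target(recorded_descriptor, detections):
    """Return the planar position of the detection whose descriptor is
    nearest to the recorded one (a stand-in for SIFT matching).

    Each detection is a (plane_position, descriptor) pair, where the
    descriptor is a toy feature vector of (size, colour index, brightness).
    """
    def distance(descriptor):
        return sum((a - b) ** 2
                   for a, b in zip(recorded_descriptor, descriptor)) ** 0.5

    best = min(detections, key=lambda det: distance(det[1]))
    return best[0]   # planar position of the matched object

# Recorded features of the target before it moved: (size, colour, brightness).
recorded = (0.3, 0.7, 0.5)
detections = [((120, 80), (0.1, 0.2, 0.9)),     # some other object in the frame
              ((200, 150), (0.28, 0.72, 0.48))]  # the moved target, features barely changed
print(match_target(recorded, detections))  # (200, 150)
```

With the planar position found this way and the camera distance from 3D measurement (step 301c), the device would then refocus on the tracked object.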
Optionally, in one implementation of the embodiments of this application, the target object is the first object, and step 202 above can be implemented by step 202c below.
Step 202c: when the first object moves, the electronic device performs motion tracking focus on the first object and updates the shooting preview image.
Optionally, in the embodiments of this application, in the virtual-object motion-tracking focus mode, when the first object moves, the electronic device can perform motion tracking focus on the first object and update the shooting preview image.
It should be noted that the virtual-object motion-tracking focus mode can be understood as follows: the electronic device takes the virtual object as the focus subject and performs tracking focus and blurring according to the changes in the virtual object's position as it moves.
Optionally, in another implementation of the embodiments of this application, the target object is the first object, and the first object includes a first virtual object and a second virtual object. Step 202 above can be implemented by step 202d below.
Step 202d: when the feature weight of the first virtual object is larger than that of the second virtual object, the electronic device performs motion tracking focus on the first virtual object and updates the shooting preview image.
In the embodiments of this application, the feature weight indicates the shooting saliency of a subject in the preview image.
Optionally, in the embodiments of this application, if the first object includes a first virtual object and a second virtual object, the electronic device can obtain their feature information through object/semantic detection and 3D measurement (for example SLAM or ToF) technology, and, based on that feature information, select the most salient virtual object, i.e. the one with the largest feature weight, as the target object for motion tracking focus.
Optionally, in the embodiments of this application, the feature information includes at least one of: category information, share of the frame in the shooting scene, position in the shooting scene, and distance to the camera.
Optionally, in the embodiments of this application, after step 202 above, the image processing method provided by the embodiments of this application further includes steps 401 and 402 below.
Step 401: when the time for which the target object has moved out of the shooting scene is less than or equal to a preset time, the electronic device, once the target object moves back into the shooting scene, takes the target object as the focus object and updates the shooting preview image.
Step 402: when the time for which the target object has moved out of the shooting scene is greater than the preset time, the electronic device takes a third object as the focus object and updates the shooting preview image, the third object being a shooting subject in the shooting preview image.
It should be noted that, for how the electronic device re-determines the focus object and updates the shooting preview image, reference may be made to step 202 and its related schemes in the foregoing embodiments, which are not repeated here.
In the embodiments of this application, the electronic device can judge how long the target object has been out of the shooting scene to determine which object to process next, so that in motion mode the electronic device can update the shooting preview image according to the movement of the target object and adapt in real time to changes in the composition of the shooting scene, improving its image quality in AR shooting scenes.
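Steps 401 and 402 amount to a simple timeout rule. A sketch, with the names and the seconds-based comparison as assumptions:

```python
def choose_focus_after_exit(time_outside_s, preset_time_s, target, third_object):
    """Steps 401/402: keep the target as the focus object if it left the scene
    for no longer than the preset time; otherwise switch to the third object,
    a shooting subject still present in the preview image."""
    if time_outside_s <= preset_time_s:
        return target        # step 401: re-focus on the returning target
    return third_object      # step 402: fall back to the third object

print(choose_focus_after_exit(1.5, 2.0, "elephant", "toy bear"))  # elephant
print(choose_focus_after_exit(3.0, 2.0, "elephant", "toy bear"))  # toy bear
```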
It should be noted that the image processing method provided by the embodiments of this application may be executed by an image processing apparatus, by an electronic device, or by a functional module or entity within an electronic device. The embodiments of this application describe the image processing apparatus by taking an image processing apparatus executing the image processing method as an example.
Figure 11 shows a possible structural diagram of the image processing apparatus involved in the embodiments of this application. As shown in Figure 11, the image processing apparatus 70 may include a display module 71 and an update module 72.
The display module 71 is configured to display a shooting preview image, the shooting preview image including a first object and a second object, the first object being a virtual shooting subject and the second object being an object in the shooting scene corresponding to the shooting preview image. The update module 72 is configured to take a target object as the focus object and update the shooting preview image, the target object being one of the first object and the second object.
In a possible implementation, the update module 72 is specifically configured to: when the second object does not include a preset shooting subject, take the first object as the focus object and update the shooting preview image; or, when the second object includes at least one preset shooting subject, determine the target object among the at least one preset shooting subject and the first object, take the target object as the focus object, and update the shooting preview image.
In a possible implementation, the update module 72 is specifically configured to obtain a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image; determine a target feature weight among the first feature weight and the at least one second feature weight, the target feature weight being not less than the first feature weight or any of the at least one second feature weight; and determine the shooting subject corresponding to the target feature weight as the target object; where a feature weight indicates the shooting saliency of a subject in the preview image.
In a possible implementation, the target object is the first object, and the update module 72 is specifically configured to perform motion tracking focus on the first object when it moves and update the shooting preview image.
In a possible implementation, the target object is the first object, and the first object includes a first virtual object and a second virtual object; the update module 72 is specifically configured to perform motion tracking focus on the first virtual object and update the shooting preview image when the feature weight of the first virtual object is larger than that of the second virtual object; where a feature weight indicates the shooting saliency of a subject in the preview image.
In a possible implementation, the update module 72 is specifically configured to take the target object as the focus object, perform image processing on the shooting preview image, and obtain a processed shooting preview image.
In a possible implementation, the update module 72 is specifically configured to perform first image processing on the shooting preview image according to the spatial position information of the target object, taking the target object as the focus object, to obtain a processed shooting preview image.
In a possible implementation, the update module 72 is specifically configured to: when the target object moves, determine the planar position of the target object from the feature information of the target object in the shooting scene and the feature information of the shooting subjects in the shooting preview image; and, according to the planar position of the target object and first distance information, take the target object as the focus object and perform second image processing on the shooting preview image to obtain a processed shooting preview image, the first distance information being the distance between the spatial position of the target object and the camera.
In a possible implementation, the update module 72 is further configured to: after the shooting preview image is updated with the target object as the focus object, when the time for which the target object has moved out of the shooting scene is less than or equal to a preset time, take the target object as the focus object and update the shooting preview image once it moves back into the shooting scene; or, when that time is greater than the preset time, take a third object as the focus object and update the shooting preview image, the third object being a shooting subject in the shooting preview image.
The embodiments of this application provide an image processing apparatus. After a first object is added to the shooting scene, the apparatus can select the focus subject of the scene from the first object and the preset shooting subjects, re-trigger the focusing and blurring strategies, and produce a better photo or video. That is, after adding the first object changes the composition, the apparatus refocuses and re-blurs according to the current composition so that the resulting image meets the shooting needs of the current scene, improving the focusing and blurring of the image as a whole and thus the quality of images shot by the apparatus in AR shooting scenes.
The image processing apparatus in the embodiments of this application may be a stand-alone device, or a component, integrated circuit or chip within an electronic device. The device may be a mobile or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, tablet computer, laptop, palmtop computer, in-vehicle electronic device, Mobile Internet Device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (PDA); the non-mobile electronic device may be a server, Network Attached Storage (NAS), personal computer (PC), television (TV), teller machine or self-service kiosk; the embodiments of this application are not specifically limited in this respect.
The image processing apparatus in the embodiments of this application may be a device with an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system; the embodiments of this application are not specifically limited in this respect.
The image processing apparatus provided by the embodiments of this application can implement every process implemented by the foregoing method embodiments; to avoid repetition, this is not repeated here.
Optionally, as shown in Figure 12, the embodiments of this application further provide an electronic device 90, including a processor 91 and a memory 92, the memory 92 storing a program or instructions runnable on the processor 91. When executed by the processor 91, the program or instructions implement each step of the foregoing image processing method embodiments and achieve the same technical effect; to avoid repetition, this is not repeated here.
It should be noted that the electronic device in the embodiments of this application includes both the mobile and non-mobile electronic devices described above.
Figure 13 is a schematic diagram of the hardware structure of an electronic device implementing the embodiments of this application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109 and a processor 110.
Those skilled in the art will understand that the electronic device 100 may further include a power supply (such as a battery) for powering the components, which may be logically connected to the processor 110 through a power management system so that charging, discharging and power-consumption management are handled by that system. The structure shown in Figure 13 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently; this is not repeated here.
The display unit 106 is configured to display a shooting preview image, the shooting preview image including a first object and a second object, the first object being a virtual shooting subject and the second object being an object in the shooting scene corresponding to the shooting preview image.
The processor 110 is configured to take a target object as the focus object and update the shooting preview image.
The embodiments of this application provide an electronic device. After a first object is added to the shooting scene, the electronic device can select the focus subject of the scene from the first object and the preset shooting subjects, re-trigger the focusing and blurring strategies, and produce a better photo or video. That is, after adding the first object changes the composition, the electronic device refocuses and re-blurs according to the current composition so that the resulting image meets the shooting needs of the current scene, improving the focusing and blurring of the image as a whole and thus the quality of images shot by the electronic device in AR shooting scenes.
Optionally, the processor 110 is specifically configured to: when the second object does not include a preset shooting subject, take the first object as the focus object and update the shooting preview image; or, when the second object includes at least one preset shooting subject, determine the target object among the at least one preset shooting subject and the first object, take the target object as the focus object, and update the shooting preview image.
Optionally, the processor 110 is specifically configured to obtain a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image; determine a target feature weight among the first feature weight and the at least one second feature weight, the target feature weight being not less than the first feature weight or any of the at least one second feature weight; and determine the shooting subject corresponding to the target feature weight as the target object; where a feature weight indicates the shooting saliency of a subject in the preview image.
Optionally, the target object is the first object. The processor 110 is specifically configured to perform motion tracking focus on the first object when it moves and update the shooting preview image.
Optionally, the target object is the first object, and the first object includes a first virtual object and a second virtual object. The processor 110 is specifically configured to perform motion tracking focus on the first virtual object and update the shooting preview image when the feature weight of the first virtual object is larger than that of the second virtual object; where a feature weight indicates the shooting saliency of a subject in the preview image.
Optionally, the processor 110 is specifically configured to take the target object as the focus object, perform image processing on the shooting preview image, and obtain a processed shooting preview image.
Optionally, the processor 110 is specifically configured to perform first image processing on the shooting preview image according to the spatial position information of the target object, taking the target object as the focus object, to obtain a processed shooting preview image.
Optionally, the processor 110 is specifically configured to: when the target object moves, determine the planar position of the target object from the feature information of the target object in the shooting scene and the feature information of the shooting subjects in the shooting preview image; and, according to the planar position of the target object and first distance information, take the target object as the focus object and perform second image processing on the shooting preview image to obtain a processed shooting preview image, the first distance information being the distance between the spatial position of the target object and the camera.
Optionally, the processor 110 is further configured to: after the shooting preview image is updated with the target object as the focus object, when the time for which the target object has moved out of the shooting scene is less than or equal to a preset time, take the target object as the focus object and update the shooting preview image once it moves back into the shooting scene; or, when that time is greater than the preset time, take a third object as the focus object and update the shooting preview image, the third object being a shooting subject in the shooting preview image.
The electronic device provided by the embodiments of this application can implement every process implemented by the foregoing method embodiments and achieve the same technical effect; to avoid repetition, this is not repeated here.
For the benefits of the various implementations in this embodiment, reference may be made to the benefits of the corresponding implementations in the foregoing method embodiments; to avoid repetition, they are not repeated here.
It should be understood that, in the embodiments of this application, the input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 106 may include a display panel 1061, which may be configured as a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also called a touch screen and may include a touch detection device and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not repeated here.
The memory 109 may be used to store software programs and various data. It may mainly include a first storage area for programs or instructions and a second storage area for data, where the first storage area may store an operating system and the application programs or instructions required by at least one function (such as a sound playback function or an image playback function). The memory 109 may include volatile memory, non-volatile memory, or both. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM) or flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synch link DRAM (SLDRAM) or Direct Rambus RAM (DRRAM). The memory 109 in the embodiments of this application includes, but is not limited to, these and any other suitable types of memory.
The processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor, which mainly handles operations involving the operating system, user interface and application programs, and a modem processor, such as a baseband processor, which mainly handles wireless communication signals. It should be understood that the modem processor may alternatively not be integrated into the processor 110.
The embodiments of this application further provide a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the foregoing method embodiments and achieve the same technical effect; to avoid repetition, this is not repeated here.
The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes computer-readable storage media such as computer ROM, RAM, magnetic disks and optical discs.
The embodiments of this application further provide a chip, which includes a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement each process of the foregoing method embodiments and achieve the same technical effect; to avoid repetition, this is not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system or a system-on-chip.
The embodiments of this application provide a computer program product stored in a storage medium, the program product being executed by at least one processor to implement each process of the foregoing image processing method embodiments and achieve the same technical effect; to avoid repetition, this is not repeated here.
It should be noted that, in this document, the terms "include", "comprise" and any of their variants are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of this application is not limited to performing the functions in the order shown or discussed; it may also include performing the functions substantially simultaneously or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted or combined. Moreover, features described with reference to certain examples may be combined in other examples.
From the description of the above implementations, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. On this basis, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and including several instructions for causing a terminal (which may be a mobile phone, computer, server, network device or the like) to execute the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Under the inspiration of this application, those of ordinary skill in the art can devise many further forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (20)

  1. An image processing method, the method comprising:
    displaying a shooting preview image, the shooting preview image comprising a first object and a second object, the first object being a virtual shooting subject and the second object being an object in the shooting scene corresponding to the shooting preview image;
    taking a target object as a focus object and updating the shooting preview image;
    wherein the target object comprises one of the first object and the second object.
  2. The method according to claim 1, wherein the taking a target object as a focus object and updating the shooting preview image comprises:
    in a case where the second object does not comprise a preset shooting subject, taking the first object as the focus object and updating the shooting preview image;
    in a case where the second object comprises at least one preset shooting subject, determining the target object among the at least one preset shooting subject and the first object, taking the target object as the focus object, and updating the shooting preview image.
  3. The method according to claim 2, wherein the determining the target object among the at least one preset shooting subject and the first object comprises:
    obtaining a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image;
    determining a target feature weight among the first feature weight and the at least one second feature weight, the target feature weight being not less than the first feature weight or any of the at least one second feature weight;
    determining the shooting subject corresponding to the target feature weight as the target object;
    wherein a feature weight indicates the shooting saliency of a subject in the preview image.
  4. The method according to claim 1, wherein the target object is the first object, and the taking a target object as a focus object and updating the shooting preview image comprises:
    in a case where the first object moves, performing motion tracking focus on the first object and updating the shooting preview image.
  5. The method according to claim 1, wherein the target object is the first object, the first object comprises a first virtual object and a second virtual object, and the taking a target object as a focus object and updating the shooting preview image comprises:
    in a case where the feature weight of the first virtual object is greater than the feature weight of the second virtual object, performing motion tracking focus on the first virtual object and updating the shooting preview image;
    wherein a feature weight indicates the shooting saliency of a subject in the preview image.
  6. The method according to any one of claims 1 to 5, wherein the taking a target object as a focus object and updating the shooting preview image comprises:
    taking the target object as the focus object, performing image processing on the shooting preview image, and obtaining a processed shooting preview image.
  7. The method according to claim 6, wherein the taking the target object as the focus object, performing image processing on the shooting preview image, and obtaining a processed shooting preview image comprises:
    performing first image processing on the shooting preview image according to the spatial position information of the target object, taking the target object as the focus object, to obtain a processed shooting preview image.
  8. The method according to claim 6, wherein the taking the target object as the focus object, performing image processing on the shooting preview image, and obtaining a processed shooting preview image comprises:
    in a case where the target object moves, determining the planar position of the target object from the feature information of the target object in the shooting scene and the feature information of shooting subjects in the shooting preview image;
    according to the planar position of the target object and first distance information, taking the target object as the focus object and performing second image processing on the shooting preview image to obtain a processed shooting preview image, the first distance information being the distance between the spatial position of the target object and a camera.
  9. The method according to claim 1, wherein after the taking a target object as a focus object and updating the shooting preview image, the method further comprises:
    in a case where the time for which the target object has moved out of the shooting scene is less than or equal to a preset time, taking the target object as the focus object and updating the shooting preview image when the target object moves back into the shooting scene;
    in a case where the time for which the target object has moved out of the shooting scene is greater than the preset time, taking a third object as the focus object and updating the shooting preview image, the third object being a shooting subject in the shooting preview image.
  10. An image processing apparatus, the apparatus comprising: a display module and an update module;
    the display module being configured to display a shooting preview image, the shooting preview image comprising a first object and a second object, the first object being a virtual shooting subject and the second object being an object in the shooting scene corresponding to the shooting preview image;
    the update module being configured to take a target object as a focus object and update the shooting preview image;
    wherein the target object comprises one of the first object and the second object.
  11. The apparatus according to claim 10, wherein the update module is specifically configured to: in a case where the second object does not comprise a preset shooting subject, take the first object as the focus object and update the shooting preview image; or, in a case where the second object comprises at least one preset shooting subject, determine the target object among the at least one preset shooting subject and the first object, take the target object as the focus object, and update the shooting preview image.
  12. The apparatus according to claim 11, wherein the update module is specifically configured to obtain a first feature weight of the first object in the preview image and at least one second feature weight of the at least one preset shooting subject in the preview image; determine a target feature weight among the first feature weight and the at least one second feature weight, the target feature weight being not less than the first feature weight or any of the at least one second feature weight; and determine the shooting subject corresponding to the target feature weight as the target object; wherein a feature weight indicates the shooting saliency of a subject in the preview image.
  13. The apparatus according to claim 10, wherein the target object is the first object, and the update module is specifically configured to perform motion tracking focus on the first object when the first object moves and update the shooting preview image.
  14. The apparatus according to claim 10, wherein the target object is the first object, the first object comprises a first virtual object and a second virtual object, and the update module is specifically configured to perform motion tracking focus on the first virtual object and update the shooting preview image in a case where the feature weight of the first virtual object is greater than the feature weight of the second virtual object; wherein a feature weight indicates the shooting saliency of a subject in the preview image.
  15. The apparatus according to claim 10, wherein the update module is further configured to: in a case where the time for which the target object has moved out of the shooting scene is less than or equal to a preset time, take the target object as the focus object and update the shooting preview image when the target object moves back into the shooting scene; or, in a case where the time for which the target object has moved out of the shooting scene is greater than the preset time, take a third object as the focus object and update the shooting preview image, the third object being a shooting subject in the shooting preview image.
  16. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and runnable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 9.
  17. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 9.
  18. A chip, comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to run a program or instructions to implement the image processing method according to any one of claims 1 to 9.
  19. A computer program product, the program product being executed by at least one processor to implement the image processing method according to any one of claims 1 to 9.
  20. An electronic device, configured to perform the image processing method according to any one of claims 1 to 9.
PCT/CN2023/109158 2022-07-29 2023-07-25 Image processing method and apparatus, electronic device and storage medium WO2024022349A1 (zh)

Applications Claiming Priority (2)

Application Number / Priority Date / Filing Date / Title
CN202210908193.5A 2022-07-29 2022-07-29 Image processing method and apparatus, electronic device and storage medium (CN115278084A)
CN202210908193.5 2022-07-29

Publications (1)

Publication Number
WO2024022349A1 (zh)




Also Published As

CN115278084A (zh), published 2022-11-01


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 23845558; Country of ref document: EP; Kind code of ref document: A1.