WO2022247768A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2022247768A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
layer
input
image
layers
Prior art date
Application number
PCT/CN2022/094358
Other languages
English (en)
French (fr)
Inventor
杜霆
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2022247768A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Definitions

  • The present application belongs to the field of electronic technology, and in particular relates to an image processing method and an electronic device.
  • For example, the user selects a beautification tool to apply a beautification effect to the people in an image; as another example, the user selects a blur tool and manually selects a target area to apply a blur effect to that area of the image.
  • In practice, however, the user may want to beautify only certain people, while the beautification effect applies to all recognizable people; or the user may want to blur only a certain object, but errors in the manual operation cause the blurred area to cover other objects.
  • The purpose of the embodiments of the present application is to provide an image processing method that solves the prior-art problem that performing image processing on one shooting object in an image inevitably affects other shooting objects, so that the desired effect cannot be achieved.
  • In a first aspect, an embodiment of the present application provides an image processing method, the method including: recognizing shooting objects in a first target image; dividing at least one target shooting object into a target layer; receiving a first input to the target layer; and, in response to the first input, processing the at least one target shooting object in a first target processing manner associated with the first input; where the first target image includes at least two layers arranged in a preset layer order.
  • In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus including: an identification module, configured to recognize shooting objects in a first target image; a first division module, configured to divide at least one target shooting object into a target layer; a first input receiving module, configured to receive a first input to the target layer; and a first input response module, configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input; where the first target image includes at least two layers arranged in a preset layer order.
  • In a third aspect, an embodiment of the present application provides an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method described in the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is used to run a program or instruction so as to implement the method described in the first aspect.
  • In a sixth aspect, an embodiment of the present application provides a computer program product, where the program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the method described in the first aspect.
  • In a seventh aspect, an embodiment of the present application provides an electronic device, where the electronic device is configured to execute the method described in the first aspect.
  • In this way, in the embodiments of the present application, the first target image can be divided into at least two layers, each of which displays corresponding shooting objects of the first target image. On this basis, all shooting objects in the first target image are divided among different layers, and the user can take any layer as the target layer to perform image processing on that layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect generated by the first target processing manner applies only to the target shooting objects in the target layer and does not affect the shooting objects in other layers.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present application;
  • FIGS. 2 to 9 are schematic display diagrams of an electronic device according to an embodiment of the present application;
  • FIG. 10 is a block diagram of an image processing apparatus according to an embodiment of the present application;
  • FIG. 11 is a first schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
  • FIG. 12 is a second schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
  • FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present application.
  • The method is applied to an electronic device and includes:
  • Step S1: Recognize the shooting objects in the first target image.
  • Optionally, in one case, before shooting, the picture displayed on the shooting preview interface is the first target image.
  • Correspondingly, this embodiment can be used in image shooting scenarios.
  • Optionally, in another case, after shooting, the output picture is the first target image.
  • Correspondingly, this embodiment can be used in image editing scenarios.
  • Optionally, the first target image is either a static image (such as a photo or picture) or a dynamic image (such as a video or animated image).
  • In this step, shooting objects include people, objects, and scenery.
  • Step S2: Divide at least one target shooting object into a target layer.
  • Here, the first target image includes at least two layers arranged in a preset layer order.
  • In this step, the first target image can be divided into at least two layers arranged in sequence according to the preset layer order, each layer displaying its corresponding shooting objects.
  • Any one of these layers, namely the target layer, displays at least one target shooting object.
  • For example, for a first target image showing two people and a tree, the two people can be placed in one layer and the tree in another. Furthermore, in a scene where the two people occlude the tree, the layer containing the two people can be placed above the layer containing the tree.
  • It should be noted that the at least two layers are arranged in the preset layer order, which can be determined according to the final display effect; that is, it must be ensured that each shooting object in the first target image is displayed as it appeared before layering, so that the shooting objects do not occlude one another incorrectly.
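  • As an illustration of how such a preset layer order could be honored when the layers are flattened for display, the minimal sketch below composites RGBA layers bottom to top, so that shooting objects in upper layers correctly occlude those below. This assumes each layer is stored as an RGBA array that is transparent outside its shooting objects; the function name and data layout are illustrative, not taken from the patent.

```python
import numpy as np

def composite_layers(layers):
    """Flatten RGBA layers (given bottom-first, in the preset layer order)
    with standard "over" alpha compositing, so upper layers occlude lower ones.
    Output color is effectively straight alpha when the bottom layer is opaque."""
    h, w = layers[0].shape[:2]
    out = np.zeros((h, w, 4), dtype=np.float32)
    for layer in layers:
        src = layer.astype(np.float32) / 255.0
        a = src[..., 3:4]                      # per-pixel opacity of this layer
        out[..., :3] = src[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return (out * 255).astype(np.uint8)
```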
  • Step S3: Receive a first input to the target layer.
  • The first input includes a touch input performed by the user on the screen, not limited to tapping, sliding, or dragging. The first input may also be a first operation, which includes the user's mid-air operations, not limited to gesture operations and facial action operations, as well as the user's operations on physical buttons of the device, not limited to pressing. Moreover, the first input may comprise one or more inputs, which may be continuous or intermittent.
  • The first input is used to perform image processing on the target shooting objects in the target layer.
  • For example, tapping the target layer can bring up an image processing menu containing various image processing methods, from which the user can select as needed.
  • Step S4: In response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input.
  • Optionally, the at least one target shooting object in this step includes all target shooting objects in the target layer.
  • Optionally, the image processing menu includes the first target processing manner, and the user can select it through the first input.
  • Therefore, in this step, at least one target shooting object in the target layer can be processed.
  • An example application scenario: in shooting mode, the shooting preview interface displays the first target image, which includes at least the target layer, in which at least one target shooting object is displayed; the user selects the "record" control to start recording a video; during recording, the user can select a first target processing manner for the target layer, such as blurring, so as to blur at least one target shooting object in the target layer.
  • Another example application scenario: in editing mode, the editing interface displays the first target image, which includes at least the target layer, in which at least one target shooting object is displayed; the user can select a first target processing manner for the target layer, such as blurring, so as to blur at least one target shooting object in the target layer.
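  • The per-layer blurring described in these scenarios could look like the following minimal sketch, assuming OpenCV-style image arrays and a binary mask marking the target layer's region (the function and mask representation are illustrative assumptions, not the patent's implementation):

```python
import cv2
import numpy as np

def blur_target_layer(frame, layer_mask, ksize=21):
    """Blur only the pixels that belong to the target layer.

    frame: HxWx3 uint8 image; layer_mask: HxW uint8, nonzero where the
    target layer's shooting objects are. Pixels of other layers are untouched.
    """
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    keep = (layer_mask > 0)[..., None]   # broadcast the mask over the channels
    return np.where(keep, blurred, frame)
```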
  • Based on the image processing manner of this embodiment, the user can apply corresponding processing manners to multiple target layers in sequence.
  • In one case, the user taps a target layer to bring up the image processing menu, which contains various image processing methods from which the user can select as needed; the user then taps the next target layer to bring up the menu again and select as needed; and so on.
  • This case is better suited to applying a different processing manner to each target layer.
  • In another case, the user taps multiple target layers in succession (with an interval of less than 2 seconds between adjacent taps) to bring up the image processing menu, which contains various image processing methods from which the user can select as needed. This case is better suited to applying the same processing manner to every selected target layer.
  • Based on the image processing manner of this embodiment, the user can also apply multiple processing manners to one target layer in sequence.
  • In one case, the user taps a target layer to bring up the image processing menu and selects a first processing manner as needed, so that the special effect generated by the first processing manner is maintained in the target layer for one period of time; after that period, the user taps the target layer again to bring up the menu and selects a second processing manner, so that the special effect generated by the second processing manner is maintained in the target layer for another period of time.
  • This case is more applicable when the first target image is a dynamic image.
  • Optionally, the above two periods do not intersect, so that the two special effects do not interfere with each other.
  • Optionally, the above two periods are adjacent, so as to switch from one special effect to the other. To keep the picture of the first target image continuous, switching from one effect to the next involves a 3-second gradual-transition delay: in a sub-period near the end of the earlier period, the effect generated by the first processing manner gradually fades out of the target layer, and in a sub-period at the very start of the later period, the effect generated by the second processing manner gradually fades in. These two sub-periods together make up the aforementioned 3-second transition.
  • The first processing manner and the second processing manner listed in this embodiment are both within the scope of the first target processing manner.
  • Optionally, when the first target image is a dynamic image, the user can cancel a special effect at any time, or switch to other special effect settings.
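  • The timed effect switch described above, with its 3-second gradual transition, could be sketched as a simple cross-fade between the two per-layer effects. The linear ramp and function names below are illustrative assumptions:

```python
def second_effect_weight(t, t_switch, fade=3.0):
    """Blend weight of the second effect at time t (seconds): 0 well before
    the switch, 1 well after it, ramping linearly across the 3 s window
    centred on the boundary between the two periods."""
    start = t_switch - fade / 2.0
    return min(max((t - start) / fade, 0.0), 1.0)

def apply_switched_effects(layer_frame, effect1, effect2, t, t_switch):
    """Render the target layer at time t: effect1 fades out while effect2
    fades in, so the picture stays continuous across the switch."""
    w = second_effect_weight(t, t_switch)
    a, b = effect1(layer_frame), effect2(layer_frame)
    return ((1.0 - w) * a.astype(float) + w * b.astype(float)).astype(a.dtype)
```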
  • In this way, in the embodiments of the present application, the first target image can be divided into at least two layers, each of which displays corresponding shooting objects of the first target image. On this basis, all shooting objects in the first target image are divided among different layers, and the user can take any layer as the target layer to perform image processing on that layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect generated by the first target processing manner applies only to the target shooting objects in the target layer and does not affect the shooting objects in other layers. It can be seen that, with the image processing method provided by the embodiments of the present application, performing image processing on one shooting object in an image does not affect other shooting objects, so the desired effect can be achieved.
  • In the flow of an image processing method according to another embodiment of the present application, after step S1, the method further includes:
  • Step A1: Divide the first target image into N1 layers, where N1 > 1 and N1 is a positive integer.
  • Step A2: Divide at least one shooting object into each corresponding layer.
  • In this embodiment, the first target image is divided into N1 layers, each of which includes at least one shooting object.
  • Step A3: According to the contour area of the at least one shooting object in a layer, determine the area of that layer, so that the region corresponding to the layer covers the regions corresponding to the at least one shooting object.
  • The at least one shooting object in step A3 includes all shooting objects in the corresponding layer.
  • After layering, the area of each layer can be determined by an artificial intelligence (AI) algorithm according to the specific shooting objects in that layer.
  • Optionally, the area of a layer must satisfy the following: the region corresponding to the layer covers the regions corresponding to all shooting objects in the layer.
  • Referring to FIG. 2, for example, the shooting objects of the first target image include a teacher, two students, and a tree; the teacher is divided into one layer, the two students into another layer, and the tree into yet another layer.
  • Correspondingly, the area of the layer containing the teacher (the teacher layer in the figure) can be an area covering the teacher's body, the area of the layer containing the students (the student layer in the figure) can be an area covering the bodies of the two students, and the area of the layer containing the tree (the object tree layer in the figure) can be an area covering the extent of the tree.
  • Furthermore, the regions of the first target image other than the teacher, students, and tree can be placed in yet another layer as a background layer, whose area can be the area of the entire first target image.
  • Optionally, the shape corresponding to a layer's area may be a rectangle, a circle, or the like.
  • Optionally, the shape corresponding to a layer's area may also be the combined contour shape of all shooting objects in the layer.
  • It should be noted that, for the region corresponding to a layer to cover the regions of its at least one shooting object, the layer's area may be larger than the contour area of those objects; and to prevent the created layer from occluding other shooting objects, all regions of the layer other than where its shooting objects are located are transparent.
  • In this embodiment, on the basis of layering the first target image, a method for determining the area of each layer is further provided.
  • Ideally, the area of each layer is such that the layer's region just covers the regions of all shooting objects in it. In this way, the layer neither takes in too much content irrelevant to its shooting objects nor omits any necessary shooting object, so that image processing can be performed on the shooting objects of any layer in a targeted manner.
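  • A minimal sketch of building such a layer, assuming per-object binary masks are already available (for example from a segmentation step): the layer keeps only its shooting objects opaque, is transparent elsewhere, and its bounding rectangle can serve as a rectangular layer area. The names and representation are illustrative.

```python
import numpy as np

def layer_from_masks(image_rgb, masks):
    """Build one RGBA layer whose region just covers all of its shooting
    objects: the union of the subject masks stays opaque and everything
    else is fully transparent, so the layer never occludes other layers."""
    union = np.zeros(masks[0].shape, dtype=bool)
    for m in masks:
        union |= m.astype(bool)
    layer = np.zeros((*union.shape, 4), dtype=np.uint8)
    layer[..., :3] = image_rgb
    layer[..., 3] = union.astype(np.uint8) * 255
    ys, xs = np.nonzero(union)             # tight rectangular layer area
    bbox = (xs.min(), ys.min(), xs.max(), ys.max()) if xs.size else None
    return layer, bbox
```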
  • In the flow of an image processing method according to another embodiment of the present application, before step S2, the method further includes:
  • Step B1: Divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer.
  • The image processing manner of this embodiment applies to both image shooting and image editing.
  • Optionally, the "layering" function can be selected before shooting an image, or before editing an image, so that different layers can be identified based on the first target image to be processed.
  • In this embodiment, based on the N2 shooting objects recognized in the first target image, N2 layers can be created, each shooting object occupying one layer.
  • Referring to FIG. 3, the first target image shows a scene in which students A and B bow to greet the teacher, with a tree beside them.
  • The first target image is divided into a teacher layer, an object tree layer, a student A layer, and a student B layer.
  • Optionally, the divided layers are displayed in the first target image with system-default layer names, or with user-defined layer names; alternatively, no layer name is displayed and only each layer's border is shown, for the user to operate on.
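  • A sketch of step B1 under the assumption that an instance-segmentation model (such as a Mask R-CNN) has already produced one binary mask and class label per recognized shooting object; the naming scheme and data layout are illustrative:

```python
import numpy as np

def divide_into_layers(image_rgb, instance_masks, labels):
    """One layer per recognized shooting object: each HxW boolean mask
    becomes an RGBA layer, named e.g. 'student A layer', 'student B layer',
    that is transparent everywhere except over its own object."""
    layers, counts = {}, {}
    for mask, label in zip(instance_masks, labels):
        counts[label] = counts.get(label, 0) + 1
        name = f"{label} {chr(ord('A') + counts[label] - 1)} layer"
        layer = np.zeros((*mask.shape, 4), dtype=np.uint8)
        layer[..., :3] = image_rgb                   # keep the source pixels
        layer[..., 3] = mask.astype(np.uint8) * 255  # show only this object
        layers[name] = layer
    return layers
```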
  • Step B2: Receive a second input.
  • Here, the second input is used to select a target mode identifier, or the second input is used to select at least two layers.
  • The second input includes a touch input performed by the user on the screen, not limited to tapping, sliding, or dragging. The second input may also be a second operation, which includes the user's mid-air operations, not limited to gesture operations and facial action operations, as well as the user's operations on physical buttons of the device, not limited to pressing. Moreover, the second input may comprise one or more inputs, which may be continuous or intermittent.
  • In one case, the second input is used to select a target mode identifier.
  • An identifier in this application refers to the text, symbols, images, interfaces, time, or the like used to indicate information; controls or other containers can serve as carriers for displaying the information, including but not limited to text identifiers, symbol identifiers, and image identifiers.
  • The target mode identifier in this embodiment indicates the name of the corresponding target mode.
  • Referring to FIG. 4, according to the scene recognized in the first target image, a "single-person layering mode", "multi-person layering mode", "character layering mode", and "scenery layering mode" can be provided. For example, in the single-person layering mode, all people are divided into one layer; the single-person layer in FIG. 4 does not include the tree, and the region of the single-person layer corresponding to the tree is transparent.
  • The target mode may be any one of the above modes.
  • It should be noted that in this embodiment, the scene shown in the first target image can be recognized based on all shooting objects in the first target image, so that multiple mode identifiers corresponding to the scene can be displayed for the user to choose from, meeting users' different image processing needs.
  • Optionally, based on the above explanation of target modes, the layers containing shooting objects of the same type may be merged; for example, the layers of all people may be merged into one layer, or the layers of all objects may be merged into one layer.
  • For different scenes, the number of corresponding modes differs, as do the specific modes.
  • In addition, multiple mode identifiers can be arranged in order. For example, based on the user's operation history, the modes the user selects most often are ranked first; or, based on other users' operation history, the modes most users select often are ranked first; or, based on the proportions of the various types of shooting objects, the modes most strongly associated with the main shooting objects are ranked first.
  • For different scenes, the arrangement order of the multiple mode identifiers also differs.
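  • One way to realize such an ordering is to rank mode identifiers by the current user's selection history first and by the overall selection history second; the sketch below is a toy stand-in for these rules, with illustrative names and hypothetical histories:

```python
from collections import Counter

def order_mode_identifiers(modes, user_history, global_history):
    """Rank candidate mode identifiers for display, most relevant first:
    modes this user picked often come first, ties broken by how often
    users overall picked them."""
    mine, everyone = Counter(user_history), Counter(global_history)
    return sorted(modes, key=lambda m: (-mine[m], -everyone[m]))

# Example (hypothetical histories):
# order_mode_identifiers(
#     ["single-person", "multi-person", "character", "scenery"],
#     user_history=["multi-person", "multi-person", "scenery"],
#     global_history=["character", "character", "multi-person"])
# -> ["multi-person", "scenery", "character", "single-person"]
```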
  • In another case, the second input is used to select at least two layers.
  • Referring to FIG. 3, for example, the first target image shows a scene in which students A and B bow to greet the teacher, with a tree beside them; the first target image is divided into a teacher layer, an object tree layer, a student A layer, and a student B layer; the user can choose to merge the student A layer and the student B layer into a student layer.
  • Step B3: In response to the second input, if the second input is used to select a target mode identifier, merge at least two layers into one layer according to the target rule corresponding to the target mode identifier; or, if the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
  • In this embodiment, layers can first be divided according to the shooting objects in the first target image, so that one shooting object corresponds to one layer. Multiple modes are then provided for the user to choose from, so that, based on the user's selection, some associated layers are merged into one layer. Alternatively, the user can manually merge some associated layers into one layer. In this way, the user can perform image processing separately on the merged layer, so that all shooting objects in that layer are processed at once, simplifying user operations.
  • Of the two approaches, selecting a target mode is more automated, while selecting layers manually gives the user finer control; the two complement each other to cover all scenarios.
  • A layer merged according to this embodiment, or an unmerged layer, can serve as the target layer in this application.
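  • For layers of the same image (identical pixels, disjoint opaque regions), merging reduces to taking the union of the visible regions. A minimal sketch, assuming the RGBA representation used in the earlier sketches:

```python
import numpy as np

def merge_layers(layers):
    """Merge several single-object RGBA layers of the same image into one
    layer (e.g. 'student A layer' + 'student B layer' -> 'student layer'):
    a pixel is visible in the merged layer if it is visible in any input."""
    merged = layers[0].copy()
    for layer in layers[1:]:
        visible = layer[..., 3] > 0
        merged[visible] = layer[visible]
    return merged
```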
  • In an image processing method according to another embodiment of the present application, before shooting, the user can set a suitable layer division scheme according to the first target image displayed on the shooting preview interface, so that during shooting, as well as in subsequent image editing, layered image processing services are provided to the user according to the preset layer division scheme.
  • In the flow of an image processing method according to another embodiment of the present application, after step S2, the method further includes:
  • Step C1: Receive a third input to the target layer.
  • The third input includes a touch input performed by the user on the screen, not limited to tapping, sliding, or dragging. The third input may also be a third operation, which includes the user's mid-air operations, not limited to gesture operations and facial action operations, as well as the user's operations on physical buttons of the device, not limited to pressing. Moreover, the third input may comprise one or more inputs, which may be continuous or intermittent.
  • The third input is used to select the target layer and to select either the delete or the copy processing option.
  • Step C2: In response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input.
  • Here, the second target processing manner is either deletion processing or copy processing.
  • Optionally, in the first target image, where a region of one layer overlaps the regions of other layers, that region of the layer corresponds to blank content.
  • Correspondingly, after the "object tree layer" is deleted, the region corresponding to the "object tree layer" becomes blank content, and the user can perform subsequent processing to improve the display effect of the first target image.
  • When the first target image is a dynamic image, the third input is further associated with the target playback period of the target layer in the first target image to which the second target processing manner applies.
  • Correspondingly, when the first target image is a dynamic image, the user long-presses the "object tree layer" to select it as the target layer, which brings up the layer processing menu containing three options: "Delete", "Split", and "Copy". The user can first tap the "Split" option to split the entire playback period of the dynamic image into several sub-periods, and then select the "Delete" or "Copy" option for any sub-period, so that only the target layer within that sub-period is deleted or copied.
  • Any such sub-period is the target playback period of the target layer in the first target image to which the second target processing manner applies.
  • In this embodiment, the user can not only perform unified image processing on all shooting objects in any layer, but also delete, split, or copy any layer as a whole, flexibly exploiting the independence of each layer and further improving the display effect of the first target image.
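  • A sketch of the "Split" then "Delete" flow for a dynamic image, assuming the video is held as a list of per-frame dictionaries mapping layer names to their RGBA layers (an illustrative representation, not the patent's):

```python
def delete_layer_in_period(video_layers, layer_name, start_s, end_s, fps=30):
    """Delete one layer only within the chosen sub-period: frames inside
    [start_s, end_s) lose the layer (its region becomes blank content),
    while frames outside the sub-period keep it."""
    first, last = int(start_s * fps), int(end_s * fps)
    for i, frame_layers in enumerate(video_layers):
        if first <= i < last:
            frame_layers.pop(layer_name, None)
    return video_layers
```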
  • In the flow of an image processing method according to another embodiment of the present application, after step S2, the method further includes:
  • Step D1: Receive a fourth input to the target layer.
  • The fourth input includes a touch input performed by the user on the screen, not limited to tapping, sliding, or dragging. The fourth input may also be a fourth operation, which includes the user's mid-air operations, not limited to gesture operations and facial action operations, as well as the user's operations on physical buttons of the device, not limited to pressing. Moreover, the fourth input may comprise one or more inputs, which may be continuous or intermittent.
  • The fourth input includes a sub-input for selecting the target layer in the first target image, a sub-input for pasting the target layer into a second target image, and a sub-input for setting the target arrangement position information of the target layer in the second target image.
  • For example, the user can copy the target layer in the first target image, open the second target image, right-click and select the "Paste" option to paste the target layer at the target position, and then, after selecting the target layer, set the target arrangement position information, that is, at which layer position the target layer sits in the second target image.
  • Optionally, setting options such as "place on top layer" and "place on bottom layer" are included.
  • Step D2: In response to the fourth input, display the target layer in the second target image according to the target arrangement position information associated with the fourth input.
  • When both the first target image and the second target image are dynamic images, the fourth input is further associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
  • For example, if an object car layer pasted into the first target image from another image sits in front of all layers of the first target image and occludes the other layers, the user can adjust the arrangement position of the object car layer accordingly.
  • Optionally, both the first target image and the second target image in this embodiment are dynamic images, such as videos.
  • Correspondingly, the first playback period is a certain playback period of the first target image, and the second playback period is a certain playback period of the second target image; in different playback periods, the state of a shooting object is not the same.
  • For example, if a car is stationary during the first 10 s of one video and moving during the next 10 s, the user can copy the appropriate time period from the former video according to the scene to be shown in another video, and paste it into a suitable time period of the latter video.
  • In this embodiment, multiple layered images can be combined and edited, which provides rich material for image editing, so that varied and colorful images can be produced.
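  • The paste-with-arrangement-position step could be sketched as a plain insertion into the second image's layer stack; compositing the returned stack bottom to top (as in the earlier compositing sketch) then yields the new occlusion order. The names below are illustrative:

```python
def paste_layer(dst_layers, src_layer, position):
    """Insert a copied layer into the second image's layer stack at the
    chosen arrangement position: 0 places it on the bottom layer,
    len(dst_layers) places it on the top layer."""
    dst = list(dst_layers)                 # leave the original stack intact
    dst.insert(min(max(position, 0), len(dst)), src_layer)
    return dst
```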
  • Optionally, the first target processing manner includes at least one of lighting processing, blur processing, and whitening processing.
  • For example, the user taps the screen area corresponding to the object tree layer to bring up the special-effect setting menu and selects the blur effect, and the picture of the object tree layer gradually changes to a blurred effect (as shown in FIG. 9).
  • Optionally, the first target processing manner also includes the image processing methods indicated by the "theme", "clips", "music", "text", "speed", and "filter" identifiers shown in FIG. 9.
  • In summary, this application proposes a layered image processing method: an image is recorded and processed in layers, such as a background layer, an object layer, and a person layer. Each layer can be edited and cropped separately; a given layer can be removed from display, or a layer from one image can be added into the layer structure of another image for combination.
  • Intelligent layering scene selection: when recording a video, the user turns on the layered video recording function, which provides a variety of intelligent video layering scenes to choose from, such as scenery layering mode, single-person layering mode, multi-person layering mode, and character layering mode. When a layering scene is selected, the current video is recorded in layers according to the layering mode set for that scene. For example, if multi-person layering mode is selected, the video is recorded as a background layer, an object layer, a person A layer, and a person B layer (assuming there are only two people in the video).
  • Per-layer shooting special effects: when recording a video, the user can set and add shooting special effects for different layers. For example, during one shot the video is divided into a background layer, an object layer, a person A layer, and a person B layer; a blur effect can be set for the background layer, and a whitening effect for the person A layer.
  • For example, a layered video has a background layer, an object layer, a person A layer, and a person B layer, and each layer can be edited independently without affecting the other layers, such as adding a blur effect to the background layer or a whitening effect to the person A layer.
  • Multiple layered videos can be combined and edited by layer. For example, given video 1 and video 2, the background layer of video 1 can be copied separately to replace the background layer of video 2, or the person A layer of video 1 can be copied and added to video 2.
  • The above improvements apply to the shooting of pictures and videos, as well as to the editing of pictures and videos.
  • Shooting special effects can be set separately for different layers without affecting one another, improving the shooting quality of videos and the like, and improving the user experience.
  • Each layer can be edited separately without affecting the others, including directly deleting a certain layer of a video, which removes much of the difficulty of editing videos and the like.
  • In this way, videos and pictures with different layer structures can be obtained, and combined editing of multiple layered videos and pictures can produce many rich results. Layer editing operations such as deletion, splitting, and special-effect settings can be applied to a certain time period of a video or to the entire video, and each layer can have its own shooting special effects, such as images and filters, set independently without affecting the others.
  • The image processing method provided in the embodiments of the present application may be executed by an image processing apparatus, or by a control module in the image processing apparatus for executing the image processing method.
  • In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided by the embodiments of the present application.
  • FIG. 10 shows a block diagram of an image processing apparatus according to another embodiment of the present application, which includes:
  • an identification module 10, configured to recognize the shooting objects in a first target image;
  • a first division module 20, configured to divide at least one target shooting object into a target layer;
  • a first input receiving module 30, configured to receive a first input to the target layer; and
  • a first input response module 40, configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input;
  • where the first target image includes at least two layers arranged in a preset layer order.
  • In this way, in the embodiments of the present application, the first target image can be divided into at least two layers, each of which displays corresponding shooting objects of the first target image. On this basis, all shooting objects in the first target image are divided among different layers, and the user can take any layer as the target layer to perform image processing on that layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect generated by the first target processing manner applies only to the target shooting objects in the target layer and does not affect the shooting objects in other layers. It can be seen that performing image processing on one shooting object in an image does not affect other shooting objects, so the desired effect can be achieved.
  • Optionally, the apparatus further includes:
  • a second division module, configured to divide the first target image into N1 layers, where N1 > 1 and N1 is a positive integer;
  • a third division module, configured to divide at least one shooting object into each corresponding layer; and
  • a determination module, configured to determine the area of each layer according to the contour area of the at least one shooting object in that layer, so that the region corresponding to the layer covers the regions corresponding to the at least one shooting object.
  • Optionally, the apparatus further includes:
  • a fourth division module, configured to divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer;
  • a second input receiving module, configured to receive a second input; and
  • a second input response module, configured to, in response to the second input, merge at least two layers into one layer according to the target rule corresponding to a target mode identifier when the second input is used to select the target mode identifier; or, when the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
  • Optionally, the apparatus further includes:
  • a third input receiving module, configured to receive a third input to the target layer; and
  • a third input response module, configured to, in response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input;
  • where the second target processing manner is either deletion processing or copy processing;
  • and, when the first target image is a dynamic image, the third input is further associated with the target playback period of the target layer in the first target image to which the second target processing manner applies.
  • Optionally, the apparatus further includes:
  • a fourth input receiving module, configured to receive a fourth input to the target layer; and
  • a fourth input response module, configured to, in response to the fourth input, display the target layer in a second target image according to the target arrangement position information associated with the fourth input;
  • where, when both the first target image and the second target image are dynamic images, the fourth input is further associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
  • The image processing apparatus in the embodiments of the present application may be a device, or may be a component, integrated circuit, or chip in a terminal.
  • The device may be a mobile electronic device or a non-mobile electronic device.
  • For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • The non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • The image processing apparatus in the embodiments of the present application may be a device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • The image processing apparatus provided in the embodiments of the present application can implement each process implemented in the foregoing method embodiments; to avoid repetition, details are not repeated here.
  • An embodiment of the present application further provides an electronic device 100, including a processor 101, a memory 102, and a program or instruction stored in the memory 102 and executable on the processor 101. When the program or instruction is executed by the processor 101, each process of the above image processing method embodiments is implemented with the same technical effect; to avoid repetition, details are not repeated here.
  • The electronic devices in the embodiments of the present application include the above-mentioned mobile and non-mobile electronic devices.
  • FIG. 12 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 1000 includes, but is not limited to, a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
  • Those skilled in the art will understand that the electronic device 1000 may also include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 1010 through a power management system, which implements functions such as charge management, discharge management, and power consumption management.
  • The structure of the electronic device shown in FIG. 12 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, and details are not repeated here.
  • The processor 1010 is configured to recognize the shooting objects in a first target image and divide at least one target shooting object into a target layer; the user input unit 1007 is configured to receive a first input to the target layer; and the processor 1010 is further configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input; where the first target image includes at least two layers arranged in a preset layer order.
  • In this way, in the embodiments of the present application, the first target image can be divided into at least two layers, each of which displays corresponding shooting objects of the first target image. On this basis, all shooting objects in the first target image are divided among different layers, and the user can take any layer as the target layer to perform image processing on that layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect generated by the first target processing manner applies only to the target shooting objects in the target layer and does not affect the shooting objects in other layers. It can be seen that performing image processing on one shooting object in an image does not affect other shooting objects, so the desired effect can be achieved.
  • Optionally, the processor 1010 is further configured to divide the first target image into N1 layers, where N1 > 1 and N1 is a positive integer; divide at least one shooting object into each corresponding layer; and determine the area of each layer according to the contour area of the at least one shooting object in that layer, so that the region corresponding to the layer covers the regions corresponding to the at least one shooting object.
  • Optionally, the processor 1010 is further configured to divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer; the user input unit 1007 is further configured to receive a second input; and the processor 1010 is further configured to, in response to the second input, merge at least two layers into one layer according to the target rule corresponding to a target mode identifier when the second input is used to select the target mode identifier, or, when the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
  • Optionally, the user input unit 1007 is further configured to receive a third input to the target layer; the processor 1010 is further configured to, in response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input, where the second target processing manner is either deletion processing or copy processing; and, when the first target image is a dynamic image, the third input is further associated with the target playback period of the target layer in the first target image to which the second target processing manner applies.
  • Optionally, the user input unit 1007 is further configured to receive a fourth input to the target layer; the processor 1010 is further configured to, in response to the fourth input, display the target layer in a second target image according to the target arrangement position information associated with the fourth input; where, when both the first target image and the second target image are dynamic images, the fourth input is associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
  • It should be understood that, in this embodiment of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera).
  • The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like.
  • The user input unit 1007 includes a touch panel 10071 and other input devices 10072.
  • The touch panel 10071 is also called a touch screen.
  • The touch panel 10071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
  • The memory 1009 can be used to store software programs and various data, including but not limited to application programs and an operating system.
  • The processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
  • An embodiment of the present application further provides a readable storage medium storing a program or instruction. When the program or instruction is executed by a processor, each process of the above image processing method embodiments is implemented with the same technical effect; to avoid repetition, details are not repeated here.
  • The processor is the processor in the electronic device described in the above embodiments.
  • The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • An embodiment of the present application further provides a chip, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement each process of the above image processing method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
  • It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
  • It should be noted that, in this document, the terms "comprise" and "include", or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • Furthermore, the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner, or in reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
  • The disclosed apparatuses and methods may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative.
  • The division into units is merely a logical functional division; in actual implementation, there may be other ways of dividing them. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of a given embodiment.
  • The functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The modules, units, and subunits can be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this disclosure, or combinations thereof.
  • For a software implementation, the technologies described in the embodiments of the present disclosure may be implemented through modules (such as procedures and functions) that perform the functions described in the embodiments of the present disclosure.
  • Software code can be stored in a memory and executed by a processor.
  • The memory can be implemented within the processor or external to the processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses an image processing method and an electronic device, belonging to the field of electronic technology. The image processing method includes: recognizing shooting objects in a first target image; dividing at least one target shooting object into a target layer; receiving a first input to the target layer; and, in response to the first input, processing the at least one target shooting object in a first target processing manner associated with the first input; where the first target image includes at least two layers arranged in a preset layer order.

Description

Image processing method and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202110581001.X, filed with the China Patent Office on May 26, 2021 and entitled "Image processing method and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of electronic technology, and specifically relates to an image processing method and an electronic device.
Background Art
At present, people use electronic devices to take photos and record videos more and more frequently. During shooting, or after shooting is finished, some processing of the image is needed for the captured image to achieve the desired effect.
For example, the user selects a beautification tool so that the people in the image receive a beautification effect; as another example, the user selects a blur tool and manually selects a target area so that the target area of the image receives a blur effect. In actual operation, however, the user may want to beautify only certain people, while the beautification effect applies to all recognizable people; or the user may want to blur only a certain object, but errors in the user's manual operation cause the blurred area to cover other objects.
It can be seen that, in the prior art, performing image processing on one shooting object in an image inevitably affects other shooting objects, so that the desired effect cannot be achieved.
Summary of the Invention
The purpose of the embodiments of this application is to provide an image processing method that can solve the prior-art problem that performing image processing on one shooting object in an image inevitably affects other shooting objects, so that the desired effect cannot be achieved.
In a first aspect, an embodiment of this application provides an image processing method, the method including: recognizing shooting objects in a first target image; dividing at least one target shooting object into a target layer; receiving a first input to the target layer; and, in response to the first input, processing the at least one target shooting object in a first target processing manner associated with the first input; where the first target image includes at least two layers arranged in a preset layer order.
In a second aspect, an embodiment of this application provides an image processing apparatus, the apparatus including: an identification module, configured to recognize shooting objects in a first target image; a first division module, configured to divide at least one target shooting object into a target layer; a first input receiving module, configured to receive a first input to the target layer; and a first input response module, configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input; where the first target image includes at least two layers arranged in a preset layer order.
In a third aspect, an embodiment of this application provides an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method described in the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement the method described in the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product, where the program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the method described in the first aspect.
In a seventh aspect, an embodiment of this application provides an electronic device, where the electronic device is configured to execute the method described in the first aspect.
In this way, in the embodiments of this application, the first target image can be divided into at least two layers, each of which displays corresponding shooting objects of the first target image. On this basis, all shooting objects in the first target image are divided among different layers. Further, the user can take any layer as the target layer to perform image processing on that layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect generated by the first target processing manner applies only to the target shooting objects in the target layer and does not affect the shooting objects in other layers.
Brief Description of the Drawings
FIG. 1 is a flowchart of an image processing method according to an embodiment of this application;
FIGS. 2 to 9 are schematic display diagrams of an electronic device according to an embodiment of this application;
FIG. 10 is a block diagram of an image processing apparatus according to an embodiment of this application;
FIG. 11 is a first schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;
FIG. 12 is a second schematic diagram of the hardware structure of an electronic device according to an embodiment of this application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this application will be described clearly below with reference to the accompanying drawings of the embodiments of this application. Evidently, the described embodiments are some, but not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here; the objects distinguished by "first", "second", and the like are usually of one type, and the number of objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The image processing method provided by the embodiments of this application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
Referring to FIG. 1, a flowchart of an image processing method according to an embodiment of this application is shown. The method is applied to an electronic device and includes:
Step S1: Recognize the shooting objects in a first target image.
Optionally, in one case, before shooting, the picture displayed on the shooting preview interface is the first target image.
Correspondingly, this embodiment can be used in image shooting scenarios.
Optionally, in another case, after shooting, the output picture is the first target image.
Correspondingly, this embodiment can be used in image editing scenarios.
Optionally, the first target image is either a static image (such as a photo or picture) or a dynamic image (such as a video or animated image).
In this step, shooting objects include people, objects, and scenery.
Step S2: Divide at least one target shooting object into a target layer.
Here, the first target image includes at least two layers arranged in a preset layer order.
In this step, the first target image can be divided into at least two layers arranged in sequence according to the preset layer order, each layer displaying its corresponding shooting objects.
Any one of these layers, namely the target layer, displays at least one target shooting object.
For example, for a first target image showing two people and a tree, the two people can be placed in one layer and the tree in another. Further, in a scene where the two people occlude the tree, the layer containing the two people can be placed above the layer containing the tree.
It should be noted that the at least two layers are arranged in the preset layer order, which can be determined according to the final display effect; that is, it must be ensured that each shooting object in the first target image is displayed as it appeared before layering, so that the shooting objects do not occlude one another incorrectly.
Step S3: Receive a first input to the target layer.
The first input includes a touch input performed by the user on the screen, not limited to tapping, sliding, or dragging. The first input may also be a first operation, which includes the user's mid-air operations, not limited to gesture operations and facial action operations, as well as the user's operations on physical buttons of the device, not limited to pressing. Moreover, the first input may comprise one or more inputs, which may be continuous or intermittent.
The first input is used to perform image processing on the target shooting objects in the target layer.
For example, the user taps the target layer to bring up an image processing menu containing various image processing methods, from which the user can select as needed.
Step S4: In response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input.
Optionally, the at least one target shooting object in this step includes all target shooting objects in the target layer.
Optionally, the image processing menu includes the first target processing manner, and the user can select it through the first input.
Therefore, in this step, at least one target shooting object in the target layer can be processed.
An example application scenario: in shooting mode, the shooting preview interface displays the first target image, which includes at least the target layer, in which at least one target shooting object is displayed; the user selects the "record" control to start recording a video; during recording, the user can select a first target processing manner for the target layer, such as blurring, so as to blur at least one target shooting object in the target layer.
Another example application scenario: in editing mode, the editing interface displays the first target image, which includes at least the target layer, in which at least one target shooting object is displayed; the user can select a first target processing manner for the target layer, such as blurring, so as to blur at least one target shooting object in the target layer.
Based on the image processing manner of this embodiment, the user can apply corresponding processing manners to multiple target layers in sequence.
For reference, in one case, the user taps a target layer to bring up the image processing menu, which contains various image processing methods from which the user can select as needed; the user then taps the next target layer to bring up the menu again and select as needed; and so on. This case is better suited to applying a different processing manner to each target layer.
For reference, in another case, the user taps multiple target layers in succession (with an interval of less than 2 seconds between adjacent taps) to bring up the image processing menu, which contains various image processing methods from which the user can select as needed. This case is better suited to applying the same processing manner to each target layer.
Based on the image processing manner of this embodiment, the user can also apply multiple processing manners to one target layer in sequence.
For reference, in one case, the user taps a target layer to bring up the image processing menu and selects a first processing manner as needed, so that the special effect generated by the first processing manner is maintained in the target layer for one period of time; after that period, the user taps the target layer again to bring up the menu and selects a second processing manner, so that the special effect generated by the second processing manner is maintained in the target layer for another period of time. This case is more applicable when the first target image is a dynamic image.
Optionally, the above two periods do not intersect, so that the two special effects do not interfere with each other.
Optionally, the above two periods are adjacent, so as to switch between special effects. To keep the picture of the first target image continuous, switching from one effect to the next involves a 3-second gradual-transition delay: in a sub-period near the end of the earlier period, the effect generated by the first processing manner gradually fades out of the target layer, and in a sub-period at the very start of the later period, the effect generated by the second processing manner gradually fades in. These two sub-periods make up the aforementioned 3-second transition delay.
The first processing manner and the second processing manner listed in this embodiment are both within the scope of the first target processing manner.
Optionally, when the first target image is a dynamic image, the user can cancel a special effect at any time, or switch to other special effect settings.
In this way, in the embodiments of this application, the first target image can be divided into at least two layers, each of which displays corresponding shooting objects of the first target image. On this basis, all shooting objects in the first target image are divided among different layers. Further, the user can take any layer as the target layer to perform image processing on that layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect generated by the first target processing manner applies only to the target shooting objects in the target layer and does not affect the shooting objects in other layers. It can be seen that, with the image processing method provided by the embodiments of this application, performing image processing on one shooting object in an image does not affect other shooting objects, so the desired effect is achieved.
在本申请另一个实施例的图像处理方法的流程中,在步骤S1之后,还包括:
步骤A1:将第一目标图像划分为N1个图层。其中,N1>1,N1为正整数。
步骤A2:将至少一个拍摄对象划分在对应的一个图层中。
在本实施例中,将第一目标图像划分为N1个图层,每个图层中包括至少一个拍摄对象。
步骤A3:根据图层中的至少一个拍摄对象的轮廓面积,确定对应的图层的面积,以使图层对应区域覆盖对应的至少一个拍摄对象对应区域。
其中,步骤A3中的至少一个拍摄对象包括对应图层中的所有拍摄对象。
In this embodiment, after layering, the area of each layer can be determined by an artificial intelligence (AI) algorithm according to the specific shooting objects in that layer.
Optionally, the area of a layer must satisfy: the region corresponding to the layer covers the regions corresponding to all the shooting objects in that layer.
Referring to Fig. 2, for example, the shooting objects of the first target image include a teacher, two students, and a tree. The teacher is divided into one layer, the two students into another layer, and the tree into yet another layer. Correspondingly, the area of the layer containing the teacher (the teacher layer in the figure) can be an area covering the extent of the teacher's body, the area of the layer containing the students (the student layer in the figure) can be an area covering the bodies of both students, and the area of the layer containing the tree (the object-tree layer in the figure) can be an area covering the extent of the tree. Further, the region of the first target image other than the teacher, students, and tree can be placed in yet another layer as the background layer, whose area can be the area of the entire first target image.
Optionally, the shape corresponding to the area of a layer may be a rectangle, a circle, or the like.
Optionally, the shape corresponding to the area of a layer may also be the combined contour shape of all the shooting objects in that layer.
It should be noted that, in order for the region corresponding to a layer to cover the region corresponding to the at least one shooting object, the area of the layer may be larger than the contour area of the corresponding at least one shooting object. To prevent the created layer from occluding shooting objects, all regions of the layer other than the regions where its shooting objects are located are transparent.
In this embodiment, on the basis of layering the first target image, a method for determining the area of each layer is further provided. It is most suitable for the area of each layer to satisfy: the region corresponding to the layer just covers the regions corresponding to all the shooting objects in that layer. In this way, the layer neither includes too much content unrelated to its shooting objects, nor omits any necessary shooting object, so that the shooting objects in any layer can be processed in a targeted manner.
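One way to realize step A3, sketched here with OpenCV under the assumption that each shooting object comes with a binary segmentation mask from the identification step; the helper name layer_rect and the rectangular layer shape are illustrative choices, not the patent's prescribed algorithm:

```python
import cv2
import numpy as np

def layer_rect(masks: list) -> tuple:
    """Smallest upright rectangle covering every shooting object in one layer.

    masks: per-object binary masks of shape (H, W); the returned (x, y, w, h)
    is taken as the layer area, so the layer region covers all of its objects.
    """
    combined = np.zeros_like(masks[0], dtype=np.uint8)
    for m in masks:
        combined |= (m > 0).astype(np.uint8)
    return cv2.boundingRect(combined)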
In the flow of an image processing method according to another embodiment of the present application, before step S2, the method further includes:
Step B1: Divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer.
The image processing manner of this embodiment is applied to image shooting and image editing. Optionally, before shooting an image or before editing an image, a "layering" function can be selected, so that different levels can be identified based on the first target image to be processed.
In this embodiment, based on the N2 shooting objects identified in the first target image, N2 layers can be divided, with each shooting object occupying one layer.
Referring to Fig. 3, the first target image shows a scene in which student A and student B bow to greet a teacher, with a tree beside them. The first target image is divided into a teacher layer, an object-tree layer, a student-A layer, and a student-B layer.
Optionally, the divided layers are displayed in the first target image with system-default layer names; or they are displayed in the first target image with user-defined layer names; or no layer names are displayed and only the layer borders are shown, for the user to perform inputs on.
Step B2: Receive a second input.
The second input is used to select a target mode identifier; or the second input is used to select at least two layers.
The second input includes a touch input performed by the user on the screen, not limited to a tap, slide, or drag input. The second input may also be a second operation, where the second operation includes a contactless operation by the user, not limited to a gesture operation or a facial action operation, and the second operation also includes the user's operation on a physical key of the device, not limited to a press operation. Moreover, the second input includes one or more inputs, where multiple inputs may be continuous or intermittent.
In one case, the second input is used to select a target mode identifier.
An identifier in this application is text, a symbol, an image, an interface, time, or the like used to indicate information; a control or another container can serve as the carrier for displaying the information, including but not limited to a text identifier, a symbol identifier, and an image identifier.
The target mode identifier in this embodiment is used to indicate the name of the corresponding target mode.
Referring to Fig. 4, according to the scene identified in the first target image, a "single-person layering mode", a "multi-person layering mode", a "people-and-objects layering mode", and a "scenery layering mode" can be provided.
For example, referring to Fig. 4, in the "single-person layering mode", all people are divided into one layer. The single-person layer in Fig. 4 does not include the tree; the region of the single-person layer corresponding to the tree is transparent.
In the "multi-person layering mode", different people are divided into different layers.
In the "people-and-objects layering mode", all people are divided into one layer, and all objects are divided into another layer.
In the "scenery layering mode", all the scenery is divided into one layer.
The target mode may be any one of the above modes.
It should be noted that, in this embodiment, the scene shown by the first target image can be identified based on all the shooting objects in the first target image, so that multiple mode identifiers corresponding to the scene can be displayed for the user to select from, so as to meet the user's different image processing needs.
Optionally, based on the above explanation of the target modes, the layers containing shooting objects of the same type can be merged.
For example, the layers containing each person are merged into one layer; for another example, the layers containing each object are merged into one layer.
For different scenes, the number of corresponding modes differs, and the specific corresponding modes also differ.
In addition, the multiple mode identifiers can be arranged in order. For example, according to the user's historical operations, the modes frequently selected by the user are ranked first; for another example, according to the historical operations of other users, the modes frequently selected by most users are ranked first; for yet another example, according to the proportion of each type of shooting object, the modes strongly associated with the main shooting objects are ranked first.
For different scenes, the arrangement order of the multiple mode identifiers also differs.
It is conceivable that, based on the method for determining layer areas in the previous embodiment, for the same scene, the area of each layer will also change with the mode.
In another case, the second input is used to select at least two layers.
Referring to Fig. 3, for example, the first target image shows a scene in which student A and student B bow to greet a teacher, with a tree beside them; the first target image is divided into a teacher layer, an object-tree layer, a student-A layer, and a student-B layer; the user can select the student-A layer and the student-B layer and merge them into a student layer.
Step B3: In response to the second input, in the case where the second input is used to select a target mode identifier, merge at least two layers into one layer according to the target rule corresponding to the target mode identifier; or, in the case where the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
In this step, based on the above two cases, some layers can be combined, so that the user can process the combined layer separately.
In this embodiment, layers are first divided according to the shooting objects in the first target image, so that one shooting object corresponds to one layer. Then, multiple modes are provided for the user to select from. Based on the user's selection, some associated layers can be merged into one layer. Alternatively, the user can manually merge some associated layers into one layer. In this way, the user can perform image processing on the merged layer alone, so that all the shooting objects in the layer can be processed at one time, thereby simplifying user operations. Comparing the two methods, selecting a target mode is more intelligent, while selecting layers directly is more user-controlled; the two complement each other to cover all scenarios. A code sketch of such a merge follows below.
Based on this embodiment, both merged layers and unmerged layers can serve as the target layer in this application.
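A minimal sketch of mode-driven merging, reusing the hypothetical Layer type from the earlier sketch and assuming each layer carries a grouping key (for example, a category label such as "person" or "object" produced by the recognition step). The group_key callback stands in for the "target rule" of step B3 and is an assumption introduced here:

```python
from collections import defaultdict
from typing import Callable, Dict, Hashable, List

def merge_layers(layers: List["Layer"],
                 group_key: Callable[["Layer"], Hashable]) -> List["Layer"]:
    """Merge all layers that share a group key into a single layer.

    For a people-and-objects mode, group_key could map every person layer to
    'person' and every object layer to 'object'; for a manual merge, it could
    map exactly the user-selected layers to one shared key.
    """
    groups: Dict[Hashable, List["Layer"]] = defaultdict(list)
    for layer in layers:  # input assumed ordered bottom-to-top
        groups[group_key(layer)].append(layer)

    merged: List["Layer"] = []
    for members in groups.values():
        base = members[0]
        for other in members[1:]:
            # Paint the other layer's pixels into the base and union the masks;
            # the base keeps its z_index in the preset layer order.
            base.pixels[other.mask] = other.pixels[other.mask]
            base.mask |= other.mask
        merged.append(base)
    return merged
```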
In an image processing method according to another embodiment of the present application, before shooting, the user can reasonably set a layer division scheme according to the first target image displayed on the shooting preview interface, so that during shooting, as well as during subsequent image editing, layered image processing services are provided to the user according to the preset layer division scheme.
In the flow of an image processing method according to another embodiment of the present application, after step S2, the method further includes:
Step C1: Receive a third input to the target layer.
The third input includes a touch input performed by the user on the screen, not limited to a tap, slide, or drag input. The third input may also be a third operation, where the third operation includes a contactless operation by the user, not limited to a gesture operation or a facial action operation, and the third operation also includes the user's operation on a physical key of the device, not limited to a press operation. Moreover, the third input includes one or more inputs, where multiple inputs may be continuous or intermittent.
The third input is used to select the target layer and to select either the delete or the copy processing option.
Referring to Fig. 5, for example, if after shooting the user does not want the tree between the teacher and the students, the user long-presses the "object-tree layer" to select it as the target layer, which brings up a layer processing menu including the "Delete" and "Copy" options, and the user taps the "Delete" option.
Step C2: In response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input.
The second target processing manner includes either deletion processing or copy processing.
Referring to Fig. 6, after the user selects the "Delete" option for the "object-tree layer", the "object-tree layer" is deleted.
Optionally, in the first target image, for a certain layer, where one of its regions overlaps regions of other layers, that region of the layer corresponds to blank content.
Correspondingly, after the "object-tree layer" is deleted, the region corresponding to the "object-tree layer" becomes blank content, and the user can perform subsequent processing to refine the display effect of the first target image.
When the first target image is a dynamic image, the third input is further associated with: the target playback period, in the first target image, of the target layer corresponding to the second target processing manner.
Correspondingly, when the first target image is a dynamic image, the user long-presses the "object-tree layer" to select it as the target layer, which brings up a layer processing menu including the "Delete", "Split", and "Copy" options. The user can first tap the "Split" option to split the entire playback period of the dynamic image into several sub-periods; further, for any of these sub-periods, the user selects the "Delete" or "Copy" option, so that the target layer is deleted or copied only within that sub-period. Such an arbitrary sub-period is exactly the target playback period, in the first target image, of the target layer corresponding to the second target processing manner.
In this embodiment, the user can not only perform unified image processing on all the shooting objects in any layer, but can also delete, split, or copy any layer as a whole, so as to flexibly exploit the independence of each layer and further beautify the display effect of the first target image.
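A minimal sketch of the period-limited deletion described above, assuming the dynamic image is held as a list of per-frame layer lists and a predicate identifies the target layer; the function name, the predicate, and the fixed frame rate are illustrative assumptions:

```python
from typing import Callable, List

def delete_layer_in_period(frames: List[List["Layer"]],
                           is_target: Callable[["Layer"], bool],
                           t0: float, t1: float, fps: float = 30.0) -> None:
    """Remove the target layer from frames inside [t0, t1] seconds only.

    Frames outside the target playback period keep their full layer stack,
    so the deletion of step C2 is confined to the chosen sub-period.
    """
    first = max(int(t0 * fps), 0)
    last = min(int(t1 * fps), len(frames) - 1)
    for i in range(first, last + 1):
        frames[i] = [layer for layer in frames[i] if not is_target(layer)]
```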
In the flow of an image processing method according to another embodiment of the present application, after step S2, the method further includes:
Step D1: Receive a fourth input to the target layer.
The fourth input includes a touch input performed by the user on the screen, not limited to a tap, slide, or drag input. The fourth input may also be a fourth operation, where the fourth operation includes a contactless operation by the user, not limited to a gesture operation or a facial action operation, and the fourth operation also includes the user's operation on a physical key of the device, not limited to a press operation. Moreover, the fourth input includes one or more inputs, where multiple inputs may be continuous or intermittent.
The fourth input includes a sub-input for selecting the target layer in the first target image, a sub-input for pasting the target layer into a second target image, and a sub-input for setting the target arrangement position information of the target layer in the second target image.
For example, in combination with the previous embodiment, the user can copy the target layer in the first target image, then open the second target image, right-click, and select the "Paste" option to paste the target layer at the target position; after selecting the target layer, the user sets the target arrangement position information, i.e., at which level the target layer is located in the second target image. The settings include multiple options such as "on top" and "at bottom".
Step D2: In response to the fourth input, display the target layer in the second target image according to the target arrangement position information associated with the fourth input.
When both the first target image and the second target image are dynamic images, the fourth input is associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
Referring to Fig. 7, for example, an object-car layer pasted from another image into the first target image sits in front of all the levels of the first target image, causing the object-car layer to occlude the other layers; the user can therefore adaptively adjust the arrangement position of the object-car layer.
Optionally, the first target image and the second target image in this embodiment are both dynamic images, such as videos.
Therefore, when copying the target layer in the first target image, the user also needs to select the first playback period to be copied; and when pasting the target layer into the second target image, the user also needs to select the second playback period into which it is pasted.
The first playback period is a certain playback period of the first target image, and the second playback period is a certain playback period of the second target image.
This is because the state of the same shooting object differs across periods. For example, in one video, a car is stationary during the first 10 s and moving during the next 10 s; according to the scene to be presented in another video, the user can copy the suitable period from the former video and paste it into the suitable period of the latter video.
In this embodiment, multiple layered images can be combined and edited, providing material for image editing, so that rich and colorful images can be produced.
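A minimal sketch of step D2 for two videos, reusing the per-frame layer-list representation and hypothetical Layer type from the earlier sketches; the deep copy keeps the source video untouched, the fixed frame rate is an assumption, and the picked layer is assumed to exist in every source frame of the first playback period:

```python
import copy
from typing import Callable, List, Tuple

def paste_layer_across_videos(src: List[List["Layer"]],
                              dst: List[List["Layer"]],
                              pick: Callable[["Layer"], bool],
                              src_period: Tuple[float, float],
                              dst_period: Tuple[float, float],
                              z_index: int, fps: float = 30.0) -> None:
    """Copy the picked layer from src over its first playback period and
    show it in dst over the second playback period, at the stacking position
    given by the target arrangement position information."""
    s0 = int(src_period[0] * fps)
    d0 = int(dst_period[0] * fps)
    n = min(int((src_period[1] - src_period[0]) * fps),
            int((dst_period[1] - dst_period[0]) * fps),
            len(src) - s0, len(dst) - d0)
    for k in range(n):
        clone = copy.deepcopy(next(l for l in src[s0 + k] if pick(l)))
        clone.z_index = z_index   # e.g. max existing z + 1 for "on top"
        dst[d0 + k].append(clone)
```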
Referring to Fig. 8 and Fig. 9, in an image processing method according to another embodiment of the present application, the first target processing manner includes at least one of glow processing, blur processing, and whitening processing.
Referring to Fig. 8, for example, the user taps the picture region corresponding to the object-tree layer to bring up the special-effect setting menu and selects the blur effect for the picture, and the picture of the object-tree layer gradually transitions to the blur effect (as shown in Fig. 9).
Furthermore, the first target processing manner also includes the image processing manners contained under the "Theme", "Clip", "Music", "Text", "Speed", and "Filter" identifiers shown in Fig. 9.
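As an illustrative sketch of a per-layer effect such as the blur just mentioned, implemented here with OpenCV's Gaussian blur on the hypothetical Layer type from the earlier sketches (the patent does not prescribe a specific blur algorithm). Because only the layer's own masked pixels are rewritten, the effect cannot spill onto shooting objects in other layers:

```python
import cv2

def blur_layer(layer: "Layer", ksize: int = 21) -> None:
    """Blur only this layer's pixels; other layers are untouched."""
    # ksize must be odd for cv2.GaussianBlur.
    blurred = cv2.GaussianBlur(layer.pixels, (ksize, ksize), 0)
    layer.pixels[layer.mask] = blurred[layer.mask]
```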
In summary, the existing image processing methods have the following deficiencies:
First: during shooting, the background, objects, people, and so on form a single whole; if a special effect is set for one shooting subject, it affects the other subjects, and special effects cannot be set for each shooting subject individually.
Second: during editing, the background, objects, people, and so on form a single whole; adding an editing effect to a certain person or object affects other people or objects, and removing a person or object from the original video takes a lot of time and leaves obvious traces.
Third: combined editing of a single scene across multiple images is not possible. For example, with image 1 and image 2, there is no way to replace the background of image 2 with the background of image 1.
To address the above deficiencies, the present application proposes a layered image processing method: the image is recorded and processed in layers, such as a background layer, an object layer, a person layer, and so on. Each layer can be edited and cropped separately; a certain layer of the image can be removed and no longer displayed, and a certain layer of the image can also be added into the layer hierarchy of another image for combination.
Specifically, this includes:
First, manually controlled layered shooting: when recording a video, the user turns on the layered video recording function, and the layer hierarchy of the recorded video is automatically identified, for example a background layer, an object-A layer, an object-B layer, a person-A layer, and a person-B layer; the user can set and adjust which layers to divide and which layers can be merged, for example controlling the object-A layer and the object-B layer to be merged into one layer for shooting.
Second, intelligent layered scene selection: when recording a video, the user turns on the layered video recording function, and multiple intelligent video layering scenes can be offered for selection, such as the scenery layering mode, the single-person layering mode, the multi-person layering mode, and the people-and-objects layering mode. When a layering scene is selected, the current video is recorded in layers according to the layering mode set for the selected scene. For example, if the multi-person layering mode is selected, the video is recorded divided into a background layer, an object layer, a person-A layer, and a person-B layer (assuming there are only two people in the video).
Third, layered setting of shooting effects: when recording a video, the user can set and add shooting effects for different layers. For example, in one shooting process the video is divided into a background layer, an object layer, a person-A layer, and a person-B layer; a blur effect can be set for the background layer and a whitening effect for the person-A layer.
Fourth, layered video editing: for example, if a layered video has a background layer, an object layer, a person-A layer, and a person-B layer, each level can be edited separately without affecting the other levels. For example, adding a blur effect to the background layer or a whitening effect to the person-A layer does not affect the other layers.
Fifth, combination of multiple layered videos: multiple layered videos can be combined and edited at the level of layers. For example, with video 1 and video 2, the background layer of video 1 can be copied out separately to replace the background layer of video 2; the person-A layer of video 1 can also be copied out and added into video 2.
The above improvements are applicable to the shooting of pictures and videos, as well as to the editing of pictures and videos.
It can be seen that, based on the image processing method in the present application, shooting effects can be set for different levels separately without mutual influence, thereby improving the shooting quality of videos and the like and enhancing the user experience. Meanwhile, based on the image processing method in the present application, each level can also be edited separately without mutual influence, including directly deleting a certain level from a video, which solves a difficulty in video editing.
Furthermore, based on automatic layer identification and arbitrary combination, videos and pictures with different layer hierarchies can be obtained; through combined editing of multiple layered videos and pictures, many rich and colorful videos and pictures can be composed. The editing of layers in a video, such as deletion, splitting, and effect setting, can be applied to a certain time period of the video or to the entire video; shooting effects such as picture and filter effects can be set for each layer independently, without mutual influence.
It should be noted that the image processing method provided in the embodiments of the present application may be performed by an image processing apparatus, or by a control module in the image processing apparatus for performing the image processing method. In the embodiments of the present application, the image processing apparatus performing the image processing method is taken as an example to describe the image processing apparatus provided by the embodiments of the present application.
Fig. 10 shows a block diagram of an image processing apparatus according to another embodiment of the present application. The apparatus includes:
an identification module 10, configured to identify the shooting objects in a first target image;
a first division module 20, configured to divide at least one target shooting object into a target layer;
a first input receiving module 30, configured to receive a first input to the target layer;
a first input response module 40, configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input;
wherein the first target image includes: at least two layers arranged in a preset layer order.
In this way, in the embodiments of the present application, the first target image can be divided into at least two layers, and each layer is used to display the corresponding shooting objects in the first target image. On this basis, all the shooting objects in the first target image are divided into different layers. Further, the user can take any layer as the target layer, so as to perform image processing on the target layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect produced by the first target processing manner is effective only for the target shooting objects in the target layer, without affecting the shooting objects in other layers. It can be seen that, based on the image processing method provided by the embodiments of the present application, performing image processing on a certain shooting object in an image does not affect other shooting objects, thereby achieving the desired effect.
Optionally, the apparatus further includes:
a second division module, configured to divide the first target image into N1 layers, where N1 > 1 and N1 is a positive integer;
a third division module, configured to divide at least one shooting object into a corresponding layer;
a determination module, configured to determine the area of the corresponding layer according to the contour area of the at least one shooting object in the layer, so that the region corresponding to the layer covers the region corresponding to the at least one shooting object.
Optionally, the apparatus further includes:
a fourth division module, configured to divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer;
a second input receiving module, configured to receive a second input;
a second input response module, configured to, in response to the second input, in the case where the second input is used to select a target mode identifier, merge at least two layers into one layer according to the target rule corresponding to the target mode identifier; or, in the case where the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
Optionally, the apparatus further includes:
a third input receiving module, configured to receive a third input to the target layer;
a third input response module, configured to, in response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input;
wherein the second target processing manner includes either deletion processing or copy processing;
and, when the first target image is a dynamic image, the third input is further associated with: the target playback period, in the first target image, of the target layer corresponding to the second target processing manner.
Optionally, the apparatus further includes:
a fourth input receiving module, configured to receive a fourth input to the target layer;
a fourth input response module, configured to, in response to the fourth input, display the target layer in a second target image according to the target arrangement position information associated with the fourth input;
wherein, when both the first target image and the second target image are dynamic images, the fourth input is associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
The image processing apparatus in the embodiments of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiments of the present application can implement each process implemented by the above method embodiments; to avoid repetition, details are not repeated here.
Optionally, as shown in Fig. 11, an embodiment of the present application further provides an electronic device 100, including a processor 101, a memory 102, and a program or instruction stored in the memory 102 and runnable on the processor 101. When executed by the processor 101, the program or instruction implements each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiments of the present application includes the above-mentioned mobile electronic device and non-mobile electronic device.
Fig. 12 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
Those skilled in the art can understand that the electronic device 1000 may further include a power supply (such as a battery) for supplying power to each component. The power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The structure of the electronic device shown in Fig. 12 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components, which will not be repeated here.
The processor 1010 is configured to identify the shooting objects in a first target image and divide at least one target shooting object into a target layer; the user input unit 1007 is configured to receive a first input to the target layer; the processor 1010 is further configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input; wherein the first target image includes: at least two layers arranged in a preset layer order.
In this way, in the embodiments of the present application, the first target image can be divided into at least two layers, and each layer is used to display the corresponding shooting objects in the first target image. On this basis, all the shooting objects in the first target image are divided into different layers. Further, the user can take any layer as the target layer, so as to perform image processing on the target layer alone. Through the first input, the user can select the target layer and the first target processing manner, so that the special effect produced by the first target processing manner is effective only for the target shooting objects in the target layer, without affecting the shooting objects in other layers. It can be seen that, based on the image processing method provided by the embodiments of the present application, performing image processing on a certain shooting object in an image does not affect other shooting objects, thereby achieving the desired effect.
Optionally, the processor 1010 is further configured to divide the first target image into N1 layers, where N1 > 1 and N1 is a positive integer; divide at least one shooting object into a corresponding layer; and determine the area of the corresponding layer according to the contour area of the at least one shooting object in the layer, so that the region corresponding to the layer covers the region corresponding to the at least one shooting object.
Optionally, the processor 1010 is further configured to divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer; the user input unit 1007 is further configured to receive a second input; the processor 1010 is further configured to, in response to the second input, in the case where the second input is used to select a target mode identifier, merge at least two layers into one layer according to the target rule corresponding to the target mode identifier; or, in the case where the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
Optionally, the user input unit 1007 is further configured to receive a third input to the target layer; the processor 1010 is further configured to, in response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input; wherein the second target processing manner includes either deletion processing or copy processing; and, when the first target image is a dynamic image, the third input is further associated with: the target playback period, in the first target image, of the target layer corresponding to the second target processing manner.
Optionally, the user input unit 1007 is further configured to receive a fourth input to the target layer; the processor 1010 is further configured to, in response to the fourth input, display the target layer in a second target image according to the target arrangement position information associated with the fourth input; wherein, when both the first target image and the second target image are dynamic images, the fourth input is associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processing unit 10041 processes image data of static pictures or videos obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts: a touch detection apparatus and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here. The memory 1009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium, on which a program or instruction is stored. When executed by a processor, the program or instruction implements each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above image processing method embodiments and achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
It should be noted that, in this document, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus including that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed, and may also include performing functions in a substantially simultaneous manner or in a reverse order according to the functions involved; for example, the described methods may be performed in an order different from the described one, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional technicians may use different methods for each specific application to implement the described functions, but such implementations should not be considered beyond the scope of the present disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
It can be understood that the embodiments described in the embodiments of the present disclosure may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the modules, units, and sub-units may be implemented in one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in the present disclosure, or combinations thereof.
For software implementation, the techniques described in the embodiments of the present disclosure may be implemented by modules (such as procedures and functions) that perform the functions described in the embodiments of the present disclosure. Software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or outside the processor.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific implementations. The above specific implementations are merely illustrative rather than restrictive. Under the inspiration of the present application, those of ordinary skill in the art can make many other forms without departing from the spirit of the present application and the protection scope of the claims, all of which fall within the protection of the present application.

Claims (15)

  1. An image processing method, the method comprising:
    identifying the shooting objects in a first target image;
    dividing at least one target shooting object into a target layer;
    receiving a first input to the target layer;
    in response to the first input, processing the at least one target shooting object in a first target processing manner associated with the first input;
    wherein the first target image comprises: at least two layers arranged in a preset layer order.
  2. The method according to claim 1, wherein after the identifying the shooting objects in a first target image, the method further comprises:
    dividing the first target image into N1 layers, where N1 > 1 and N1 is a positive integer;
    dividing at least one shooting object into a corresponding layer;
    determining the area of the corresponding layer according to the contour area of the at least one shooting object in the layer, so that the region corresponding to the layer covers the region corresponding to the at least one shooting object.
  3. The method according to claim 1, wherein before the dividing at least one target shooting object into a target layer, the method further comprises:
    dividing the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer;
    receiving a second input;
    in response to the second input, in a case where the second input is used to select a target mode identifier, merging at least two layers into one layer according to a target rule corresponding to the target mode identifier; or, in a case where the second input is used to select at least two layers, merging the at least two layers associated with the second input into one layer.
  4. The method according to claim 1, wherein after the dividing at least one target shooting object into a target layer, the method further comprises:
    receiving a third input to the target layer;
    in response to the third input, processing the target layer in the first target image in a second target processing manner associated with the third input;
    wherein the second target processing manner comprises either deletion processing or copy processing;
    and, in a case where the first target image is a dynamic image, the third input is further associated with: a target playback period, in the first target image, of the target layer corresponding to the second target processing manner.
  5. The method according to claim 1, wherein after the dividing at least one target shooting object into a target layer, the method further comprises:
    receiving a fourth input to the target layer;
    in response to the fourth input, displaying the target layer in a second target image according to target arrangement position information associated with the fourth input;
    wherein, in a case where both the first target image and the second target image are dynamic images, the fourth input is associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
  6. An image processing apparatus, the apparatus comprising:
    an identification module, configured to identify the shooting objects in a first target image;
    a first division module, configured to divide at least one target shooting object into a target layer;
    a first input receiving module, configured to receive a first input to the target layer;
    a first input response module, configured to, in response to the first input, process the at least one target shooting object in a first target processing manner associated with the first input;
    wherein the first target image comprises: at least two layers arranged in a preset layer order.
  7. The apparatus according to claim 6, wherein the apparatus further comprises:
    a second division module, configured to divide the first target image into N1 layers, where N1 > 1 and N1 is a positive integer;
    a third division module, configured to divide at least one shooting object into a corresponding layer;
    a determination module, configured to determine the area of the corresponding layer according to the contour area of the at least one shooting object in the layer, so that the region corresponding to the layer covers the region corresponding to the at least one shooting object.
  8. The apparatus according to claim 6, wherein the apparatus further comprises:
    a fourth division module, configured to divide the N2 shooting objects in the first target image into N2 corresponding layers, respectively, where N2 > 1 and N2 is a positive integer;
    a second input receiving module, configured to receive a second input;
    a second input response module, configured to, in response to the second input, in a case where the second input is used to select a target mode identifier, merge at least two layers into one layer according to a target rule corresponding to the target mode identifier; or, in a case where the second input is used to select at least two layers, merge the at least two layers associated with the second input into one layer.
  9. The apparatus according to claim 6, wherein the apparatus further comprises:
    a third input receiving module, configured to receive a third input to the target layer;
    a third input response module, configured to, in response to the third input, process the target layer in the first target image in a second target processing manner associated with the third input;
    wherein the second target processing manner comprises either deletion processing or copy processing;
    and, in a case where the first target image is a dynamic image, the third input is further associated with: a target playback period, in the first target image, of the target layer corresponding to the second target processing manner.
  10. The apparatus according to claim 6, wherein the apparatus further comprises:
    a fourth input receiving module, configured to receive a fourth input to the target layer;
    a fourth input response module, configured to, in response to the fourth input, display the target layer in a second target image according to target arrangement position information associated with the fourth input;
    wherein, in a case where both the first target image and the second target image are dynamic images, the fourth input is associated with a first playback period of the target layer in the first target image and a second playback period of the target layer in the second target image.
  11. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and runnable on the processor, wherein when the program or instruction is executed by the processor, the steps of the image processing method according to any one of claims 1 to 5 are implemented.
  12. A readable storage medium, storing a program or instruction, wherein when the program or instruction is executed by a processor, the steps of the image processing method according to any one of claims 1 to 5 are implemented.
  13. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the image processing method according to any one of claims 1 to 5.
  14. A computer program product, wherein the program product is stored in a non-volatile storage medium, and the program product is executed by at least one processor to implement the image processing method according to any one of claims 1 to 5.
  15. An electronic device, wherein the electronic device is configured to perform the image processing method according to any one of claims 1 to 5.
PCT/CN2022/094358 2021-05-26 2022-05-23 图像处理方法和电子设备 WO2022247768A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110581001.XA CN113438412A (zh) 2021-05-26 2021-05-26 图像处理方法和电子设备
CN202110581001.X 2021-05-26

Publications (1)

Publication Number Publication Date
WO2022247768A1 true WO2022247768A1 (zh) 2022-12-01

Family

ID=77802884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094358 WO2022247768A1 (zh) 2021-05-26 2022-05-23 图像处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN113438412A (zh)
WO (1) WO2022247768A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438412A (zh) * 2021-05-26 2021-09-24 维沃移动通信有限公司 图像处理方法和电子设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903213A (zh) * 2012-12-24 2014-07-02 联想(北京)有限公司 一种拍摄方法和电子设备
CN108037872A (zh) * 2017-11-29 2018-05-15 上海爱优威软件开发有限公司 一种照片编辑方法及终端设备
US20180286069A1 (en) * 2015-12-24 2018-10-04 Fujitsu Limited Image processing apparatus and image processing method
CN110418056A (zh) * 2019-07-16 2019-11-05 Oppo广东移动通信有限公司 一种图像处理方法、装置、存储介质及电子设备
CN111899155A (zh) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 视频处理方法、装置、计算机设备及存储介质
CN112419218A (zh) * 2020-11-19 2021-02-26 维沃移动通信有限公司 图像处理方法、装置及电子设备
CN112637490A (zh) * 2020-12-18 2021-04-09 咪咕文化科技有限公司 视频制作方法、装置、电子设备及存储介质
CN113438412A (zh) * 2021-05-26 2021-09-24 维沃移动通信有限公司 图像处理方法和电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918563A (zh) * 2017-09-30 2018-04-17 华为技术有限公司 一种复制和粘贴的方法、数据处理装置和用户设备
CN111526380B (zh) * 2020-03-20 2023-03-31 北京达佳互联信息技术有限公司 视频处理方法、装置、服务器、电子设备及存储介质
CN112423095A (zh) * 2020-11-02 2021-02-26 广州博冠信息科技有限公司 游戏视频录制方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN113438412A (zh) 2021-09-24

Similar Documents

Publication Publication Date Title
DK180452B1 (en) USER INTERFACES FOR RECEIVING AND HANDLING VISUAL MEDIA
US9496005B2 (en) Electronic apparatus, display control method and program for displaying an image for selecting a content item to be reproduced
US20220382440A1 (en) User interfaces for managing media styles
WO2022206696A1 (zh) 拍摄界面显示方法、装置、电子设备及介质
WO2022166944A1 (zh) 拍照方法、装置、电子设备及介质
WO2022012657A1 (zh) 图像编辑方法、装置和电子设备
WO2022116885A1 (zh) 拍照方法、装置、电子设备及存储介质
US11996123B2 (en) Method for synthesizing videos and electronic device therefor
CN103442170A (zh) 一种拍摄方法及移动终端
CN105684420A (zh) 图像处理装置以及图像处理程序
US20230345113A1 (en) Display control method and apparatus, electronic device, and medium
JP2019508784A (ja) マルチメディアファイル管理方法、電子デバイス、およびグラフィカルユーザインタフェース
CN112672061B (zh) 视频拍摄方法、装置、电子设备及介质
CN113794835B (zh) 视频录制方法、装置及电子设备
CN112911147B (zh) 显示控制方法、显示控制装置及电子设备
WO2023134583A1 (zh) 视频录制方法、装置及电子设备
WO2023088183A1 (zh) 图像显示方法、装置及电子设备
WO2023083089A1 (zh) 拍摄控件显示方法, 装置, 电子设备及介质
CN112839190A (zh) 虚拟图像与现实场景同步视频录制或直播的方法
CN113905175A (zh) 视频生成方法、装置、电子设备及可读存储介质
WO2022247768A1 (zh) 图像处理方法和电子设备
WO2022135261A1 (zh) 图像显示方法、装置和电子设备
WO2023226695A9 (zh) 录像方法、装置及存储介质
WO2023226699A9 (zh) 录像方法、装置及存储介质
WO2023226694A9 (zh) 录像方法、装置及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810493

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22810493

Country of ref document: EP

Kind code of ref document: A1