WO2022247766A1 - 图像处理方法、装置及电子设备 - Google Patents

图像处理方法、装置及电子设备 Download PDF

Info

Publication number
WO2022247766A1
WO2022247766A1 (PCT/CN2022/094353)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
objects
reference image
input
Prior art date
Application number
PCT/CN2022/094353
Other languages
English (en)
French (fr)
Inventor
浦帅
Original Assignee
维沃移动通信(杭州)有限公司 (Vivo Mobile Communication (Hangzhou) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信(杭州)有限公司 (Vivo Mobile Communication (Hangzhou) Co., Ltd.)
Publication of WO2022247766A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 - Mixing

Definitions

  • The present application belongs to the technical field of image processing, and in particular relates to an image processing method, an image processing apparatus, and an electronic device.
  • In the prior art, a plurality of images containing the multiple objects is typically obtained by repeated shooting, and one of them is then manually selected as the final image. Because there is no guarantee that every object in the final image was captured in its best state, the resulting image may render poorly.
  • The purpose of the embodiments of the present application is to provide an image processing method, apparatus, and electronic device that can solve the prior-art problem of poor rendering of images containing multiple objects.
  • In a first aspect, an embodiment of the present application provides an image processing method, the method comprising:
  • taking multiple objects as shooting subjects, acquiring multiple images captured at different times, and determining, from the multiple images, a reference image and multiple object images respectively corresponding to each object; wherein the reference image includes the multiple objects, and each object image includes one corresponding object;
  • receiving a user's first input for a first object in the reference image; wherein the first object is any one of the multiple objects;
  • in response to the first input, replacing the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
  • In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus including:
  • a determining module, configured to take multiple objects as shooting subjects, acquire multiple images captured at different times, and determine, from the multiple images, a reference image and multiple object images respectively corresponding to each object; wherein the reference image includes the multiple objects, and each object image includes one corresponding object;
  • a first receiving module configured to receive a user's first input for a first object in the reference image; wherein, the first object is any one of the plurality of objects;
  • a generating module configured to, in response to the first input, replace an object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
  • In a third aspect, an embodiment of the present application provides an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or instruction is stored, where the program or instruction, when executed by a processor, implements the steps of the method described in the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instruction to implement the steps of the method described in the first aspect.
  • In the embodiments of the present application, based on multiple images captured at different times, the object image corresponding to the first object in the reference image is replaced with an object image captured when the first object was in its best state. In this way, every object in the reference image can present its best captured appearance, and the rendering of the image is therefore improved.
  • Fig. 1 is the first flowchart of an image processing method according to an example embodiment
  • Fig. 2 is a schematic diagram of a group-photo processing page according to an example embodiment
  • Fig. 3 is the second flowchart of an image processing method according to an example embodiment
  • Fig. 4 is a schematic diagram showing a sliding preview window according to an example embodiment
  • Fig. 5 is the third flowchart of an image processing method according to an example embodiment
  • Fig. 6 is a schematic diagram of a feature label screening window shown according to an example embodiment
  • Fig. 7 is a fourth flowchart of an image processing method according to an example embodiment
  • Fig. 8 is a schematic diagram showing a collaborative editing switch button according to an example embodiment
  • Fig. 9 is a schematic diagram of a collaborative editing page according to an example embodiment
  • Fig. 10 is a fifth flowchart of an image processing method according to an example embodiment
  • Fig. 11 is a schematic diagram of an image processing application scenario according to an exemplary embodiment
  • Fig. 12 is a structural block diagram of an image processing device according to an exemplary embodiment
  • Fig. 13 is a structural block diagram of an electronic device according to an exemplary embodiment
  • FIG. 14 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The image processing method provided by this application can be applied to scenes in which an image containing multiple objects is processed. The electronic device used for image processing may be, for example, a mobile phone, tablet, or camera, i.e., a device with both image capture and image processing functions.
  • The image processing method provided in the embodiments of the present application may be executed by an image processing apparatus, or by a control module in the image processing apparatus for executing the image processing method.
  • In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the apparatus provided herein.
  • Fig. 1 is a flowchart of an image processing method according to an example embodiment.
  • the image processing method may include steps 110 to 130 , which are specifically as follows.
  • Step 110: taking multiple objects as shooting subjects, acquire multiple images captured at different times, and determine, from the multiple images, a reference image and multiple object images respectively corresponding to each object; the reference image includes the multiple objects, and each object image includes one corresponding object.
  • the reference image may be an image including all objects, and the reference image may be used as a basis for object replacement.
  • the object may include a person, an animal or an object.
  • the object image may be an image of a person, an animal, or an object, one object image may only include one corresponding object, and different object images corresponding to the same object may have different poses of the object.
  • a manner of acquiring multiple images may be, for example, real-time shooting and acquisition through a camera of the electronic device, or may be directly acquired from an image database of the electronic device.
  • The reference image may be the image corresponding to a predetermined shooting time, or an image the user arbitrarily selects from the multiple images captured at different times, where the predetermined time may be the earliest of the shooting times corresponding to the multiple images.
  • the multiple object images respectively corresponding to each object may be obtained by shooting each object independently, or may be obtained by cutting the images after shooting the multiple objects as a whole, which is not limited here.
  • In an optional implementation, step 110 may specifically include:
  • acquiring a captured reference image containing the multiple objects;
  • taking each of the multiple objects as a shooting subject, acquiring multiple object images respectively corresponding to each object, captured at different times.
  • In this embodiment, the reference image may be an image containing the multiple objects that is captured first, after which each of the multiple objects is captured separately: the electronic device recognizes each of the multiple objects while shooting, and then captures each object independently at different times to obtain the multiple object images corresponding to each object.
  • In a specific example, when a user shoots a group photo with a mobile phone, the phone camera can first take a group photo, then recognize the different portraits in the frame and snap each of them independently and continuously, obtaining multiple portraits for each person; these portraits are later processed individually on the basis of the group photo taken first.
  • In this way, the electronic device first recognizes each of the multiple objects and then obtains images captured independently for each object at different times, so the object images of each object at different times can be obtained without further image processing operations, simplifying the processing flow.
  • In another optional implementation, step 110 may specifically include:
  • taking the multiple objects as shooting subjects, acquiring multiple images containing the multiple objects captured at different times;
  • receiving a fifth input from the user on a target image among the multiple images;
  • in response to the fifth input, determining the target image as the reference image;
  • segmenting each of the multiple images by object to obtain multiple object images respectively corresponding to each object.
  • In this embodiment, the multiple objects are shot continuously as a whole, and each of the resulting images captured at different times may contain all of the objects.
  • The fifth input may be a selection input on the target image; accordingly, the target image may be any image the user selects from the multiple images captured at different times, and it serves as the reference image used when object images are replaced.
  • Illustratively, after shooting, the electronic device can automatically identify the multiple objects in each image and use a preset segmentation algorithm to segment each image by object, obtaining the multiple object images corresponding to each object; the segmented object images and the original images can all be stored in the image library.
  • In a specific example, when a user shoots a group photo with a mobile phone, the phone camera continuously captures multiple images containing all of the people; each image is automatically segmented into separate portraits by person, and the segmented portraits are then processed together for the group photo.
  • In this way, by segmenting each image containing all objects at the different shooting moments, multiple object images corresponding to each object are obtained. Because every captured image contains all objects, the user can choose an image shot at another moment, under different shooting conditions, as the reference image used for replacement, which improves the user experience. (A minimal sketch of this segmentation step follows below.)
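  • The following is an illustrative sketch only, not code from the patent: it builds the per-object image library by cropping each detected object out of every frame. `detect_objects` is an assumed placeholder for any instance detector that returns stable (object_id, bounding box) pairs, and numpy-style image arrays are assumed.

```python
# Hedged sketch: build {object_id: [ObjectImage, ...]} from a burst of frames.
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

@dataclass
class ObjectImage:
    object_id: int                   # stable identity of the person/animal/thing
    timestamp: float                 # when the source frame was captured
    crop: Any                        # pixel region containing only this object
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) inside the source frame

def segment_frames(frames) -> Dict[int, List[ObjectImage]]:
    """frames: list of (timestamp, image) pairs, each image containing all objects.
    Returns the per-object library, each list sorted by capture time."""
    library: Dict[int, List[ObjectImage]] = {}
    for timestamp, image in frames:
        for object_id, (x, y, w, h) in detect_objects(image):  # assumed detector
            crop = image[y:y + h, x:x + w].copy()  # numpy-style slicing assumed
            library.setdefault(object_id, []).append(
                ObjectImage(object_id, timestamp, crop, (x, y, w, h)))
    for images in library.values():
        images.sort(key=lambda o: o.timestamp)
    return library
```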
  • Step 120 receiving a first input from a user for a first object in the reference image; wherein, the first object is any object in a plurality of objects.
  • The first object may be any object the user selects from the multiple objects in the reference image, and the first input may be a switching input for the object image corresponding to the first object.
  • The switching may, for example, be performed by directly tapping the region corresponding to the first object to cycle through object images captured at other times, or by first triggering a preview of the first object's images captured at other times and then tapping an object image to select it and switch to it.
  • In a specific example, after shooting is complete the user can select any individual portrait 22 in the image processing interface 20, based on the reference image 21, for separate processing. After the individual portrait 22 is tapped, it is highlighted; the person's portrait 22 can then be switched, replacing the portrait 22 in the reference image 21 with a portrait of the same person captured at another time.
  • Step 130 in response to the first input, replace the object image corresponding to the first object in the reference image with other object images corresponding to the first object to generate a composite image.
  • The other object image corresponding to the first object may be a satisfactory object image selected by the user, or any object image corresponding to the first object; once the replacement object image corresponding to the first object is confirmed, the composite image can be generated.
  • In this way, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state, so every object in the reference image can present its best captured appearance and the rendering of the image is improved. (A compositing sketch follows below.)
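  • The replacement step can be sketched as below, reusing the `ObjectImage` type from the previous sketch. This is a hedged simplification: hard-pasting a resized bounding box stands in for real mask-based blending, and OpenCV's `cv2.resize` is assumed to be available.

```python
# Hedged sketch: paste the chosen moment's crop over the object's region in the
# reference image; production code would mask the object and blend the edges.
import cv2  # assumes OpenCV is installed

def replace_object(reference, current, replacement):
    """reference: full reference image; current: the object's ObjectImage in the
    reference; replacement: the ObjectImage chosen from another moment."""
    composite = reference.copy()
    x, y, w, h = current.bbox
    patch = cv2.resize(replacement.crop, (w, h))  # fit crop to the target region
    composite[y:y + h, x:x + w] = patch
    return composite
```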
  • step 130 may specifically include steps 1301-1303, specifically as follows:
  • Step 1301 displaying at least one first object image; wherein, the first object image is an object image corresponding to the first object.
  • the first object image may be all object images corresponding to the first object, or may be one or more object images that need to be displayed.
  • an object image corresponding to the first object may be displayed to the user, so that the user can select and switch based on the displayed image.
  • The manner of displaying the at least one first object image includes, but is not limited to, popping up a preset preview interface and displaying the images tiled within it, or displaying a sliding preview window corresponding to the first object in which a set number of first object images are shown in turn by sliding; both the preset preview interface and the sliding preview window can be displayed in the region of the reference image corresponding to the first object.
  • In an optional implementation, when there are multiple first object images, step 1301 may specifically include:
  • adding the multiple first object images to a sliding preview window; wherein the sliding preview window is used to display a set number of first object images;
  • displaying the sliding preview window in the region of the reference image corresponding to the first object.
  • Here, the sliding preview window may be displayed in the region of the reference image corresponding to the first object and is used to preview the set number of first object images currently shown, where the set number may be the number of images the window can hold.
  • Illustratively, object images of the first object from different times can be shown in the sliding preview window, and the user can switch among object images captured at different times by sliding up and down, or of course left and right, which is not limited here.
  • In a specific example, the user can select the portrait 410 corresponding to a target person, based on the reference image 41 in the image processing interface 40, for separate processing; the portrait 410 is then highlighted while the other portraits are shown blurred. The user can slide up and down in the sliding preview window 42 to preview portraits of the target person from other times, such as the portrait 411.
  • In this way, the sliding preview window conveniently displays the first object images, making it easy for the user to preview object images from different times.
  • In an optional implementation, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, step 1301 may further include:
  • receiving a third input from the user based on the sliding preview window;
  • in response to the third input, updating the first object image displayed in the sliding preview window.
  • The third input may be a sliding input on the sliding preview window. Specifically, the first object images may be arranged in shooting-time order, and sliding up or down displays the previous or next object image relative to the current one; sliding left and right is equally possible and is not limited here.
  • In this way, the user can preview all of the first object images through the sliding preview window, making it easier to choose among them and to pick satisfactory object images of the first object from different times. (See the sketch of the window state below.)
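  • A minimal sketch of the sliding-preview state described above, assuming the first object images arrive time-ordered; the class name and window size are illustrative only:

```python
# Hedged sketch: a fixed-size window sliding over one object's time-ordered images.
class SlidingPreview:
    def __init__(self, object_images, window_size=3):
        self.images = object_images     # time-ordered ObjectImage list
        self.window_size = window_size  # the "set number" the window can hold
        self.start = 0

    def visible(self):
        return self.images[self.start:self.start + self.window_size]

    def slide(self, step):
        """step=+1 slides toward later shots, step=-1 toward earlier ones."""
        limit = max(0, len(self.images) - self.window_size)
        self.start = min(max(self.start + step, 0), limit)
        return self.visible()
```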
  • Step 1302 receiving a second input from the user on the target object image in the first object image.
  • The second input may be a selection input on the target object image, and the target object image may be the image the user finds most satisfactory among the multiple object images corresponding to the first object.
  • Step 1303 in response to the second input, replace the object image corresponding to the first object in the reference image with the target object image to generate a composite image.
  • In this way, by using the sliding preview window to preview the multiple object images of the first object captured at different times, the user can conveniently preview object images taken at other moments and select among them.
  • In addition, before the at least one first object image is displayed, the first object images to be displayed may first be screened in a targeted way.
  • Based on this, in a possible embodiment, as shown in Fig. 5, before step 1301, step 130 may further include steps 1304 to 1307, as follows:
  • Step 1304 based on a plurality of object images corresponding to the first object, obtain a plurality of feature labels corresponding to the first object.
  • the feature tag may be, for example, an expression tag and/or an action tag of the first object, where the expression tag may include, for example, smiling, laughing, and pouting, and the action tag may include, for example, jumping, waving, and clapping.
  • In an optional implementation, obtaining the multiple feature labels corresponding to the first object based on its multiple object images in step 1304 may specifically include:
  • extracting, according to a preset feature type, feature information corresponding to the preset feature type from the multiple object images corresponding to the first object;
  • generating the multiple feature labels corresponding to the first object according to the feature information.
  • Illustratively, the preset feature type may be an expression type or an action type of the object, and the feature information may be image feature data obtained according to that type; specifically, artificial intelligence (AI) recognition can be used to obtain the feature information corresponding to each of the object images of the first object.
  • According to the feature information of each object image, after an aggregation process, the multiple feature labels corresponding to the first object can be generated.
  • One object image may correspond to one or more feature labels; for example, if the two pieces of feature information "smiling" and "waving" are extracted from a target portrait, the portrait can be associated with the feature labels "smiling" and "waving".
  • In a specific example, if the multiple portraits of the target person include smiling and laughing expressions and waving and jumping actions, expression and action feature data such as smiling and jumping can be extracted from those portraits, and smile and jump labels can be generated from that feature data. In this way, extracting feature information by preset feature type and generating the corresponding labels makes it easy for the user to screen the displayed images in a targeted way. (A label-generation sketch follows below.)
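  • A short sketch of this label-generation step, where `classify_features` is an assumed stand-in for the AI recognition model (the patent names no specific model); it is assumed to return expression/action names such as "smile" or "jump" for one crop:

```python
# Hedged sketch: derive one tag set per image, then the label set to offer in the UI.
def tag_object_images(object_images):
    """Return one tag set per object image, as a parallel list."""
    return [set(classify_features(img.crop)) for img in object_images]

def labels_to_display(tag_sets):
    """Union of all tags seen for this object - the labels offered to the user;
    a label absent from every image would be hidden or greyed out instead."""
    return set().union(*tag_sets) if tag_sets else set()
```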
  • Step 1305 displaying multiple feature labels corresponding to the first object.
  • The multiple feature labels corresponding to the first object can be displayed, tiled, in the region of the reference image corresponding to the first object; features not present in the first object's images are either not displayed in this region or are greyed out, i.e., they cannot be selected by tapping.
  • In a specific example, if the user taps the filter button 23 in the image processing interface 20 shown in Fig. 2, the feature-label display interface shown in Fig. 6 opens, where the target person's multiple feature labels are tiled in the region 61 of the reference image corresponding to that person; the region 61 may include an expression area 610 and an action area 620, and the user can select feature labels within this region 61.
  • Step 1306 receiving a fourth input from the user on a target feature tag in the plurality of feature tags.
  • Here, the target feature label can be any label the user selects from the multiple feature labels and is used to filter the object images to be displayed.
  • One or several target feature labels may be selected: the user can pick a single label to screen for images satisfying it, or pick several labels simultaneously to screen for images satisfying all of them.
  • The fourth input may be a selection input on the target feature label.
  • Step 1307 in response to the fourth input, determine at least one first object image associated with the target feature label from the plurality of object images corresponding to the first object.
  • Here, at least one first object image associated with the target feature label can be automatically identified from the target feature label, and the resulting at least one first object image is displayed, narrowing the selectable range of the object images corresponding to the first object.
  • In a specific example, the user can tap labels in the expression area 610 and the action area 620 to filter portraits; tapping the smile label 611 automatically identifies the target person's smile-associated portraits from different times, so that a satisfactory portrait associated with the smile label 611 can be chosen.
  • Of course, the user can also select the smile label 611 and the jump label 621 at the same time, so as to pick a satisfactory portrait associated with both labels.
  • In addition, after filtering, the user can slide to the position of the first portrait satisfying the target feature conditions and tap the confirm button 62 to select it and return to the reference-image processing interface, or tap the return button 63 to go back directly without changing the selection.
  • In this way, at least one first object image associated with the target feature label can be screened out, narrowing the user's selection range for the first object images and letting the user quickly find the desired object image. (A screening sketch follows below.)
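  • The screening itself reduces to a containment test over the per-image tag sets from the previous sketch; a hedged sketch:

```python
# Hedged sketch: an image qualifies only if it carries every selected label,
# matching the "select several labels simultaneously" behaviour described above.
def filter_by_tags(object_images, tag_sets, selected_labels):
    chosen = set(selected_labels)
    return [img for img, tags in zip(object_images, tag_sets) if chosen <= tags]
```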
  • Besides the image capturer completing the image processing alone, as in the embodiments above, the processing can also be completed cooperatively by multiple devices. Based on this, in a possible embodiment, as shown in Fig. 7, after step 110 the image processing method may further include steps 140 to 160, as follows:
  • Step 140 establish a connection with the target terminal.
  • the target terminal may be other terminal devices participating in image processing, and the number of terminals participating in the image processing process at the same time may be multiple.
  • Specifically, ways of establishing a connection with the target terminal include, but are not limited to, the target terminal's user connecting to the terminal device holding the image by, for example, shaking the device, Bluetooth, or entering a password, thereby entering the cooperative image-processing mode.
  • In a specific example, after the photo is taken the user can choose to process it cooperatively; after the collaboration switch 81 in the lower-left corner is turned on, devices within a certain distance enter the camera group-photo compositing mode, and target terminal users can join the group-photo processing by shaking their phones, via Bluetooth, or by entering a password.
  • Step 150 receiving the first instruction sent by the target terminal.
  • the first instruction may be a switching instruction for an object image corresponding to the second object in the reference image.
  • the second object may be any one of objects other than the first object among the plurality of objects.
  • Illustratively, after completing the selection of the object image for the second object, the target terminal user may send the first instruction to the device that captured the image, i.e., the local device, so that the local device switches the object image corresponding to the second object in the reference image to the one selected by the target terminal user.
  • the selection process of the target terminal user for the plurality of object images corresponding to the second object is similar to the above-mentioned selection process for the plurality of object images corresponding to the first object, and will not be repeated here.
  • Step 160 in response to the first instruction, replace the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
  • After receiving the first instruction sent by the target terminal, the local device may replace the object image corresponding to the second object in the reference image.
  • In a specific example, after the photo is taken the user can choose cooperative processing, establish a connection with the target terminal user within a certain distance, and enter the mode of jointly processing the group photo; once the target terminal user finishes selecting, they can send the first instruction, and the local device replaces the object image corresponding to the second object in the reference image with the one the target terminal user selected.
  • In this way, cooperative processing over an established connection shortens the time spent processing each person independently, reduces the workload, and improves the efficiency of image processing.
  • In an optional implementation, before step 150, the method may further include:
  • receiving a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for the second object in the reference image;
  • in response to the second instruction, setting the second object in the reference image to a user-input-prohibited state.
  • After receiving the second instruction sent by the target terminal, the local device may set the second object in the reference image to the user-input-prohibited state, which may be presented as greyed out. That is, at any one time only one terminal can operate on a given object in the image; if an object is being edited, it is greyed out and cannot be tapped for processing.
  • In a specific example, in cooperative processing mode, the portrait 91 in the lower-right corner of the image is being operated on by another terminal and is currently being edited; the portrait 91 is therefore greyed out and cannot be tapped for processing. In this way, cooperative processing combined with the user-input-prohibited state, under which only one terminal can operate on a given object at a time, shortens per-object processing time and enables personalized editing, letting every object in the reference image reach its best captured appearance more efficiently. (A lock sketch follows below.)
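  • A sketch of the per-object edit lock this implies, under the assumption that the local device arbitrates ownership; the names and structure are illustrative only:

```python
# Hedged sketch: while one terminal edits an object, other terminals see it
# greyed out; the local device tracks which terminal owns which object.
import threading

class ObjectLocks:
    def __init__(self):
        self._owners = {}                # object_id -> editing terminal_id
        self._mutex = threading.Lock()

    def try_acquire(self, object_id, terminal_id):
        """Handle a 'start editing this object' (second) instruction."""
        with self._mutex:
            owner = self._owners.get(object_id, terminal_id)
            if owner != terminal_id:
                return False             # locked elsewhere: grey the object out
            self._owners[object_id] = terminal_id
            return True

    def release(self, object_id, terminal_id):
        with self._mutex:
            if self._owners.get(object_id) == terminal_id:
                del self._owners[object_id]
```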
  • To better describe the complete solution, a concrete example based on the above implementations is given; as shown in Fig. 10, the image processing method may include steps 1001 to 1010, explained in detail below.
  • Step 1001 click on the camera.
  • In a specific example, after the user taps the camera, the page shown in Fig. 11 is displayed, containing an icon 92 for compositing a group photo.
  • Step 1002: tap to start the group-photo compositing mode.
  • In a specific example, the user taps the group-photo compositing icon to start the mode.
  • Step 1003: start shooting, continuing for x seconds.
  • In a specific example, shooting continues for x seconds.
  • Step 1004 end shooting.
  • pressing the end shooting button ends the shooting.
  • Step 1005: choose whether to composite images cooperatively; if cooperative compositing is chosen, execute step 1006; otherwise, execute step 1007.
  • In a specific example, there are two paths: if the user chooses cooperative compositing, step 1006 is executed; if not, step 1007 is executed.
  • Step 1006 other terminals join in the combined image collaboration.
  • users can participate in photo collaboration by shaking.
  • Step 1007: enter the image synthesis page.
  • In a specific example, the user enters the image synthesis page for subsequent filtering after finishing shooting or after joining the group-photo collaboration.
  • Step 1008 filter images according to expressions and actions.
  • the user can filter multiple portraits corresponding to the target person by selecting the corresponding expression and action tags to obtain one or more portraits satisfying the screening conditions.
  • Step 1009 independently select images according to personnel.
  • In a specific example, multiple portraits may be selected person by person.
  • Step 1010 generate a combined image.
  • In a specific example, after independently selecting, person by person, and replacing each portrait with its best-state image, the user can tap the generate button to produce the image.
  • In this way, based on multiple images captured at different times, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state, so every object in the reference image can present its best captured appearance and the rendering of the image is improved. (An end-to-end sketch of this flow follows below.)
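  • Tying the sketches together, a hedged end-to-end sketch of steps 1001-1010, reusing the helpers above. It assumes every object is detected in every frame so that each object's image list aligns with frame order, which is a simplification of the described flow:

```python
# Hedged sketch: run the burst through segmentation, then apply each user choice.
def compose_group_photo(frames, reference_index, selections):
    """frames: time-ordered (timestamp, image) burst;
    reference_index: index of the frame the user chose as the reference image;
    selections: {object_id: index of the preferred moment for that object}."""
    library = segment_frames(frames)
    composite = frames[reference_index][1].copy()
    for object_id, choice in selections.items():
        images = library[object_id]
        composite = replace_object(
            composite,
            images[reference_index],  # the object's region in the reference
            images[choice])           # the user's preferred moment
    return composite
```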
  • the present application also provides an image processing device.
  • the image processing apparatus provided by the embodiment of the present application will be described in detail below with reference to FIG. 12 .
  • Fig. 12 is a structural block diagram of an image processing device according to an exemplary embodiment.
  • the image processing device 1200 may include:
  • the determining module 1201 is configured to take multiple objects as shooting objects, acquire multiple images captured at different times, and determine a reference image and multiple object images respectively corresponding to each object from the multiple images; wherein, the reference image includes a plurality of objects, and the object image includes a corresponding object;
  • the first receiving module 1202 is configured to receive a user's first input for a first object in the reference image; wherein, the first object is any object in a plurality of objects;
  • the generation module 1203 is configured to, in response to the first input, replace the object image corresponding to the first object in the reference image with other object images corresponding to the first object, to generate a composite image.
  • the generating module 1203 may specifically include:
  • the first display submodule is configured to display at least one first object image; wherein, the first object image is an object image corresponding to the first object;
  • the first receiving submodule is configured to receive a second input from the user on the target object image in the first object image
  • the first generation sub-module is configured to replace the object image corresponding to the first object in the reference image with the target object image in response to the second input, and generate a composite image.
  • the first display submodule includes:
  • the adding unit is used to add a plurality of first object images to the sliding preview window when the number of the first object images is multiple; wherein, the sliding preview window is used to display a set number of first object images;
  • the display unit is configured to display a sliding preview window in the region corresponding to the first object in the reference image.
  • In a possible embodiment, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, the first display submodule further includes:
  • a receiving unit configured to receive a third input from the user based on the sliding preview window
  • An updating unit configured to update the first object image displayed in the sliding preview window in response to the third input.
  • the generation module 1203 also includes:
  • the first acquisition submodule is used to acquire a plurality of feature labels corresponding to the first object based on the plurality of object images corresponding to the first object before displaying at least one first object image;
  • the second display submodule is used to display a plurality of feature labels corresponding to the first object
  • the second receiving submodule is used to receive the user's fourth input on the target feature tag in the plurality of feature tags
  • the second generation sub-module is configured to determine at least one first object image associated with the target feature label from the plurality of object images corresponding to the first object in response to the fourth input.
  • the first acquisition submodule includes:
  • An extraction unit configured to extract feature information corresponding to a preset feature type from a plurality of object images corresponding to the first object according to a preset feature type
  • a generating unit configured to generate a plurality of feature labels corresponding to the first object according to feature information.
  • the image processing device also includes:
  • a connection module, configured to establish a connection with the target terminal after the multiple objects are taken as shooting subjects, the multiple images captured at different times are acquired, and the reference image and the multiple object images respectively corresponding to each object are determined from the multiple images;
  • the second receiving module is configured to receive the first instruction sent by the target terminal; wherein, the first instruction is a switching instruction for the object image corresponding to the second object in the reference image;
  • a replacement module configured to replace the object image corresponding to the second object in the reference image with other object images corresponding to the second object in response to the first instruction.
  • the image processing device also includes:
  • the third receiving module is configured to receive a second instruction sent by the target terminal before receiving the first instruction sent by the target terminal; wherein, the second instruction is a processing instruction for the second object in the reference image;
  • a setting module configured to set the second object in the reference image to a state of prohibiting user input in response to the second instruction.
  • the determining module 1201 includes:
  • the second acquisition sub-module is used to acquire the captured reference image containing multiple objects
  • the third acquisition sub-module is configured to take each of the plurality of objects as a shooting object, and acquire a plurality of object images respectively corresponding to each object captured at different times.
  • the determining module 1201 includes:
  • the fourth acquisition sub-module is used to take multiple objects as shooting objects, and acquire multiple images including multiple objects captured at different times;
  • the third receiving submodule is used to receive the fifth input from the user on the target image in the plurality of images
  • a determining submodule configured to determine the target image as the reference image in response to the fifth input
  • the segmentation sub-module is used to segment each of the plurality of images according to the object to obtain a plurality of object images respectively corresponding to each object.
  • In this way, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state, so every object in the reference image can present its best captured appearance and the rendering of the image is improved.
  • the image processing apparatus in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • Illustratively, the mobile electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, in-vehicle electronic device, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and the non-mobile electronic device may be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the image processing device in the embodiment of the present application may be a device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • the image processing device provided in the embodiment of the present application can realize various processes realized by the method embodiments in Fig. 1 to Fig. 11 , and to avoid repetition, details are not repeated here.
  • As shown in Fig. 13, an embodiment of the present application further provides an electronic device 1300, including a processor 1301, a memory 1302, and a program or instruction stored in the memory 1302 and executable on the processor 1301. When the program or instruction is executed by the processor 1301, each process of the image processing method embodiments above is realized with the same technical effect; to avoid repetition, the details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 14 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 1400 includes, but is not limited to: a radio frequency unit 1401, a network module 1402, an audio output unit 1403, an input unit 1404, a sensor 1405, a display unit 1406, a user input unit 1407, an interface unit 1408, a memory 1409, a processor 1410, and other components.
  • Those skilled in the art will understand that the electronic device 1400 may also include a power supply (such as a battery) for supplying power to the components; the power supply can be logically connected to the processor 1410 through a power management system, which then manages charging, discharging, power consumption, and other functions.
  • The structure of the electronic device shown in Fig. 14 does not limit the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, and the details are not repeated here.
  • the input unit 1404 is configured to take a plurality of objects as shooting objects, acquire a plurality of images captured at different times, and determine a reference image and a plurality of object images respectively corresponding to each object from the plurality of images.
  • the user input unit 1407 is configured to receive a user's first input on the first object in the reference image.
  • the processor 1410 is configured to, in response to the first input, replace an object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
  • the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object is in the best state.
  • the present application The embodiment can make each object in the reference image present the photographing effect in the best state, and therefore, the rendering effect of the image can be improved.
  • the display unit 1406 is further configured to display at least one first object image.
  • the user input unit 1407 is further configured to receive a second input from the user on the target object image in the first object image.
  • the processor 1410 is further configured to, in response to the second input, replace the object image corresponding to the first object in the reference image with the target object image to generate a composite image.
  • the processor 1410 is further configured to add multiple first object images to the sliding preview window when there are multiple first object images.
  • the display unit 1406 is further configured to display a sliding preview window in a region corresponding to the first object in the reference image when there are multiple first object images.
  • the user input unit 1407 is further configured to receive a third input from the user based on the sliding preview window after the sliding preview window is displayed in the region corresponding to the first object in the reference image.
  • The processor 1410 is further configured to, after the sliding preview window is displayed in the area corresponding to the first object in the reference image, update the first object image displayed in the sliding preview window in response to a third input.
  • the input unit 1404 is further configured to acquire multiple feature labels corresponding to the first object based on multiple object images corresponding to the first object before displaying at least one first object image.
  • the display unit 1406 is further configured to display multiple feature labels corresponding to the first object before displaying at least one first object image.
  • the user input unit 1407 is further configured to receive a fourth user input on a target feature tag in the plurality of feature tags before displaying at least one first object image.
  • The processor 1410 is further configured to, before the at least one first object image is displayed, determine, in response to a fourth input, at least one first object image associated with the target feature label from the multiple object images corresponding to the first object.
  • the processor 1410 is further configured to, according to the preset feature type, extract feature information corresponding to the preset feature type from multiple object images corresponding to the first object.
  • the processor 1410 is further configured to generate a plurality of feature labels corresponding to the first object according to feature information.
  • Optionally, the network module 1402 is further configured to establish a connection with the target terminal after the multiple objects are taken as shooting subjects, the multiple images captured at different times are acquired, and the reference image and the multiple object images respectively corresponding to each object are determined from the multiple images.
  • The user input unit 1407 is further configured to receive the first instruction sent by the target terminal after the multiple objects are taken as shooting subjects, the multiple images captured at different times are acquired, and the reference image and the multiple object images respectively corresponding to each object are determined from the multiple images.
  • The processor 1410 is further configured to, after the multiple objects are taken as shooting subjects, the multiple images captured at different times are acquired, and the reference image and the multiple object images respectively corresponding to each object are determined from the multiple images, replace the object image corresponding to the second object in the reference image with another object image corresponding to the second object in response to the first instruction.
  • the user input unit 1407 is further configured to receive a second instruction sent by the target terminal before receiving the first instruction sent by the target terminal.
  • the processor 1410 is further configured to, before receiving the first instruction sent by the target terminal, set the second object in the reference image to a state of prohibiting user input in response to the second instruction.
  • the input unit 1404 is further configured to acquire a captured reference image that includes multiple objects.
  • the input unit 1404 is further configured to take each of the multiple objects as a photographing object, and acquire multiple object images respectively corresponding to each object captured at different times.
  • the input unit 1404 is further configured to take multiple objects as shooting objects, and acquire multiple images including multiple objects captured at different times.
  • the user input unit 1407 is further configured to receive a fifth input from the user on the target image in the multiple images.
  • the processor 1410 is further configured to determine the target image as the reference image in response to the fifth input.
  • the processor 1410 is further configured to segment each of the multiple images according to the object to obtain multiple object images respectively corresponding to each object.
  • In this way, images are screened in a targeted manner by preset feature type, and a satisfactory first object image associated with the target feature label is then selected, achieving personalized editing of the group photo so that every object in the reference image can reach its best captured appearance more efficiently.
  • It should be understood that the input unit 1404 may include a graphics processing unit (GPU) 14041 and a microphone 14042; the graphics processor 14041 processes image data of still pictures or video obtained by an image capture device (such as a camera).
  • the display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • The user input unit 1407 includes a touch panel 14071 and other input devices 14072; the touch panel 14071 is also called a touch screen.
  • the touch panel 14071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 14072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • Memory 1409 can be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • the processor 1410 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, and the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 1410 .
  • An embodiment of the present application also provides a readable storage medium storing a program or instruction; when the program or instruction is executed by a processor, each process of the image processing method embodiments above is realized with the same technical effect, which is not repeated here to avoid repetition.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • An embodiment of the present application further provides a chip including a processor and a communication interface coupled to the processor; the processor is configured to run a program or instruction to realize each process of the image processing method embodiments above with the same technical effect, which is not repeated here to avoid repetition.
  • It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
  • The terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • The scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed; functions may also be performed substantially simultaneously or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses an image processing method, an image processing apparatus, and an electronic device, belonging to the technical field of image processing. The image processing method includes: taking multiple objects as shooting subjects, acquiring multiple images captured at different times, and determining, from the multiple images, a reference image and multiple object images respectively corresponding to each object, wherein the reference image includes the multiple objects and each object image includes one corresponding object; receiving a user's first input for a first object in the reference image, wherein the first object is any one of the multiple objects; and, in response to the first input, replacing the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.

Description

Image processing method, apparatus, and electronic device
Cross-reference to related application
This application claims priority to Chinese patent application No. 202110595299.X, entitled "图像处理方法、装置及电子设备" (Image processing method, apparatus, and electronic device), filed on May 28, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present application belongs to the technical field of image processing, and in particular relates to an image processing method, an image processing apparatus, and an electronic device.
Background
With the popularity of the photographing function of electronic devices, people increasingly use electronic devices to take photos. Especially when photographing a group of multiple subjects, how to obtain an image in which every subject appears at his or her best has become an urgent problem in image processing.
In the prior art, a plurality of images containing the multiple objects is typically obtained by repeated shooting, and one of them is then manually selected as the final image. Because there is no guarantee that every object in the final image was captured in its best state, the resulting image may render poorly.
Summary
The purpose of the embodiments of the present application is to provide an image processing method, apparatus, and electronic device that can solve the prior-art problem of poor rendering of images containing multiple objects.
In a first aspect, an embodiment of the present application provides an image processing method, the method comprising:
taking multiple objects as shooting subjects, acquiring multiple images captured at different times, and determining, from the multiple images, a reference image and multiple object images respectively corresponding to each object; wherein the reference image includes the multiple objects, and each object image includes one corresponding object;
receiving a user's first input for a first object in the reference image; wherein the first object is any one of the multiple objects;
in response to the first input, replacing the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus comprising:
a determining module, configured to take multiple objects as shooting subjects, acquire multiple images captured at different times, and determine, from the multiple images, a reference image and multiple object images respectively corresponding to each object; wherein the reference image includes the multiple objects, and each object image includes one corresponding object;
a first receiving module, configured to receive a user's first input for a first object in the reference image; wherein the first object is any one of the multiple objects;
a generating module, configured to, in response to the first input, replace the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or instruction is stored, where the program or instruction, when executed by a processor, implements the steps of the method described in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip comprising a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instruction to implement the steps of the method described in the first aspect.
In the embodiments of the present application, based on multiple images captured at different times, the object image corresponding to the first object in the reference image is replaced with an object image captured when the first object was in its best state. In this way, every object in the reference image can present its best captured appearance, and the rendering of the image is therefore improved.
Brief description of the drawings
Fig. 1 is the first flowchart of an image processing method according to an example embodiment;
Fig. 2 is a schematic diagram of a group-photo processing page according to an example embodiment;
Fig. 3 is the second flowchart of an image processing method according to an example embodiment;
Fig. 4 is a schematic diagram of a sliding preview window according to an example embodiment;
Fig. 5 is the third flowchart of an image processing method according to an example embodiment;
Fig. 6 is a schematic diagram of a feature-label filtering window according to an example embodiment;
Fig. 7 is the fourth flowchart of an image processing method according to an example embodiment;
Fig. 8 is a schematic diagram of a collaborative-editing switch button according to an example embodiment;
Fig. 9 is a schematic diagram of a collaborative-editing page according to an example embodiment;
Fig. 10 is the fifth flowchart of an image processing method according to an example embodiment;
Fig. 11 is a schematic diagram of an image processing application scenario according to an exemplary embodiment;
Fig. 12 is a structural block diagram of an image processing apparatus according to an exemplary embodiment;
Fig. 13 is a structural block diagram of an electronic device according to an exemplary embodiment;
Fig. 14 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", etc. are usually of one type, and their number is not limited; for example, there may be one first object or multiple first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method and electronic device provided by the embodiments of the present application are described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
The image processing method provided by this application can be applied to scenes in which an image containing multiple objects is processed; the electronic device used for image processing may be, for example, a mobile phone, tablet, or camera, i.e., a device with both image capture and image processing functions.
It should be noted that the image processing method provided by the embodiments of the present application may be executed by an image processing apparatus, or by a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the apparatus provided herein.
Fig. 1 is a flowchart of an image processing method according to an example embodiment.
As shown in Fig. 1, the image processing method may include steps 110 to 130, as follows.
Step 110: taking multiple objects as shooting subjects, acquire multiple images captured at different times, and determine, from the multiple images, a reference image and multiple object images respectively corresponding to each object; the reference image includes the multiple objects, and each object image includes one corresponding object.
In the embodiments of the present application, the reference image may be an image containing all of the objects and serves as the basis for object replacement. An object may be a person, an animal, or a thing; an object image may be an image of a person, animal, or thing, one object image may include only one corresponding object, and different object images of the same object may show it in different poses. The multiple images may be acquired, for example, by real-time capture through the camera of the electronic device, or directly from the image database of the electronic device.
Optionally, the reference image may be the image corresponding to a predetermined shooting time, or an image the user arbitrarily selects from the multiple images captured at different times, where the predetermined time may be the earliest of the shooting times corresponding to the multiple images. The multiple object images corresponding to each object may be obtained by shooting each object independently, or by cutting images shot of the multiple objects as a whole, which is not limited here.
In an optional implementation, step 110 may specifically include:
acquiring a captured reference image containing the multiple objects;
taking each of the multiple objects as a shooting subject, acquiring multiple object images respectively corresponding to each object, captured at different times.
In this embodiment, the reference image may be an image containing the multiple objects that is captured first, after which each of the multiple objects is captured separately: the electronic device recognizes each of the multiple objects while shooting, and then captures each object independently at different times to obtain the multiple object images corresponding to each object.
In a specific example, when a user shoots a group photo with a mobile phone, the phone camera can first take a group photo, then recognize the different portraits in the frame and snap each of them independently and continuously, obtaining multiple portraits for each person; these portraits are later processed individually on the basis of the group photo taken first.
In this way, the electronic device first recognizes each of the multiple objects and then obtains images captured independently for each object at different times, so the object images of each object at different times can be obtained without further image processing operations, simplifying the processing flow.
In another optional implementation, step 110 may specifically include:
taking the multiple objects as shooting subjects, acquiring multiple images containing the multiple objects captured at different times;
receiving a fifth input from the user on a target image among the multiple images;
in response to the fifth input, determining the target image as the reference image;
segmenting each of the multiple images by object to obtain multiple object images respectively corresponding to each object.
In this embodiment, the multiple objects are shot continuously as a whole, and each of the resulting images captured at different times may contain all of the objects. The fifth input may be a selection input on the target image; accordingly, the target image may be any image the user selects from the multiple images captured at different times, and it serves as the reference image used when object images are replaced.
Illustratively, after shooting, the electronic device can automatically identify the multiple objects in each image and use a preset segmentation algorithm to segment each image by object, obtaining the multiple object images corresponding to each object; the segmented object images and the original images can all be stored in the image library.
In a specific example, when a user shoots a group photo with a mobile phone, the phone camera continuously captures multiple images containing all of the people; the user can select any one of them as the reference image, each image is automatically segmented into separate portraits by person using the preset segmentation algorithm, and the segmented portraits are then processed for the group photo.
In this way, by segmenting each image containing all objects at the different shooting moments, multiple object images corresponding to each object are obtained. Because every captured image contains all objects, the user can choose an image shot at another moment, under different shooting conditions, as the reference image used for replacement, which improves the user experience.
Step 120: receive a user's first input for a first object in the reference image; the first object is any one of the multiple objects.
Here, the first object may be any object the user selects from the multiple objects in the reference image, and the first input may be a switching input for the object image corresponding to the first object. The switching may, for example, be performed by directly tapping the region corresponding to the first object to cycle through object images captured at other times, or by first triggering a preview of the first object's images captured at other times and then tapping an object image to select it.
In a specific example, as shown in Fig. 2, after shooting is complete the user can select any individual portrait 22 in the image processing interface 20, based on the reference image 21, for separate processing. After the individual portrait 22 is tapped, it is highlighted; the person's portrait 22 can then be switched, replacing the portrait 22 in the reference image 21 with a portrait of the same person captured at another time.
Step 130: in response to the first input, replace the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
The other object image corresponding to the first object may be a satisfactory object image selected by the user, or any object image corresponding to the first object; once the replacement object image is confirmed, the composite image can be generated.
In this way, based on multiple images captured at different times, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state, so every object in the reference image can present its best captured appearance and the rendering of the image is improved.
Based on this, in a possible embodiment, as shown in Fig. 3, step 130 may specifically include steps 1301 to 1303, as follows:
Step 1301: display at least one first object image; the first object image is an object image corresponding to the first object.
The first object images may be all of the object images corresponding to the first object, or only the one or more object images that need to be displayed. Illustratively, object images corresponding to the first object may be shown to the user so that the user can select and switch based on them.
Specifically, the manner of displaying the at least one first object image includes, but is not limited to, popping up a preset preview interface and displaying the images tiled within it, or displaying a sliding preview window corresponding to the first object in which a set number of first object images are shown in turn by sliding; both the preset preview interface and the sliding preview window can be displayed in the region of the reference image corresponding to the first object.
In an optional implementation, when there are multiple first object images, step 1301 may specifically include:
adding the multiple first object images to a sliding preview window, where the sliding preview window is used to display a set number of first object images;
displaying the sliding preview window in the region of the reference image corresponding to the first object.
Here, the sliding preview window may be displayed in the region of the reference image corresponding to the first object and is used to preview the set number of first object images currently shown, where the set number may be the number of images the window can hold.
Illustratively, object images of the first object from different times can be shown in the sliding preview window, and the user can switch among them by sliding up and down, or of course left and right, which is not limited here.
In a specific example, as shown in Fig. 4, the user can select the portrait 410 corresponding to a target person, based on the reference image 41 in the image processing interface 40, for separate processing; the portrait 410 is then highlighted while the other portraits are shown blurred. The user can slide up and down in the sliding preview window 42 to preview portraits of the target person from other times, such as the portrait 411.
In this way, the sliding preview window conveniently displays the first object images, making it easy for the user to preview object images from different times.
In an optional implementation, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, step 1301 may further include:
receiving a third input from the user based on the sliding preview window;
in response to the third input, updating the first object image displayed in the sliding preview window.
The third input may be a sliding input on the sliding preview window. Specifically, the first object images may be arranged in shooting-time order, and sliding up or down displays the previous or next object image relative to the current one; sliding left and right is equally possible and is not limited here.
In this way, the user can preview all of the first object images through the sliding preview window, making it easier to choose among them and to pick satisfactory object images of the first object from different times.
Step 1302: receive a second input from the user on a target object image among the first object images.
The second input may be a selection input on the target object image, and the target object image may be the image the user finds most satisfactory among the multiple object images corresponding to the first object.
Step 1303: in response to the second input, replace the object image corresponding to the first object in the reference image with the target object image, to generate a composite image.
In this way, by using the sliding preview window to preview the multiple object images of the first object captured at different times, the user can conveniently preview object images taken at other moments and select among them.
In addition, before the at least one first object image is displayed, the first object images to be displayed may first be filtered in a targeted manner. On this basis, in a possible embodiment, as shown in Fig. 5, before the above step 1301, the above step 130 may further include steps 1304 to 1307, as follows:
Step 1304: acquiring, based on the plurality of object images corresponding to the first object, a plurality of feature labels corresponding to the first object.
Exemplarily, a feature label may be an expression label and/or an action label of the first object; expression labels may include, for example, smiling, laughing, and pouting, and action labels may include, for example, jumping, waving, and clapping.
In an optional implementation, the acquiring, based on the plurality of object images corresponding to the first object, a plurality of feature labels corresponding to the first object in the above step 1304 may specifically include:
extracting, according to a preset feature type, feature information corresponding to the preset feature type from the plurality of object images corresponding to the first object;
generating, according to the feature information, the plurality of feature labels corresponding to the first object.
Exemplarily, the preset feature type may be an expression type or an action type of the object, and the feature information may be image feature data obtained according to the expression type or the action type. Specifically, artificial intelligence (AI) recognition may be used to obtain the feature information corresponding to each of the plurality of object images of the first object. Based on the feature information of each object image, and after a feature information aggregation process, the plurality of feature labels corresponding to the first object can be generated. One object image may correspond to one or more feature labels; for example, if the two pieces of feature information, smiling and waving, are extracted from a target portrait, that portrait may be associated with both the smiling label and the waving label.
In a specific example, if the multiple portraits of a target person contain the expressions smiling and laughing and the actions waving and jumping, then feature data for, say, smiling and jumping can be extracted from those portraits, and the smiling and jumping labels can be generated from this feature data.
In this way, by extracting feature information corresponding to the preset feature type from the plurality of object images of the first object and generating the feature labels of the first object from that feature information, the user can filter the currently displayed images in a targeted manner.
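The label-generation step could be sketched as follows, where `classify_crop` is a hypothetical AI classifier returning the expression or action tags found in one crop; the result is both a per-image tag list and the aggregated label set for the object.

```python
def build_feature_labels(crops, classify_crop):
    """Tag each crop and aggregate the distinct labels for one object."""
    tags_per_crop = [set(classify_crop(c)) for c in crops]   # e.g. {"smile", "wave"}
    all_labels = set().union(*tags_per_crop) if tags_per_crop else set()
    return tags_per_crop, sorted(all_labels)
```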
Step 1305: displaying the plurality of feature labels corresponding to the first object.
Here, the feature labels of the first object may be displayed tiled in the region of the reference image corresponding to the first object; features that are absent from the object images of the first object either are not displayed in this region or are shown grayed out, i.e., they cannot be tapped and selected.
In a specific example, as shown in Fig. 2, if the user taps the filter button 23 in the image processing interface 20, the feature label display interface shown in Fig. 6 is opened, in which the feature labels of the target person are displayed tiled in the region 61 of the reference image corresponding to the target person; the region 61 may contain an expression region 610 and an action region 620, and the user can select feature labels within this region 61.
Step 1306: receiving a fourth input from the user on a target feature label among the plurality of feature labels.
Here, the target feature label may be any label the user selects from the plurality of feature labels in order to filter the object images to be displayed. One or more target feature labels may be selected; that is, the user may select a single feature label to filter for images satisfying that one label, or select multiple feature labels simultaneously to filter for images satisfying all of them at once. The fourth input may be a selection input on the target feature label.
Step 1307: in response to the fourth input, determining, from the plurality of object images corresponding to the first object, at least one first object image associated with the target feature label.
Here, the at least one first object image associated with the target feature label can be identified automatically from the target feature label, and the resulting first object images are displayed, thereby narrowing the range of candidate object images for the first object.
In a specific example, as shown in Fig. 6, the user can filter the portraits by tapping labels in the expression region 610 and the action region 620. Tapping the smile label 611 automatically identifies the portraits of the target person from different times that are associated with smiling, from which a satisfactory portrait associated with the smile label 611 can be chosen. Of course, the user may also select the smile label 611 and the jump label 621 at the same time, so as to pick a satisfactory portrait associated with both. In addition, after filtering, the user can slide to the position of the first portrait satisfying the target feature filter conditions and tap the confirm button 62 to select it and return to the reference image processing interface, or tap the return button 63 to go back directly without changing the selection.
Thus, by generating the feature labels of the first object based on the preset feature types, once a target feature label is selected from among them, the at least one first object image associated with the target feature label can be filtered out. This narrows the user's selection range for the first object images and helps the user quickly choose the desired object image.
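Filtering by one or several selected labels then reduces to a set-inclusion test over the per-crop tags, as in this small sketch building on `build_feature_labels` above:

```python
def filter_by_labels(crops, tags_per_crop, selected_labels):
    """Keep only the crops whose tags contain every selected label."""
    wanted = set(selected_labels)
    return [crop for crop, tags in zip(crops, tags_per_crop)
            if wanted <= tags]               # subset test: all labels must match
```

For instance, `filter_by_labels(crops, tags, {"smile", "jump"})` keeps only the crops tagged with both labels, mirroring the multi-label selection described above.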
Beyond the approach in the above embodiments, in which the photographer completes the image processing alone, the image processing can also be completed collaboratively by multiple devices. On this basis, in a possible embodiment, as shown in Fig. 7, after the above step 110, the image processing method may further include steps 140 to 160, as follows:
Step 140: establishing a connection with a target terminal.
Here, the target terminal may be another terminal device participating in processing the image, and multiple terminals may participate in the image processing at the same time. Specifically, ways of establishing the connection with the target terminal include, but are not limited to, the user of the target terminal connecting to the device holding the image by, for example, shaking the device, Bluetooth, or entering a password, thereby entering the collaborative image processing mode.
In a specific example, as shown in Fig. 8, after the photo has been taken, the user can choose to process it collaboratively. After the collaboration switch 81 in the lower left corner is tapped on, users within a certain distance enter the camera's group-photo compositing mode, and a target terminal user can join the group-photo processing by establishing a connection through phone shaking, Bluetooth, or entering a password.
Step 150: receiving a first instruction sent by the target terminal.
Here, the first instruction may be a switching instruction for the object image corresponding to a second object in the reference image, where the second object may be any of the plurality of objects other than the first object.
Exemplarily, after completing the selection of the object image for the second object, the target terminal user may send the first instruction to the device that captured the image, i.e., the local device, causing the local device to switch the object image corresponding to the second object in the reference image to the object image selected by the target terminal user. The process by which the target terminal user selects among the multiple object images of the second object is similar to the selection process for the first object described above and is not repeated here.
Step 160: in response to the first instruction, replacing the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
In the embodiments of the present application, after receiving the first instruction sent by the target terminal, the local device can replace the object image corresponding to the second object in the reference image.
In a specific example, after the photo has been taken, the user can choose to process it collaboratively, establishing a connection with a target terminal user within a certain distance and entering the collaborative group-photo processing mode. Once the target terminal user has finished selecting, the first instruction can be sent, and the local device then replaces the object image corresponding to the second object in the reference image with the object image selected by the target terminal user.
In this way, collaborative image processing over an established connection shortens the time otherwise needed to process the image person by person, reduces the image processing workload, and improves image processing efficiency.
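A rough sketch of how the local device might represent and apply such an instruction follows, reusing the `replace_object` helper sketched earlier; the message shape (`object_id`, `crop_index`) is an illustrative assumption rather than a defined protocol.

```python
from dataclasses import dataclass

@dataclass
class SwitchInstruction:
    object_id: int        # which object in the reference image
    crop_index: int       # which of that object's crops to switch to

def apply_instruction(reference, boxes, object_images, instr: SwitchInstruction):
    """Handle a first instruction from a peer terminal on the local device."""
    crop = object_images[instr.object_id][instr.crop_index]
    return replace_object(reference, boxes[instr.object_id], crop)
```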
In an optional implementation, before the above step 150, the method may further include:
receiving a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for the second object in the reference image;
in response to the second instruction, setting the second object in the reference image to a user-input-disabled state.
In the embodiments of the present application, after receiving the second instruction sent by the target terminal, the local device can set the second object in the reference image to the user-input-disabled state, where the user-input-disabled state may be rendered as a grayed-out state. That is, at any one time, a given object in the image can be operated on by only one terminal; if an object in the image is being edited, the object is grayed out and cannot be tapped for processing.
In a specific example, as shown in Fig. 9, in the collaborative image processing mode, the portrait 91 in the lower right corner of the image is currently being operated on by another terminal; since this portrait 91 is in the editing state, it is grayed out and cannot be tapped for processing.
In this way, collaborative image processing over an established connection, together with the user-input-disabled state ensuring that a given object in the image can be operated on by only one terminal at a time, shortens the time otherwise needed to process the image object by object and enables personalized editing of the image; this makes it more efficient to present every object in the reference image as captured in its best state.
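The per-object exclusivity could be modeled as a simple lock table, as in the sketch below; this is only one of many ways to realize the grayed-out state described above.

```python
class ObjectLocks:
    """Grants each object in the reference image to at most one terminal."""

    def __init__(self):
        self.owner = {}                      # object_id -> terminal_id

    def acquire(self, object_id, terminal_id):
        """Second instruction: lock the object so other terminals see it grayed out."""
        if self.owner.get(object_id, terminal_id) != terminal_id:
            return False                     # already being edited elsewhere
        self.owner[object_id] = terminal_id
        return True

    def release(self, object_id, terminal_id):
        if self.owner.get(object_id) == terminal_id:
            del self.owner[object_id]
```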
To better describe the overall solution, a specific example based on the above implementations is given below. As shown in Fig. 10, the image processing method may include steps 1001 to 1010, which are explained in detail as follows.
Step 1001: tap the camera.
In a specific example, after the user taps the camera, the page shown in Fig. 11 is displayed, which contains a group-photo compositing icon 92.
Step 1002: tap to start the combined-image compositing mode.
In a specific example, the user taps the group-photo compositing icon to start the group-photo compositing mode.
Step 1003: start shooting, which lasts x seconds.
In a specific example, shooting continues for x seconds.
Step 1004: end shooting.
In a specific example, shooting is ended by pressing the end-shooting button.
Step 1005: choose whether to composite the image collaboratively; if collaborative compositing is chosen, perform step 1006; otherwise, perform step 1007.
In a specific example, there are two possibilities: if the user chooses collaborative compositing, step 1006 is performed; if not, step 1007 is performed.
Step 1006: other terminals join the combined-image collaboration.
In a specific example, a user can join the group-photo collaboration simply by shaking their phone.
Step 1007: enter the image compositing page.
In a specific example, the user enters the image compositing page for the subsequent filtering operations after finishing shooting or after joining the group-photo collaboration.
Step 1008: filter images by expression and action.
In a specific example, the user can filter the multiple portraits of a target person by selecting the corresponding expression and action labels, obtaining one or more portraits that satisfy the filter conditions.
Step 1009: select images independently for each person.
In a specific example, portraits can be selected on a per-person basis.
Step 1010: generate the combined image.
In a specific example, after the user has, person by person, independently selected and replaced each portrait with the one in the best state, the generate button can be tapped to generate the image.
Thus, based on the plurality of images captured at different times, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state. In this way, the embodiments of the present application enable every object in the reference image to be presented as captured in its best state, thereby improving the rendering effect of the image.
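Tying the earlier sketches together, an end-to-end flow for the non-collaborative path (steps 1001 to 1010 without step 1006) might look like the following; every name comes from the hypothetical helpers sketched above, and `choose` stands in for the user's taps.

```python
def compose_group_photo(grab_frame, detect_objects, classify_crop, choose):
    """Capture, tag, filter, and composite a group photo person by person."""
    reference, tracks = capture_object_images(grab_frame, detect_objects)
    composite = reference
    for obj_id, track in enumerate(tracks):
        tags, labels = build_feature_labels(track.crops, classify_crop)
        wanted = choose(obj_id, labels)          # e.g. {"smile"} for this person
        candidates = filter_by_labels(track.crops, tags, wanted)
        if candidates:                           # replace only if a match exists
            composite = replace_object(composite, track.box, candidates[0])
    return composite
```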
It should be noted that the application scenarios described in the above embodiments of the present disclosure are intended to explain the technical solutions of the embodiments of the present disclosure more clearly and do not constitute a limitation on those technical solutions. A person of ordinary skill in the art will appreciate that, as new application scenarios emerge, the technical solutions provided in the embodiments of the present disclosure are equally applicable to similar technical problems.
Based on the same inventive concept, the present application further provides an image processing apparatus. The image processing apparatus provided in the embodiments of the present application is described in detail below with reference to Fig. 12.
Fig. 12 is a structural block diagram of an image processing apparatus according to an exemplary embodiment.
As shown in Fig. 12, the image processing apparatus 1200 may include:
a determining module 1201, configured to take a plurality of objects as shooting objects, acquire a plurality of images captured at different times, and determine, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object; wherein the reference image includes the plurality of objects, and each object image includes its one corresponding object;
a first receiving module 1202, configured to receive a first input from the user on a first object in the reference image; wherein the first object is any one of the plurality of objects;
a generating module 1203, configured to, in response to the first input, replace the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
The above image processing apparatus 1200 is described in detail below, as follows:
In one of the embodiments, the generating module 1203 may specifically include:
a first display submodule, configured to display at least one first object image; wherein a first object image is an object image corresponding to the first object;
a first receiving submodule, configured to receive a second input from the user on a target object image among the first object images;
a first generating submodule, configured to, in response to the second input, replace the object image corresponding to the first object in the reference image with the target object image, to generate the composite image.
In one of the embodiments, the first display submodule includes:
an adding unit, configured to, in a case where there are multiple first object images, add the multiple first object images to a sliding preview window; wherein the sliding preview window is used to display a set number of first object images;
a display unit, configured to display the sliding preview window in the region of the reference image corresponding to the first object.
In one of the embodiments, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, the first display submodule further includes:
a receiving unit, configured to receive a third input from the user based on the sliding preview window;
an updating unit, configured to, in response to the third input, update the first object images displayed in the sliding preview window.
In one of the embodiments, the generating module 1203 further includes:
a first acquiring submodule, configured to acquire, before the at least one first object image is displayed, a plurality of feature labels corresponding to the first object based on the plurality of object images corresponding to the first object;
a second display submodule, configured to display the plurality of feature labels corresponding to the first object;
a second receiving submodule, configured to receive a fourth input from the user on a target feature label among the plurality of feature labels;
a second generating submodule, configured to, in response to the fourth input, determine, from the plurality of object images corresponding to the first object, at least one first object image associated with the target feature label.
In one of the embodiments, the first acquiring submodule includes:
an extracting unit, configured to extract, according to a preset feature type, feature information corresponding to the preset feature type from the plurality of object images corresponding to the first object;
a generating unit, configured to generate, according to the feature information, the plurality of feature labels corresponding to the first object.
In one of the embodiments, the image processing apparatus further includes:
a connecting module, configured to establish a connection with a target terminal after the plurality of objects are taken as shooting objects, the plurality of images captured at different times are acquired, and the reference image and the plurality of object images respectively corresponding to each object are determined from the plurality of images;
a second receiving module, configured to receive a first instruction sent by the target terminal; wherein the first instruction is a switching instruction for the object image corresponding to a second object in the reference image;
a replacing module, configured to, in response to the first instruction, replace the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
In one of the embodiments, the image processing apparatus further includes:
a third receiving module, configured to receive, before the first instruction sent by the target terminal is received, a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for the second object in the reference image;
a setting module, configured to, in response to the second instruction, set the second object in the reference image to a user-input-disabled state.
In one of the embodiments, the determining module 1201 includes:
a second acquiring submodule, configured to acquire a captured reference image containing the plurality of objects;
a third acquiring submodule, configured to take each of the plurality of objects as a shooting object respectively, and acquire a plurality of object images captured at different times and corresponding to each object respectively.
In one of the embodiments, the determining module 1201 includes:
a fourth acquiring submodule, configured to take the plurality of objects as shooting objects and acquire a plurality of images captured at different times, each containing the plurality of objects;
a third receiving submodule, configured to receive a fifth input from the user on a target image among the plurality of images;
a determining submodule, configured to, in response to the fifth input, determine the target image as the reference image;
a segmenting submodule, configured to segment each of the plurality of images by object, to obtain a plurality of object images respectively corresponding to each object.
Thus, based on the plurality of images captured at different times, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state. In this way, the embodiments of the present application enable every object in the reference image to be presented as captured in its best state, thereby improving the rendering effect of the image.
The image processing apparatus in the embodiments of the present application may be an apparatus, or a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; this is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiments of the present application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiments of the present application can implement each process implemented by the method embodiments of Fig. 1 to Fig. 11; to avoid repetition, details are not repeated here.
Optionally, as shown in Fig. 13, an embodiment of the present application further provides an electronic device 1300, including a processor 1301, a memory 1302, and a program or instructions stored in the memory 1302 and runnable on the processor 1301. When executed by the processor 1301, the program or instructions implement each process of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiments of the present application includes the mobile electronic devices and non-mobile electronic devices described above.
Fig. 14 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1400 includes, but is not limited to, components such as a radio frequency unit 1401, a network module 1402, an audio output unit 1403, an input unit 1404, a sensor 1405, a display unit 1406, a user input unit 1407, an interface unit 1408, a memory 1409, and a processor 1410.
A person skilled in the art will understand that the electronic device 1400 may further include a power supply (such as a battery) for supplying power to the components, and the power supply may be logically connected to the processor 1410 through a power management system, thereby implementing functions such as charge management, discharge management, and power consumption management through the power management system. The electronic device structure shown in Fig. 14 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described further here.
The input unit 1404 is configured to take a plurality of objects as shooting objects, acquire a plurality of images captured at different times, and determine, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object.
The user input unit 1407 is configured to receive a first input from the user on a first object in the reference image.
The processor 1410 is configured to, in response to the first input, replace the object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
Thus, based on the plurality of images captured at different times, the object image corresponding to the first object in the reference image is replaced with the object image captured when the first object was in its best state. In this way, the embodiments of the present application enable every object in the reference image to be presented as captured in its best state, thereby improving the rendering effect of the image.
Optionally, the display unit 1406 is further configured to display at least one first object image.
Optionally, the user input unit 1407 is further configured to receive a second input from the user on a target object image among the first object images.
Optionally, the processor 1410 is further configured to, in response to the second input, replace the object image corresponding to the first object in the reference image with the target object image, to generate the composite image.
Optionally, the processor 1410 is further configured to, in a case where there are multiple first object images, add the multiple first object images to a sliding preview window.
Optionally, the display unit 1406 is further configured to, in a case where there are multiple first object images, display the sliding preview window in the region of the reference image corresponding to the first object.
Optionally, the user input unit 1407 is further configured to receive, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, a third input from the user based on the sliding preview window.
Optionally, the processor 1410 is further configured to, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, update, in response to the third input, the first object images displayed in the sliding preview window.
Optionally, the input unit 1404 is further configured to acquire, before the at least one first object image is displayed, a plurality of feature labels corresponding to the first object based on the plurality of object images corresponding to the first object.
Optionally, the display unit 1406 is further configured to display, before the at least one first object image is displayed, the plurality of feature labels corresponding to the first object.
Optionally, the user input unit 1407 is further configured to receive, before the at least one first object image is displayed, a fourth input from the user on a target feature label among the plurality of feature labels.
Optionally, the processor 1410 is further configured to determine, before the at least one first object image is displayed and in response to the fourth input, from the plurality of object images corresponding to the first object, at least one first object image associated with the target feature label.
Optionally, the processor 1410 is further configured to extract, according to a preset feature type, feature information corresponding to the preset feature type from the plurality of object images corresponding to the first object.
Optionally, the processor 1410 is further configured to generate, according to the feature information, the plurality of feature labels corresponding to the first object.
Optionally, the network module 1402 is further configured to establish a connection with a target terminal after the plurality of objects are taken as shooting objects, the plurality of images captured at different times are acquired, and the reference image and the plurality of object images respectively corresponding to each object are determined from the plurality of images.
Optionally, the user input unit 1407 is further configured to receive a first instruction sent by the target terminal after the plurality of objects are taken as shooting objects, the plurality of images captured at different times are acquired, and the reference image and the plurality of object images respectively corresponding to each object are determined from the plurality of images.
Optionally, the processor 1410 is further configured to, after the plurality of objects are taken as shooting objects, the plurality of images captured at different times are acquired, and the reference image and the plurality of object images respectively corresponding to each object are determined from the plurality of images, replace, in response to the first instruction, the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
Optionally, the user input unit 1407 is further configured to receive, before the first instruction sent by the target terminal is received, a second instruction sent by the target terminal.
Optionally, the processor 1410 is further configured to set, before the first instruction sent by the target terminal is received and in response to the second instruction, the second object in the reference image to a user-input-disabled state.
Optionally, the input unit 1404 is further configured to acquire a captured reference image containing the plurality of objects.
Optionally, the input unit 1404 is further configured to take each of the plurality of objects as a shooting object respectively, and acquire a plurality of object images captured at different times and corresponding to each object respectively.
Optionally, the input unit 1404 is further configured to take the plurality of objects as shooting objects and acquire a plurality of images captured at different times, each containing the plurality of objects.
Optionally, the user input unit 1407 is further configured to receive a fifth input from the user on a target image among the plurality of images.
Optionally, the processor 1410 is further configured to, in response to the fifth input, determine the target image as the reference image.
Optionally, the processor 1410 is further configured to segment each of the plurality of images by object, to obtain a plurality of object images respectively corresponding to each object.
Thus, based on the object images of the first object captured at different times, targeted filtering by preset feature type is performed, so that a satisfactory first object image associated with the target feature label can be filtered out, enabling personalized editing of the group photo; in this way, every object in the reference image can be presented, more efficiently, as captured in its best state.
It should be understood that, in the embodiments of the present application, the input unit 1404 may include a graphics processing unit (GPU) 14041 and a microphone 14042. The graphics processing unit 14041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 1406 may include a display panel 14061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1407 includes a touch panel 14071 and other input devices 14072. The touch panel 14071 is also called a touchscreen and may include two parts: a touch detection apparatus and a touch controller. The other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described further here. The memory 1409 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1410.
An embodiment of the present application further provides a readable storage medium on which a program or instructions are stored. When executed by a processor, the program or instructions implement each process of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above image processing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, as used herein, the terms "comprise" and "include", or any other variant thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above implementations, a person skilled in the art will clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above, which are merely illustrative and not restrictive. Inspired by the present application, a person of ordinary skill in the art can devise many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (25)

  1. An image processing method, comprising:
    taking a plurality of objects as shooting objects, acquiring a plurality of images captured at different times, and determining, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object; wherein the reference image includes the plurality of objects, and each object image includes its one corresponding object;
    receiving a first input from a user on a first object in the reference image; wherein the first object is any one of the plurality of objects;
    in response to the first input, replacing an object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
  2. The method according to claim 1, wherein the replacing an object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image, comprises:
    displaying at least one first object image; wherein the first object image is an object image corresponding to the first object;
    receiving a second input from the user on a target object image among the at least one first object image;
    in response to the second input, replacing the object image corresponding to the first object in the reference image with the target object image, to generate the composite image.
  3. The method according to claim 2, wherein, in a case where there are multiple first object images, the displaying at least one first object image comprises:
    adding the multiple first object images to a sliding preview window; wherein the sliding preview window is used to display a set number of the first object images;
    displaying the sliding preview window in a region of the reference image corresponding to the first object.
  4. The method according to claim 3, wherein, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, the method further comprises:
    receiving a third input from the user based on the sliding preview window;
    in response to the third input, updating the first object images displayed in the sliding preview window.
  5. The method according to claim 2, wherein, before the displaying at least one first object image, the method further comprises:
    acquiring, based on the plurality of object images corresponding to the first object, a plurality of feature labels corresponding to the first object;
    displaying the plurality of feature labels corresponding to the first object;
    receiving a fourth input from the user on a target feature label among the plurality of feature labels;
    in response to the fourth input, determining, from the plurality of object images corresponding to the first object, at least one first object image associated with the target feature label.
  6. The method according to claim 5, wherein the acquiring, based on the plurality of object images corresponding to the first object, a plurality of feature labels corresponding to the first object comprises:
    extracting, according to a preset feature type, feature information corresponding to the preset feature type from the plurality of object images corresponding to the first object;
    generating, according to the feature information, the plurality of feature labels corresponding to the first object.
  7. The method according to any one of claims 2 to 6, wherein, after the taking a plurality of objects as shooting objects, acquiring a plurality of images captured at different times, and determining, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object, the method further comprises:
    establishing a connection with a target terminal;
    receiving a first instruction sent by the target terminal; wherein the first instruction is a switching instruction for an object image corresponding to a second object in the reference image;
    in response to the first instruction, replacing the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
  8. The method according to claim 7, wherein, before the receiving a first instruction sent by the target terminal, the method further comprises:
    receiving a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for the second object in the reference image;
    in response to the second instruction, setting the second object in the reference image to a user-input-disabled state.
  9. The method according to claim 1, wherein the taking a plurality of objects as shooting objects, acquiring a plurality of images captured at different times, and determining, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object comprises:
    acquiring a captured reference image containing the plurality of objects;
    taking each of the plurality of objects as a shooting object respectively, and acquiring a plurality of object images captured at different times and corresponding to each object respectively.
  10. The method according to claim 1, wherein the taking a plurality of objects as shooting objects, acquiring a plurality of images captured at different times, and determining, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object comprises:
    taking the plurality of objects as shooting objects, and acquiring a plurality of images captured at different times, each containing the plurality of objects;
    receiving a fifth input from the user on a target image among the plurality of images;
    in response to the fifth input, determining the target image as the reference image;
    segmenting each of the plurality of images by the objects, to obtain a plurality of object images respectively corresponding to each object.
  11. An image processing apparatus, comprising:
    a determining module, configured to take a plurality of objects as shooting objects, acquire a plurality of images captured at different times, and determine, from the plurality of images, a reference image and a plurality of object images respectively corresponding to each object; wherein the reference image includes the plurality of objects, and each object image includes its one corresponding object;
    a first receiving module, configured to receive a first input from a user on a first object in the reference image; wherein the first object is any one of the plurality of objects;
    a generating module, configured to, in response to the first input, replace an object image corresponding to the first object in the reference image with another object image corresponding to the first object, to generate a composite image.
  12. The apparatus according to claim 11, wherein the generating module comprises:
    a first display submodule, configured to display at least one first object image; wherein the first object image is an object image corresponding to the first object;
    a first receiving submodule, configured to receive a second input from the user on a target object image among the at least one first object image;
    a first generating submodule, configured to, in response to the second input, replace the object image corresponding to the first object in the reference image with the target object image, to generate the composite image.
  13. The apparatus according to claim 12, wherein the first display submodule comprises:
    an adding unit, configured to, in a case where there are multiple first object images, add the multiple first object images to a sliding preview window; wherein the sliding preview window is used to display a set number of the first object images;
    a display unit, configured to display the sliding preview window in a region of the reference image corresponding to the first object.
  14. The apparatus according to claim 13, wherein the first display submodule further comprises:
    a receiving unit, configured to receive, after the sliding preview window is displayed in the region of the reference image corresponding to the first object, a third input from the user based on the sliding preview window;
    an updating unit, configured to, in response to the third input, update the first object images displayed in the sliding preview window.
  15. The apparatus according to claim 12, wherein the generating module further comprises:
    a first acquiring submodule, configured to acquire, before the at least one first object image is displayed, a plurality of feature labels corresponding to the first object based on the plurality of object images corresponding to the first object;
    a second display submodule, configured to display the plurality of feature labels corresponding to the first object;
    a second receiving submodule, configured to receive a fourth input from the user on a target feature label among the plurality of feature labels;
    a second generating submodule, configured to, in response to the fourth input, determine, from the plurality of object images corresponding to the first object, at least one first object image associated with the target feature label.
  16. The apparatus according to claim 15, wherein the first acquiring submodule comprises:
    an extracting unit, configured to extract, according to a preset feature type, feature information corresponding to the preset feature type from the plurality of object images corresponding to the first object;
    a generating unit, configured to generate, according to the feature information, the plurality of feature labels corresponding to the first object.
  17. The apparatus according to any one of claims 12 to 16, further comprising:
    a connecting module, configured to establish a connection with a target terminal after the plurality of objects are taken as shooting objects, the plurality of images captured at different times are acquired, and the reference image and the plurality of object images respectively corresponding to each object are determined from the plurality of images;
    a second receiving module, configured to receive a first instruction sent by the target terminal; wherein the first instruction is a switching instruction for an object image corresponding to a second object in the reference image;
    a replacing module, configured to, in response to the first instruction, replace the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
  18. The apparatus according to claim 17, further comprising:
    a third receiving module, configured to receive, before the first instruction sent by the target terminal is received, a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for the second object in the reference image;
    a setting module, configured to, in response to the second instruction, set the second object in the reference image to a user-input-disabled state.
  19. The apparatus according to claim 11, wherein the determining module comprises:
    a second acquiring submodule, configured to acquire a captured reference image containing the plurality of objects;
    a third acquiring submodule, configured to take each of the plurality of objects as a shooting object respectively, and acquire a plurality of object images captured at different times and corresponding to each object respectively.
  20. The apparatus according to claim 11, wherein the determining module comprises:
    a fourth acquiring submodule, configured to take the plurality of objects as shooting objects and acquire a plurality of images captured at different times, each containing the plurality of objects;
    a third receiving submodule, configured to receive a fifth input from the user on a target image among the plurality of images;
    a determining submodule, configured to, in response to the fifth input, determine the target image as the reference image;
    a segmenting submodule, configured to segment each of the plurality of images by the objects, to obtain a plurality of object images respectively corresponding to each object.
  21. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and runnable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 10.
  22. An electronic device, configured to perform the steps of the image processing method according to any one of claims 1 to 10.
  23. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 10.
  24. A computer program product, executed by a processor to implement the steps of the image processing method according to any one of claims 1 to 10.
  25. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the image processing method according to any one of claims 1 to 10.
PCT/CN2022/094353 2021-05-28 2022-05-23 Image processing method and apparatus, and electronic device WO2022247766A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110595299.X 2021-05-28
CN202110595299.XA CN113347355A (zh) 2021-05-28 2021-05-28 Image processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022247766A1 true WO2022247766A1 (zh) 2022-12-01

Family

ID=77472611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094353 WO2022247766A1 (zh) 2021-05-28 2022-05-23 图像处理方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN113347355A (zh)
WO (1) WO2022247766A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347355A (zh) 2021-05-28 2021-09-03 维沃移动通信(杭州)有限公司 Image processing method and apparatus, and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236890A * 2010-05-03 2011-11-09 微软公司 Generating a combined image from multiple images
CN104967637A * 2014-07-07 2015-10-07 腾讯科技(深圳)有限公司 Operation processing method, apparatus, and terminal
CN106204435A * 2016-06-27 2016-12-07 北京小米移动软件有限公司 Image processing method and apparatus
CN108513069A * 2018-03-30 2018-09-07 广东欧珀移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN111178125A * 2018-11-13 2020-05-19 奥多比公司 Intelligent identification of replacement regions for mixing and replacing of persons in group portraits
CN111611423A * 2019-02-22 2020-09-01 富士胶片株式会社 Image processing apparatus, image processing method, and recording medium
CN113347355A * 2021-05-28 2021-09-03 维沃移动通信(杭州)有限公司 Image processing method and apparatus, and electronic device
JP2021150865A * 2020-03-19 2021-09-27 富士フイルム株式会社 Image processing apparatus, image processing method, and image processing program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004247983A (ja) * 2003-02-14 2004-09-02 Konica Minolta Holdings Inc Photographing apparatus, image processing apparatus, and image processing program
JP4503933B2 (ja) * 2003-03-13 2010-07-14 オリンパス株式会社 Imaging apparatus
CN106027900A (zh) * 2016-06-22 2016-10-12 维沃移动通信有限公司 Photographing method and mobile terminal
CN106454121B (zh) * 2016-11-11 2020-02-07 努比亚技术有限公司 Dual-camera photographing method and apparatus

Also Published As

Publication number Publication date
CN113347355A (zh) 2021-09-03

Similar Documents

Publication Publication Date Title
CN112954210B (zh) 拍照方法、装置、电子设备及介质
CN113093968B (zh) 拍摄界面显示方法、装置、电子设备及介质
CN112954196B (zh) 拍摄方法、装置、电子设备及可读存储介质
CN112135046A (zh) 视频拍摄方法、视频拍摄装置及电子设备
CN111722775A (zh) 图像处理方法、装置、设备及可读存储介质
CN113794829B (zh) 拍摄方法、装置及电子设备
CN113014801B (zh) 录像方法、装置、电子设备及介质
CN111770386A (zh) 视频处理方法、视频处理装置及电子设备
CN113194256B (zh) 拍摄方法、装置、电子设备和存储介质
WO2022247766A1 (zh) 图像处理方法、装置及电子设备
CN113852757B (zh) 视频处理方法、装置、设备和存储介质
CN113794831B (zh) 视频拍摄方法、装置、电子设备及介质
CN113207038B (zh) 视频处理方法、视频处理装置和电子设备
CN113596574A (zh) 视频处理方法、视频处理装置、电子设备和可读存储介质
CN111885298B (zh) 图像处理方法及装置
CN113271378A (zh) 图像处理方法、装置及电子设备
CN115967854A (zh) 拍照方法、装置及电子设备
CN113873081B (zh) 关联图像的发送方法、装置及电子设备
CN112637491A (zh) 拍摄方法和拍摄装置
CN114143455B (zh) 拍摄方法、装置及电子设备
CN115278378B (zh) 信息显示方法、信息显示装置、电子设备和存储介质
CN114286002B (zh) 图像处理电路、方法、装置、电子设备及芯片
CN112367562B (zh) 图像处理方法、装置及电子设备
CN113923367B (zh) 拍摄方法、拍摄装置
CN114520875B (zh) 视频处理方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810491

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22810491

Country of ref document: EP

Kind code of ref document: A1