CN113347355A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113347355A
Authority
CN
China
Prior art keywords
image
images
objects
reference image
input
Prior art date
Legal status
Pending
Application number
CN202110595299.XA
Other languages
Chinese (zh)
Inventor
浦帅
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110595299.XA priority Critical patent/CN113347355A/en
Publication of CN113347355A publication Critical patent/CN113347355A/en
Priority to PCT/CN2022/094353 priority patent/WO2022247766A1/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device, and belongs to the technical field of image processing. The image processing method comprises: taking a plurality of objects as shooting objects, acquiring a plurality of images shot at different moments, and determining, from the plurality of images, a reference image and a plurality of object images corresponding to each object, wherein the reference image comprises the plurality of objects and each object image comprises its corresponding object; receiving a first input of a user for a first object in the reference image, wherein the first object is any one of the plurality of objects; and, in response to the first input, replacing the object image corresponding to the first object in the reference image with another object image corresponding to the first object, and generating a composite image.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
With the popularization of the photographing function in electronic devices, people increasingly use electronic devices to take photographs. In particular, when a group photograph of a plurality of objects is taken, how to obtain an image in which every object is presented in its best state has become an urgent problem in image processing.
In the prior art, a plurality of images containing the plurality of objects are acquired mainly by repeated shooting, and one of them is manually selected as the final image. Since this cannot guarantee that every object in the final image was captured in its best state, the resulting image may present poorly.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, and an electronic device, which can solve the prior-art problem that an image containing multiple objects presents poorly.
In a first aspect, an embodiment of the present application provides an image processing method, including:
the method comprises the steps of taking a plurality of objects as shooting objects, obtaining a plurality of images shot at different moments, and determining a reference image and a plurality of object images corresponding to each object from the plurality of images; wherein the reference image comprises the plurality of objects, and the object image comprises a corresponding object;
receiving a first input of a user for a first object in the reference image; wherein the first object is any one of the plurality of objects;
in response to the first input, replacing an object image corresponding to the first object in the reference image with another object image corresponding to the first object, and generating a composite image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, where the apparatus includes:
a determining module, configured to take a plurality of objects as shooting objects, acquire a plurality of images shot at different moments, and determine, from the plurality of images, a reference image and a plurality of object images corresponding to each object; wherein the reference image comprises the plurality of objects, and each object image comprises its corresponding object;
a first receiving module, configured to receive a first input of a user for a first object in the reference image; wherein the first object is any one of the plurality of objects;
and a generation module, configured to replace, in response to the first input, an object image corresponding to the first object in the reference image with another object image corresponding to the first object, and generate a composite image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, based on the plurality of images shot at different moments, the object image corresponding to the first object in the reference image is replaced with an object image captured when the first object is in its best state, so that every object in the reference image can exhibit its best-state shooting effect, and the image presentation effect can therefore be improved.
Drawings
FIG. 1 is a first flowchart illustrating an image processing method according to an example embodiment;
FIG. 2 is a schematic diagram illustrating a photographic processing page in accordance with an example embodiment;
FIG. 3 is a second flowchart illustrating a method of image processing according to an example embodiment;
FIG. 4 is a diagram illustrating a sliding preview window in accordance with an example embodiment;
FIG. 5 is a third flowchart illustrating a method of image processing according to an example embodiment;
FIG. 6 is a schematic diagram illustrating a feature tag filter window according to an example embodiment;
FIG. 7 is a fourth flowchart illustrating a method of image processing according to an example embodiment;
FIG. 8 is a schematic diagram illustrating a collaborative editing toggle button according to an example embodiment;
FIG. 9 is a schematic diagram illustrating a collaborative editing page, according to an example embodiment;
FIG. 10 is a fifth flowchart illustrating a method of image processing according to an example embodiment;
FIG. 11 is a schematic diagram of an image processing application scenario shown in accordance with an exemplary embodiment;
FIG. 12 is a block diagram showing the structure of an image processing apparatus according to an exemplary embodiment;
FIG. 13 is a block diagram illustrating the structure of an electronic device in accordance with an exemplary embodiment;
FIG. 14 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein; moreover, the terms "first", "second", and the like are generally used in a generic sense and do not limit the number of the objects they qualify, for example, the first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image processing method and the electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image processing method provided by the application can be applied to a scene for processing an image containing a plurality of objects, wherein the electronic device for processing the image can be a device with an image shooting function and an image processing function, such as a mobile phone, a tablet, a camera and the like.
It should be noted that, in the image processing method provided in the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided in the embodiments of the present application.
FIG. 1 is a flow diagram illustrating an image processing method according to an example embodiment.
As shown in fig. 1, the image processing method may include the steps of:
Step 110, taking a plurality of objects as shooting objects, acquiring a plurality of images shot at different times, and determining a reference image and a plurality of object images corresponding to each object from the plurality of images; the reference image comprises the plurality of objects, and each object image comprises its corresponding object.
The reference image in the embodiments of the present application may be an image containing all of the objects, and it serves as the basis when object images are replaced. An object may be a person, an animal, or a thing. Accordingly, an object image may be an image of a person, an animal, or a thing; one object image contains only its corresponding object, and the postures of the same object may differ across its different object images. The plurality of images may be acquired, for example, by real-time shooting through a camera of the electronic device, or directly from an image database of the electronic device.
Alternatively, the reference image may be an image corresponding to a predetermined shooting time, or may be one image arbitrarily selected by a user from a plurality of images shot at different times, where the predetermined time may be the earliest time among a plurality of shooting times corresponding to the plurality of images. The plurality of object images respectively corresponding to each object may be acquired by independently photographing each object, or may be acquired by cutting an image after photographing a plurality of objects as a whole, which is not limited herein.
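As a purely illustrative sketch (not part of the patent text), the rule of defaulting to the earliest-captured image while honouring an explicit user choice can be written as a small selection function; the function and parameter names below are assumptions:

```python
def choose_reference(group_images, capture_times, user_pick=None):
    """Return the index of the image to use as the reference image.

    Defaults to the earliest-captured image; an explicit user selection
    (the index of the image the user tapped) takes precedence.
    """
    if user_pick is not None:
        return user_pick
    return min(range(len(group_images)), key=lambda i: capture_times[i])
```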
In an optional implementation manner, the step 110 may specifically include:
acquiring a reference image which is obtained by shooting and contains a plurality of objects;
and respectively taking each object in the plurality of objects as a shooting object, and acquiring a plurality of object images which are shot at different moments and respectively correspond to the objects.
In the embodiment of the application, the reference image may be an image including a plurality of objects, which is first photographed, and then each object of the plurality of objects is separately photographed, and the electronic device may recognize each object of the plurality of objects when photographing, and then independently photograph each object at different times to obtain a plurality of object images corresponding to each object.
In a specific example, when a user takes a multi-person group photo with a mobile phone, the camera of the mobile phone may first take one group photo, then separately identify the different persons in the frame and take independent, continuous snapshots of each of them, so as to obtain a plurality of portraits corresponding to the different persons; each portrait can then be processed independently on the basis of the group photo taken first.
In this way, the electronic device recognizes each of the plurality of objects and then obtains the images shot independently for each object at different moments; object images of each object at different moments can thus be obtained without further image processing operations, which simplifies the image processing process.
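To make this flow concrete, the following is a minimal sketch, not taken from the patent, of how a device might recognise each object in the frame and store independently cropped snapshots per object. It assumes OpenCV with a Haar face detector and a simple nearest-centre rule for associating detections across frames; the distance threshold and all names are assumptions.

```python
import time
from collections import defaultdict

import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_object_snapshots(duration_s=5.0, camera_index=0):
    """Continuously detect faces and keep one cropped snapshot per object per frame.

    Objects are associated across frames by nearest face-centre distance,
    a stand-in for the (unspecified) recognition step.
    Returns {object_id: [(timestamp, crop), ...]}.
    """
    cap = cv2.VideoCapture(camera_index)
    snapshots = defaultdict(list)
    centroids = {}          # object_id -> last known face centre (cx, cy)
    next_id = 0
    t_end = time.time() + duration_s
    while time.time() < t_end:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
            cx, cy = x + w / 2, y + h / 2
            obj_id, best = None, 80.0   # assumed pixel threshold
            for oid, (px, py) in centroids.items():
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d < best:
                    obj_id, best = oid, d
            if obj_id is None:          # no nearby known object: treat as new
                obj_id, next_id = next_id, next_id + 1
            centroids[obj_id] = (cx, cy)
            snapshots[obj_id].append(
                (time.time(), frame[y:y + h, x:x + w].copy()))
    cap.release()
    return snapshots
```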
In an optional implementation manner, the step 110 may specifically further include:
taking a plurality of objects as shooting objects, and acquiring a plurality of images which are shot at different moments and contain the plurality of objects;
receiving a fifth input of a user to a target image in the plurality of images;
in response to a fifth input, determining the target image as a reference image;
each of the plurality of images is divided for each object to obtain a plurality of object images corresponding to each object.
In the embodiments of the present application, the plurality of objects are continuously photographed as a whole, and the resulting plurality of images at different times each contain the plurality of objects. The fifth input may be a selection input for a target image; accordingly, the target image may be one image arbitrarily selected by the user from the plurality of images shot at different times, and this target image is used as the reference image for object image replacement.
For example, after the electronic device captures the images, the electronic device may automatically recognize a plurality of objects in each image, and perform image segmentation on each image according to the objects by using a preset segmentation algorithm to obtain a plurality of object images corresponding to each object, and the plurality of object images and the plurality of original images obtained after segmentation may be stored in an image library.
In a specific example, when a user takes a group photo with a mobile phone, the camera of the mobile phone continuously takes a plurality of images containing all the persons. The user can arbitrarily select one of them as the reference image, each image is automatically segmented into portraits of the different persons according to a preset segmentation algorithm, and the composite group photo is then produced from the segmented portraits.
Therefore, each image containing all the objects at different shooting moments is segmented to obtain a plurality of object images corresponding to each object, and each image obtained through shooting contains all the objects, so that a user can select the images shot in different shooting environments corresponding to different moments as a reference image used in object image replacement according to needs, and user experience is improved.
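The segmentation route can be sketched in the same illustrative spirit. The snippet below stands in for the patent's unspecified "preset segmentation algorithm": it splits every stored group image into per-object crops with a face detector and keeps each crop together with its bounding box and capture time, so that a later replacement can paste a different crop back into the same region of the reference image. Identifying objects by their left-to-right order in each image is a simplification assumed here.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

import cv2
import numpy as np

@dataclass
class ObjectImage:
    object_id: int
    box: Tuple[int, int, int, int]   # (x, y, w, h) in the source group image
    crop: np.ndarray
    capture_time: float

def segment_group_images(group_images, capture_times) -> Dict[int, List[ObjectImage]]:
    """Split each group image into per-object crops, keyed by object id."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    library: Dict[int, List[ObjectImage]] = {}
    for img, t in zip(group_images, capture_times):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        boxes = sorted(detector.detectMultiScale(gray, 1.1, 5),
                       key=lambda b: b[0])          # left-to-right order
        for object_id, (x, y, w, h) in enumerate(boxes):
            library.setdefault(object_id, []).append(ObjectImage(
                object_id, (int(x), int(y), int(w), int(h)),
                img[y:y + h, x:x + w].copy(), t))
    return library
```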
Step 120, receiving a first input of a user for a first object in a reference image; the first object is any one of a plurality of objects.
Here, the first object may be an object arbitrarily selected by the user from the plurality of objects in the reference image, and the first input may be a switching input for the object image corresponding to the first object. The switching may be performed, for example, by directly clicking the area corresponding to the first object to cycle through the object images captured at other moments, or by first triggering a preview of the object images of the first object captured at other moments and then selecting one of them by clicking.
In a specific example, as shown in fig. 2, after image capture is completed, the user may select any individual portrait 22 in the reference image 21 in the image processing interface 20 for processing. After the portrait 22 is clicked, it is highlighted, and it can then be switched, that is, the portrait 22 of that person in the reference image 21 is replaced with a portrait of the same person captured at another moment.
Step 130, in response to the first input, replaces the object image corresponding to the first object in the reference image with another object image corresponding to the first object, and generates a composite image.
The other object image corresponding to the first object may be a satisfactory object image corresponding to the first object selected by the user, or may be any object image corresponding to the first object, and the composite image may be generated after the other object image corresponding to the first object to be replaced is confirmed.
In this way, based on the plurality of images shot at different moments, the object image corresponding to the first object in the reference image is replaced with an object image shot when the first object is in its best state, so that every object in the reference image can show its best-state shooting effect, and the image presentation effect can be improved.
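Assuming the per-object crops and bounding boxes from the segmentation sketch above, the replacement of step 130 can be illustrated as a plain region paste; a real implementation would also blend the seam, but the core operation is only this:

```python
import cv2
import numpy as np

def replace_object(reference: np.ndarray, region, replacement: np.ndarray) -> np.ndarray:
    """Return a composite image with `region` of `reference` replaced.

    `region` is the (x, y, w, h) box of the first object in the reference
    image; `replacement` is another crop of the same object, resized to fit.
    """
    x, y, w, h = region
    composite = reference.copy()
    composite[y:y + h, x:x + w] = cv2.resize(replacement, (w, h))
    return composite
```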
Based on this, in a possible embodiment, as shown in fig. 3, the step 130 may specifically include steps 1301-1303, which are specifically as follows:
step 1301, displaying at least one first object image; the first object image is an object image corresponding to the first object.
The first object image may be all object images corresponding to the first object, or one or more object images that need to be displayed. For example, an object image corresponding to the first object may be presented to the user for the user to select and switch based on the presented image.
Specifically, the at least one first object image may be displayed, without limitation, by popping up a preset preview interface and tiling the images within it, or by displaying a sliding preview window corresponding to the first object in which a set number of first object images are shown in turn as the user slides. Both the preset preview interface and the sliding preview window may be displayed in the area corresponding to the first object in the reference image.
In an optional implementation manner, when the number of the first object images is multiple, the step 1301 may specifically include:
adding a plurality of first object images to a sliding preview window; the sliding preview window is used for displaying a set number of first object images;
a slide preview window is displayed in a region corresponding to the first object in the reference image.
Here, the slide preview window may be displayed in an area corresponding to the first object in the reference image for previewing a set number of the first object images currently displayed, wherein the set number may be a number that the slide preview window can accommodate.
For example, the object images corresponding to the first object at different times may be displayed in the sliding preview window, and the user may switch the object images captured at different times by sliding up and down, or may slide left and right, which is not limited herein.
In a specific example, as shown in fig. 4, the user may select the portrait 410 corresponding to the target person in the reference image 41 in the image processing interface 40 and process it separately; the portrait 410 is then highlighted and the other portraits are blurred. By sliding up and down in the sliding preview window 42, the user can preview portraits of the target person at other moments, for example, the portrait 411.
Therefore, the first object image can be conveniently displayed by sliding the preview window, and the user can conveniently preview the object images at different moments.
In an optional implementation manner, after displaying the sliding preview window in the area corresponding to the first object in the reference image, the step 1301 may specifically further include:
receiving a third input of the user based on the sliding preview window;
in response to a third input, the first object image displayed in the sliding preview window is updated.
The third input may be a sliding input based on the sliding preview window. Specifically, the first object images may be arranged in shooting-time order, and the previous or next object image relative to the current one is displayed in turn by sliding up and down; sliding left and right is equally possible, and no limitation is imposed here.
In this way, the user can preview all the first object images by sliding the preview window, so that the first object images can be selected better, and satisfactory object images corresponding to the first object at different moments can be selected.
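The sliding preview window can be modelled as little more than a moving index over the first object's candidate images, ordered by capture time; each swipe shifts the index and the window is re-rendered. A minimal sketch, with the window size and field names assumed:

```python
class SlidingPreview:
    """Cycle through the candidate object images shown in the preview window."""

    def __init__(self, candidates, window_size=3):
        # candidates: ObjectImage instances for the first object
        self.candidates = sorted(candidates, key=lambda c: c.capture_time)
        self.window_size = window_size
        self.start = 0

    def visible(self):
        """The set number of first object images currently displayed."""
        return self.candidates[self.start:self.start + self.window_size]

    def slide(self, delta):
        """delta = +1 for a swipe towards later images, -1 towards earlier ones."""
        last = max(0, len(self.candidates) - self.window_size)
        self.start = max(0, min(last, self.start + delta))
        return self.visible()
```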
Step 1302, receiving a second input of the target object image in the first object image from the user.
The second input may be a selection input for a target object image, and the target object image may be the most satisfactory one selected by the user from among the plurality of object images corresponding to the first object.
And a step 1303 of replacing the object image corresponding to the first object in the reference image with the target object image in response to the second input, and generating a composite image.
Therefore, the user can preview the object images shot at other times conveniently and select the object images conveniently by previewing the object images corresponding to the first object through the sliding preview window based on the object images shot at different times.
In addition, before displaying at least one first object image, the first object image to be displayed may be specifically screened. Based on this, in a possible embodiment, as shown in fig. 5, before the step 1301, the step 130 may further include steps 1304 to 1307, which are specifically shown as follows:
at step 1304, a plurality of feature labels corresponding to the first object are obtained based on the plurality of object images corresponding to the first object.
Illustratively, a feature tag may be, for example, an expression tag and/or an action tag of the first object, where the expression tags may include, for example, smiling and laughing, and the action tags may include, for example, jumping, waving, and applauding.
In an optional implementation manner, the acquiring, based on a plurality of object images corresponding to the first object, a plurality of feature labels corresponding to the first object in step 1304 may specifically include:
extracting feature information corresponding to a preset feature type from a plurality of object images corresponding to a first object according to the preset feature type;
a plurality of feature labels corresponding to the first object are generated from the feature information.
For example, the preset feature type may be an expression type or an action type of the object, and the feature information may be image feature data obtained according to the expression type or the action type. Specifically, the feature information corresponding to each of the plurality of object images of the first object may be obtained by Artificial Intelligence (AI) recognition, and after the feature information of the individual object images is aggregated, a plurality of feature labels corresponding to the first object can be generated. One object image may correspond to one or more feature labels; for example, if the two features of smiling and waving are extracted from a target portrait, that portrait may be associated with both the smile label and the wave label.
In a specific example, if the plurality of images corresponding to the target person contain smiling and laughing expressions as well as waving and jumping motions, the corresponding expression and action feature data may be extracted from those images, and labels such as smile and jump may be generated based on that feature data.
In this way, the feature information corresponding to the preset feature type is extracted from the object images corresponding to the first object based on the preset feature type, and the feature labels corresponding to the first object are generated according to the feature information, so that the user can conveniently and specifically screen the currently displayed image.
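Such a feature-label step can be sketched as a per-image classifier whose outputs are aggregated per object. The classifier below is a deliberate stub, since the text only speaks of AI recognition; any expression or action model with the same interface would do.

```python
from typing import List, Set, Tuple

def classify_features(crop) -> Set[str]:
    """Stand-in for the AI recognition step, e.g. returning {'smile', 'wave'}."""
    raise NotImplementedError("plug in an expression/action classifier here")

def build_feature_labels(object_images) -> Tuple[List[Set[str]], Set[str]]:
    """Label every object image of the first object and aggregate the labels.

    Returns (labels per image, union of all labels); the union is what the
    interface can offer as the selectable feature labels for that object.
    """
    per_image = [classify_features(oi.crop) for oi in object_images]
    all_labels = set().union(*per_image) if per_image else set()
    return per_image, all_labels
```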
Step 1305, displaying a plurality of feature labels corresponding to the first object.
The feature labels corresponding to the first object can be displayed in a tiled manner in the area corresponding to the first object in the reference image, and features not included in the object images corresponding to the first object are not displayed in the area or are in a grayed state, i.e., cannot be selected by clicking.
In a specific example, as shown in fig. 2, if the user clicks the filter button 23 in the image processing interface 20, a display interface of feature labels as shown in fig. 6 may be opened, where a plurality of feature labels corresponding to the target person are displayed in a tiled manner in an area 61 corresponding to the target person in the reference image, where the area 61 may include an expression area 610 and an action area 620, and the user may select a feature label in the area 61.
At step 1306, a fourth input from the user to a target feature tag of the plurality of feature tags is received.
Here, the target feature tags may be tags arbitrarily selected by the user from the plurality of feature tags for screening the object images to be displayed, and the number of selected target feature tags may be one or more; that is, the user may select a single feature tag to screen for images satisfying it, or select several feature tags at once to screen for images satisfying all of them. The fourth input may be a selection input for the target feature tag.
Step 1307, in response to the fourth input, determines at least one first object image associated with the target feature tag from the plurality of object images corresponding to the first object.
Here, at least one first object image associated with the target feature tag may be automatically recognized based on the target feature tag, and the obtained at least one first object image may be displayed, thereby narrowing the selectable range of the object image corresponding to the first object.
In a specific example, as shown in fig. 6, the user may screen portraits by clicking the labels in the expression area 610 and the action area 620. Clicking the smile label 611 automatically identifies the portraits associated with the target person smiling at different moments, from which a satisfactory portrait associated with the smile label 611 can be chosen. The user may also select the smile label 611 and the jump label 621 at the same time to find a satisfactory portrait associated with both. In addition, after screening, the user can slide to the first portrait satisfying the target feature condition and click the confirm button 62 to select it and return to the processing interface of the reference image, or click the return button 63 to go back directly without changing the selection.
Therefore, the plurality of feature labels corresponding to the first object are generated based on the preset feature type, and after the target feature label is selected from the plurality of feature labels, at least one first object image associated with the target feature label can be screened out, so that the selection range of the user on the first object image can be narrowed, and the user can conveniently and quickly select the desired object image.
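Screening by the selected target feature labels then reduces to a set-containment test over the per-image labels produced above; selecting several labels at once simply tightens the condition. An illustrative sketch:

```python
def filter_by_labels(object_images, per_image_labels, selected):
    """Keep only the object images whose labels contain every selected label.

    `selected` is the set of target feature labels the user tapped,
    e.g. {'smile'} or {'smile', 'jump'}.
    """
    return [img for img, labels in zip(object_images, per_image_labels)
            if selected <= labels]
```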
In addition to the manner in which the image processing process is performed by the image photographer alone in the above-described embodiment, the image processing process may be performed in a manner in which a plurality of devices cooperate. Based on this, in a possible embodiment, as shown in fig. 7, after the step 110, the image processing method may further include steps 140 to 160, which are specifically as follows:
step 140, establishing a connection with the target terminal.
The target terminal may be other terminal devices participating in processing the image, and the number of terminals participating in the image processing process may be multiple. Specifically, the method for establishing connection with the target terminal includes, but is not limited to, the user of the target terminal establishing connection with the terminal device where the image is located by, for example, shaking the device, bluetooth, or inputting a password, and entering a mode of cooperatively processing the image.
In a specific example, as shown in fig. 8, after finishing shooting, the user may choose to process the photo collaboratively. After the user clicks and turns on the collaboration switch 81 at the lower left, the camera enters the composite co-shooting mode, and target terminal users within a certain distance range can join the co-shooting process by shaking their phones, via Bluetooth, or by entering a password to establish a connection.
Step 150, receiving a first instruction sent by the target terminal.
The first instruction may be a switching instruction for an object image corresponding to the second object in the reference image. Here, the second object may be any one of other objects than the first object among the plurality of objects.
For example, after the target terminal user completes the selection processing of the object image corresponding to the second object, the target terminal user may send a first instruction to the device that captures the image, that is, the local device, so that the local device switches the object image corresponding to the second object in the reference image to be the object image selected by the target terminal user. The process of selecting the plurality of object images corresponding to the second object by the target terminal user is similar to the process of selecting the plurality of object images corresponding to the first object, and is not repeated here.
Step 160, in response to the first instruction, replacing the object image corresponding to the second object in the reference image with another object image corresponding to the second object.
In the embodiment of the application, after receiving the first instruction sent by the target terminal, the local device may replace an object image corresponding to the second object in the reference image.
In a specific example, after shooting is completed, the user may choose collaborative processing and establish connections with target terminal users within a certain distance range to enter the collaborative processing mode. After a target terminal user finishes selecting, that terminal sends the first instruction, and the local device replaces the object image corresponding to the second object in the reference image with the object image selected by the target terminal user.
In this way, the images are processed in a common cooperation mode through establishing connection, so that the time for processing the images independently according to the personnel can be shortened, the workload for processing the images is reduced, and the efficiency for processing the images is improved.
In an optional implementation manner, before the step 150, the method may further include:
receiving a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for a second object in the reference image;
in response to a second instruction, a second object in the reference image is set to a user input disabled state.
In the embodiment of the application, after receiving a second instruction sent by the target terminal, the local device may set the second object in the reference image to a user input prohibition state, where the user input prohibition state may be presented as a grayed-out state. That is, the same object in the image can only be operated by one terminal at the same time, and if an object in the image is in an editing state, the object is grayed, that is, the object cannot be clicked for processing.
In a specific example, as shown in fig. 9, in the mode of cooperatively processing images, when the portrait 91 at the lower right corner of the image is operated by another terminal at the same time, the portrait 91 is in an editing state, and therefore, the portrait 91 is grayed out, that is, the portrait 91 cannot be clicked for processing.
In this way, the images are processed collaboratively over the established connections, and the user-input-disabled state ensures that the same object in the image can be operated on by only one terminal at a time. This shortens the time spent processing the image object by object and allows the group photo to be edited in a personalized way, so that every object in the reference image can show its best-state shooting effect more efficiently.
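The rule that one object can be edited by only one terminal at a time maps naturally onto a small lock table on the local device. The sketch below is an assumption about how the first and second instructions might be handled, not a description of the actual protocol:

```python
import threading

class CollaborationState:
    """Track which terminal is editing which object in the reference image."""

    def __init__(self):
        self._editing = {}             # object_id -> terminal_id
        self._lock = threading.Lock()

    def begin_edit(self, object_id, terminal_id) -> bool:
        """Second instruction: mark the object as being edited (grayed out
        for everyone else). Fails if another terminal already holds it."""
        with self._lock:
            if object_id in self._editing:
                return False
            self._editing[object_id] = terminal_id
            return True

    def apply_selection(self, object_id, terminal_id, chosen_crop, compositor):
        """First instruction: replace the object's region in the reference
        image with the crop chosen on the remote terminal, then release it."""
        with self._lock:
            if self._editing.get(object_id) != terminal_id:
                raise PermissionError("object is not locked by this terminal")
            compositor(object_id, chosen_crop)
            del self._editing[object_id]
```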
To better describe the whole scheme, based on the above various embodiments, as a specific example, as shown in fig. 10, the image processing method may include steps 1001 to 1010, which will be explained in detail below.
Step 1001, click the camera.
In one specific example, after the user clicks the camera, a page as shown in FIG. 11 with a composite co-photograph icon 92 is displayed.
Step 1002, click to start the composite group-photo mode.
In one specific example, the user clicks the composite co-photograph icon to start the composite co-photograph mode.
In step 1003, shooting is started for x seconds.
In one specific example, the shot lasts x seconds.
Step 1004, ending the shooting.
In one specific example, pressing the end shot button ends shooting.
Step 1005, select whether to composite the image collaboratively; if collaborative compositing is selected, step 1006 is executed; otherwise, step 1007 is executed.
In a specific example, there are two branches: if the user selects collaborative compositing, step 1006 is executed; otherwise, step 1007 is executed.
In step 1006, other terminals join in the combined image collaboration.
In one particular example, a user may engage in a co-photographing collaboration by shaking.
Step 1007, enter the image composition page.
In a specific example, the user enters the image composition page after finishing shooting or participating in the co-shooting cooperation to perform the subsequent screening operation.
Step 1008, screen the images according to expressions and actions.
In a specific example, the user may select a plurality of portraits corresponding to the target person by selecting corresponding expressions and action tags, so as to obtain one or more portraits satisfying the selection condition.
At step 1009, images are individually selected by person.
In one particular example, a portrait may be selected separately for each person.
Step 1010, generate a combined image.
In a specific example, after the user has replaced each selected portrait with the one in its best state, the user can click the generate button to produce the composite image.
In this way, based on the plurality of images shot at different moments, the object image corresponding to the first object in the reference image is replaced with an object image shot when the first object is in its best state, so that every object in the reference image can show its best-state shooting effect, and the image presentation effect can be improved.
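Putting steps 1001 to 1010 together, and reusing the segment_group_images and replace_object sketches above, a highly simplified end-to-end flow might look as follows. It assumes every object is detected in every image so that each object's crop list lines up with the burst of group images, and that the user's choices arrive as plain function arguments:

```python
def composite_group_photo(group_images, capture_times, reference_index, choices):
    """choices: {object_id: index of the preferred crop for that object}.

    Segments the burst, starts from the user-chosen reference image, and
    replaces every selected object region with the crop the user picked.
    """
    library = segment_group_images(group_images, capture_times)
    composite = group_images[reference_index].copy()
    for object_id, pick in choices.items():
        candidates = library[object_id]
        region = candidates[reference_index].box   # box in the reference image
        composite = replace_object(composite, region, candidates[pick].crop)
    return composite
```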
It should be noted that the application scenarios described in the embodiment of the present disclosure are for more clearly illustrating the technical solutions of the embodiment of the present disclosure, and do not constitute a limitation on the technical solutions provided in the embodiment of the present disclosure, and as a new application scenario appears, a person skilled in the art may know that the technical solutions provided in the embodiment of the present disclosure are also applicable to similar technical problems.
Based on the same inventive concept, the application also provides an image processing device. The image processing apparatus according to the embodiment of the present application will be described in detail below with reference to fig. 12.
Fig. 12 is a block diagram illustrating a configuration of an image processing apparatus according to an exemplary embodiment.
As shown in fig. 12, the image processing apparatus 1200 may include:
a determining module 1201, configured to acquire a plurality of images captured at different times with a plurality of objects as captured objects, and determine a reference image and a plurality of object images corresponding to each object from the plurality of images; the reference image comprises a plurality of objects, and the object image comprises a corresponding object;
a first receiving module 1202, configured to receive a first input of a user for a first object in a reference image; wherein the first object is any one of a plurality of objects;
a generating module 1203 is configured to replace the object image corresponding to the first object in the reference image with another object image corresponding to the first object in response to the first input, and generate a composite image.
The following describes the image processing apparatus 1200 in detail, specifically as follows:
in one embodiment, the generating module 1203 may specifically include:
a first display sub-module for displaying at least one first object image; the first object image is an object image corresponding to the first object;
the first receiving submodule is used for receiving second input of a user to a target object image in the first object image;
and a first generation sub-module configured to replace, in response to the second input, the object image corresponding to the first object in the reference image with the target object image, and generate a composite image.
In one embodiment, the first display sub-module includes:
an adding unit configured to add a plurality of first object images to the slide preview window in a case where the number of the first object images is plural; the sliding preview window is used for displaying a set number of first object images;
and a display unit for displaying a slide preview window in a region corresponding to the first object in the reference image.
In one embodiment, after displaying the sliding preview window in the area corresponding to the first object in the reference image, the first display sub-module further includes:
a receiving unit, configured to receive a third input based on the sliding preview window from the user;
and an updating unit for updating the first object image displayed in the sliding preview window in response to a third input.
In one embodiment, the generating module 1203 further includes:
a first obtaining sub-module configured to obtain, before displaying at least one first object image, a plurality of feature labels corresponding to the first object based on a plurality of object images corresponding to the first object;
a second display submodule for displaying a plurality of feature labels corresponding to the first object;
the second receiving submodule is used for receiving fourth input of a user to a target feature tag in the plurality of feature tags;
and the second generation submodule is used for responding to the fourth input and determining at least one first object image associated with the target characteristic label from a plurality of object images corresponding to the first object.
In one embodiment, the first obtaining sub-module includes:
an extraction unit configured to extract feature information corresponding to a preset feature type from a plurality of object images corresponding to a first object according to the preset feature type;
a generating unit configured to generate a plurality of feature labels corresponding to the first object based on the feature information.
In one embodiment, the image processing apparatus further comprises:
the connection module is used for establishing connection with a target terminal after a plurality of images shot at different moments are obtained by taking a plurality of objects as shooting objects, and a reference image and a plurality of object images corresponding to each object are determined from the plurality of images;
the second receiving module is used for receiving a first instruction sent by the target terminal; the first instruction is a switching instruction aiming at an object image corresponding to a second object in the reference image;
and the replacing module is used for responding to the first instruction and replacing the object image corresponding to the second object in the reference image with other object images corresponding to the second object.
In one embodiment, the image processing apparatus further comprises:
the third receiving module is used for receiving a second instruction sent by the target terminal before receiving the first instruction sent by the target terminal; wherein the second instruction is a processing instruction for a second object in the reference image;
and the setting module is used for responding to a second instruction and setting a second object in the reference image into a state of prohibiting user input.
In one embodiment, the determining module 1201 includes:
the second acquisition sub-module is used for acquiring a reference image which is obtained by shooting and contains a plurality of objects;
and the third acquisition sub-module is used for respectively taking each object in the plurality of objects as a shooting object and acquiring a plurality of object images which are shot at different moments and respectively correspond to the objects.
In one embodiment, the determining module 1201 includes:
the fourth acquisition sub-module is used for taking the plurality of objects as shooting objects and acquiring a plurality of images which are shot at different moments and contain the plurality of objects;
the third receiving submodule is used for receiving fifth input of a user to a target image in the plurality of images;
a determination submodule for determining the target image as a reference image in response to a fifth input;
and the segmentation submodule is used for segmenting each image in the plurality of images according to the object to obtain a plurality of object images corresponding to each object.
In this way, based on the plurality of images shot at different moments, the object image corresponding to the first object in the reference image is replaced with an object image shot when the first object is in its best state, so that every object in the reference image can show its best-state shooting effect, and the image presentation effect can be improved.
The image processing apparatus in the embodiments of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the like; the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, and the like; the embodiments of the present application are not particularly limited thereto.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 11, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 13, an electronic device 1300 is further provided in an embodiment of the present application, and includes a processor 1301, a memory 1302, and a program or an instruction stored in the memory 1302 and capable of running on the processor 1301, where the program or the instruction is executed by the processor 1301 to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1400 includes, but is not limited to: radio unit 1401, network module 1402, audio output unit 1403, input unit 1404, sensor 1405, display unit 1406, user input unit 1407, interface unit 1408, memory 1409, and processor 1410.
Those skilled in the art will appreciate that the electronic device 1400 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1410 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The input unit 1404 is configured to acquire a plurality of images captured at different times with a plurality of objects as imaging objects, and determine a reference image and a plurality of object images corresponding to each object from the plurality of images.
A user input unit 1407 for receiving a first input by a user for a first object in the reference image.
A processor 1410 configured to replace an object image corresponding to the first object in the reference image with another object image corresponding to the first object in response to the first input, and generate a composite image.
In this way, based on the plurality of images shot at different moments, the object image corresponding to the first object in the reference image is replaced with an object image shot when the first object is in its best state, so that every object in the reference image can show its best-state shooting effect, and the image presentation effect can be improved.
Optionally, the display unit 1406 is further configured to display at least one first object image.
Optionally, the user input unit 1407 is further configured to receive a second input of the target object image in the first object image by the user.
Optionally, the processor 1410 is further configured to replace the object image corresponding to the first object in the reference image with the target object image in response to a second input, and generate a composite image.
Optionally, the processor 1410 is further configured to add a plurality of first object images to the sliding preview window in a case that the number of first object images is plural.
Optionally, the display unit 1406 is further configured to display a sliding preview window in an area corresponding to the first object in the reference image when the number of the first object images is multiple.
Optionally, the user input unit 1407 is further configured to receive a third input from the user based on the sliding preview window after the sliding preview window is displayed in the area corresponding to the first object in the reference image.
Optionally, the processor 1410 is further configured to update the first object image displayed in the sliding preview window in response to a third input after displaying the sliding preview window in a region corresponding to the first object in the reference image.
Optionally, the input unit 1404 is further configured to, before displaying the at least one first object image, acquire a plurality of feature labels corresponding to the first object based on a plurality of object images corresponding to the first object.
Optionally, the display unit 1406 is further configured to display a plurality of feature labels corresponding to the first object before displaying the at least one first object image.
Optionally, the user input unit 1407 is further configured to receive a fourth input of the target feature tag of the plurality of feature tags from the user before displaying the at least one first object image.
Optionally, the processor 1410 is further configured to determine, in response to a fourth input, at least one first object image associated with the target feature tag from a plurality of object images corresponding to the first object before displaying the at least one first object image.
Optionally, the processor 1410 is further configured to extract feature information corresponding to a preset feature type from a plurality of object images corresponding to the first object according to the preset feature type.
Optionally, the processor 1410 is further configured to generate a plurality of feature labels corresponding to the first object according to the feature information.
Optionally, the network module 1402 is further configured to establish a connection with the target terminal after acquiring a plurality of images captured at different times with a plurality of objects as the capturing objects, and determining a reference image and a plurality of object images corresponding to each object from the plurality of images.
Alternatively, the user input unit 1407 is further configured to receive the first instruction sent by the target terminal after acquiring a plurality of images captured at different times with a plurality of objects as the capturing objects, and determining the reference image and a plurality of object images corresponding to each object from the plurality of images.
Optionally, the processor 1410 is further configured to, after acquiring a plurality of images captured at different time points with a plurality of objects as capturing objects, and determining a reference image and a plurality of object images respectively corresponding to each object from the plurality of images, replace an object image corresponding to a second object in the reference image with another object image corresponding to the second object in response to the first instruction.
Optionally, the user input unit 1407 is further configured to receive a second instruction sent by the target terminal before receiving the first instruction sent by the target terminal.
Optionally, the processor 1410 is further configured to set the second object in the reference image to a state of prohibiting user input in response to the second instruction before receiving the first instruction sent by the target terminal.
Optionally, the input unit 1404 is further configured to acquire a reference image obtained by shooting and including a plurality of objects.
Optionally, the input unit 1404 is further configured to take each of the plurality of objects as a shooting object, and obtain a plurality of object images respectively corresponding to each of the objects, which are shot at different times.
Optionally, the input unit 1404 is further configured to take a plurality of objects as shooting objects, and obtain a plurality of images including the plurality of objects shot at different times.
Optionally, the user input unit 1407 is further configured to receive a fifth input of the target image in the plurality of images from the user.
Optionally, the processor 1410 is further configured to determine the target image as the reference image in response to a fifth input.
Optionally, the processor 1410 is further configured to segment each of the plurality of images according to the object, so as to obtain a plurality of object images corresponding to each object.
In this way, targeted screening by preset feature types is performed on the object images of the first object shot at different moments, so that a satisfactory first object image associated with the target feature label can be picked out and the group photo can be edited in a personalized way, allowing every object in the reference image to show its best-state shooting effect more efficiently.
It should be understood that, in the embodiment of the present application, the input unit 1404 may include a graphics processing unit (GPU) 14041 and a microphone 14042. The graphics processing unit 14041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1407 includes a touch panel 14071 and other input devices 14072. The touch panel 14071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1409 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1410.
An embodiment of the present application further provides a readable storage medium. A program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, each process of the above image processing method embodiment is implemented and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or instruction to implement each process of the above image processing method embodiment and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific embodiments described above, which are merely illustrative rather than restrictive. Various changes may be made by those skilled in the art without departing from the spirit of the present application and the scope of protection defined by the appended claims, and all such changes fall within the protection of the present application.

Claims (12)

1. An image processing method, comprising:
the method comprises the steps of taking a plurality of objects as shooting objects, obtaining a plurality of images shot at different moments, and determining a reference image and a plurality of object images corresponding to each object from the plurality of images; wherein the reference image comprises the plurality of objects, and the object image comprises a corresponding object;
receiving a first input of a user for a first object in the reference image; wherein the first object is any one of the plurality of objects;
in response to the first input, replacing an object image corresponding to the first object in the reference image with another object image corresponding to the first object, and generating a composite image.
2. The method according to claim 1, wherein the generating a composite image by replacing the object image corresponding to the first object in the reference image with another object image corresponding to the first object comprises:
displaying at least one first object image; wherein the first object image is an object image corresponding to the first object;
receiving a second input of a user to a target object image in the first object image;
and in response to the second input, replacing the object image corresponding to the first object in the reference image with the target object image, and generating a composite image.
3. The method according to claim 2, wherein in a case where the number of the first object images is plural, the displaying at least one first object image includes:
adding a plurality of the first object images to a sliding preview window; wherein the sliding preview window is used for displaying a set number of the first object images;
and displaying the sliding preview window in an area corresponding to the first object in the reference image.
4. The method of claim 3, wherein after displaying the sliding preview window in the area of the reference image corresponding to the first object, the method further comprises:
receiving a third input of the user based on the sliding preview window;
updating the first object image displayed in the sliding preview window in response to the third input.
5. The method of claim 2, wherein prior to displaying at least one first object image, the method further comprises:
acquiring a plurality of feature tags corresponding to the first object based on a plurality of object images corresponding to the first object;
displaying a plurality of feature tags corresponding to the first object;
receiving a fourth input of a user to a target feature tag of the plurality of feature tags;
in response to the fourth input, determining at least one first object image associated with the target feature tag from a plurality of object images corresponding to the first object.
6. The method of claim 5, wherein the acquiring a plurality of feature tags corresponding to the first object based on a plurality of object images corresponding to the first object comprises:
extracting, according to a preset feature type, feature information corresponding to the preset feature type from the plurality of object images corresponding to the first object;
and generating a plurality of feature tags corresponding to the first object according to the feature information.
7. The method according to any one of claims 2 to 6, wherein after taking a plurality of objects as the shooting objects, acquiring a plurality of images captured at different times, and determining a reference image and a plurality of object images respectively corresponding to each object from the plurality of images, the method further comprises:
establishing connection with a target terminal;
receiving a first instruction sent by the target terminal; wherein the first instruction is a switching instruction for an object image corresponding to a second object in the reference image;
and replacing the object image corresponding to the second object in the reference image with another object image corresponding to the second object in response to the first instruction.
8. The method of claim 7, wherein before receiving the first instruction sent by the target terminal, the method further comprises:
receiving a second instruction sent by the target terminal; wherein the second instruction is a processing instruction for a second object in the reference image;
setting the second object in the reference image to a user input disabled state in response to the second instruction.
9. The method according to claim 1, wherein the taking a plurality of objects as the shooting objects, acquiring a plurality of images captured at different times, and determining a reference image and a plurality of object images respectively corresponding to each object from the plurality of images comprises:
acquiring a reference image which is obtained by shooting and contains the plurality of objects;
and respectively taking each object in the plurality of objects as a shooting object, and acquiring a plurality of object images which are shot at different moments and respectively correspond to the objects.
10. The method according to claim 1, wherein the taking a plurality of objects as the shooting objects, acquiring a plurality of images captured at different times, and determining a reference image and a plurality of object images respectively corresponding to each object from the plurality of images comprises:
taking the plurality of objects as shooting objects, and acquiring a plurality of images which are shot at different moments and contain the plurality of objects;
receiving a fifth input of a user to a target image in the plurality of images;
determining the target image as a reference image in response to the fifth input;
and segmenting each image in the plurality of images according to the object to obtain a plurality of object images corresponding to each object.
11. An apparatus for image processing, the apparatus comprising:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for taking a plurality of objects as shooting objects, acquiring a plurality of images shot at different moments, and determining a reference image and a plurality of object images corresponding to each object from the plurality of images; wherein the reference image comprises the plurality of objects, and the object image comprises a corresponding object;
a first receiving module, configured to receive a first input of a user for a first object in the reference image; wherein the first object is any one of the plurality of objects;
and a generation module, configured to replace, in response to the first input, an object image corresponding to the first object in the reference image with another object image corresponding to the first object, and generate a composite image.
12. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 10.
CN202110595299.XA 2021-05-28 2021-05-28 Image processing method and device and electronic equipment Pending CN113347355A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110595299.XA CN113347355A (en) 2021-05-28 2021-05-28 Image processing method and device and electronic equipment
PCT/CN2022/094353 WO2022247766A1 (en) 2021-05-28 2022-05-23 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110595299.XA CN113347355A (en) 2021-05-28 2021-05-28 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113347355A true CN113347355A (en) 2021-09-03

Family

ID=77472611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110595299.XA Pending CN113347355A (en) 2021-05-28 2021-05-28 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113347355A (en)
WO (1) WO2022247766A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8515137B2 (en) * 2010-05-03 2013-08-20 Microsoft Corporation Generating a combined image from multiple images
CN104967637B * 2014-07-07 2017-11-14 Tencent Technology (Shenzhen) Co., Ltd. Operation processing method, device and terminal
CN106204435A * 2016-06-27 2016-12-07 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method and device
US10896493B2 * 2018-11-13 2021-01-19 Adobe Inc. Intelligent identification of replacement regions for mixing and replacing of persons in group portraits
JP7034969B2 * 2019-02-22 2022-03-14 Fujifilm Corporation Image processing equipment, image processing methods, programs and recording media
JP7247136B2 * 2020-03-19 2023-03-28 Fujifilm Corporation Image processing device, image processing method, and image processing program
CN113347355A (en) * 2021-05-28 2021-09-03 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image processing method and device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004247983A (en) * 2003-02-14 2004-09-02 Konica Minolta Holdings Inc Photographing apparatus, image processing apparatus, and image processing program
US20040179125A1 (en) * 2003-03-13 2004-09-16 Olympus Corporation Imaging apparatus
CN106027900A * 2016-06-22 2016-10-12 Vivo Mobile Communication Co., Ltd. Photographing method and mobile terminal
CN106454121A * 2016-11-11 2017-02-22 Nubia Technology Co., Ltd. Double-camera shooting method and device
CN108513069A * 2018-03-30 2018-09-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, device, storage medium and electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022247766A1 (en) * 2021-05-28 2022-12-01 维沃移动通信(杭州)有限公司 Image processing method and apparatus, and electronic device

Also Published As

Publication number Publication date
WO2022247766A1 (en) 2022-12-01

Similar Documents

Publication Publication Date Title
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112165553B (en) Image generation method and device, electronic equipment and computer readable storage medium
CN112954196B (en) Shooting method, shooting device, electronic equipment and readable storage medium
KR20140098009A (en) Method and system for creating a context based camera collage
CN113093968A (en) Shooting interface display method and device, electronic equipment and medium
CN112714257B (en) Display control method, display control device, electronic device, and medium
CN113794834B (en) Image processing method and device and electronic equipment
CN111722775A (en) Image processing method, device, equipment and readable storage medium
CN111770386A (en) Video processing method, video processing device and electronic equipment
CN113794829A (en) Shooting method and device and electronic equipment
CN112449110A (en) Image processing method and device and electronic equipment
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113207038B (en) Video processing method, video processing device and electronic equipment
CN113271378B (en) Image processing method and device and electronic equipment
WO2022247766A1 (en) Image processing method and apparatus, and electronic device
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113794831A (en) Video shooting method and device, electronic equipment and medium
CN112734661A (en) Image processing method and device
CN111800574A (en) Imaging method and device and electronic equipment
CN114143455B (en) Shooting method and device and electronic equipment
CN113873081B (en) Method and device for sending associated image and electronic equipment
CN113542599A (en) Image shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination