CN109120851B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN109120851B
Application number
CN201811105878.6A
Authority
CN (China)
Legal status
Active
Other languages
Chinese (zh)
Other versions
CN109120851A (application publication)
Inventor
邝平
Assignee (original and current)
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd; application published as CN109120851A; application granted and published as CN109120851B.

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/80: Camera processing pipelines; components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image processing method and apparatus, and an electronic device. The method comprises the following steps: acquiring, through a first image corresponding to at least one first object, a second image corresponding to at least one second object, wherein the second image is generated based on a reserved position of the at least one second object in the first image; and synthesizing, based on the first image and the second image, an image including the at least one first object and the at least one second object. In this application, the first image containing the first object provides a reserved position for the second object in the second image, so that after the two are combined into the same image the first object and the second object each occupy an appropriate position, thereby eliminating the sense of incongruity that compositing usually causes.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Photographing is an important way for people to record daily life. In some shooting scenes, however, it is unavoidable that someone who should appear in the photograph is absent.
The current solution is to use drawing software to composite the absent person or object into the photographed image afterwards. In practice, however, the absent subject is rarely taken into account when the shot is framed, so even with drawing software it is difficult to blend the absent subject into the photographed image without a sense of incongruity, and the photographed result is unsatisfactory.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, and an electronic device, which solve the prior-art problem of the sense of incongruity caused by compositing people or objects into a photographed image.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
synthesizing an image including the at least one first object and the at least one second object based on the first image and the second image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
a compositing module to composite an image including the at least one first object and the at least one second object based on the first image and the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method provided by the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the image processing method provided by the embodiments of the present application.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
in the application, the first object in the first image and the second object in the second image are each positioned according to the reserved position, so that after they are combined into the same image each occupies an appropriate position, thereby eliminating the sense of incongruity caused by compositing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2A is a first schematic diagram of an image processing method in a first implementation manner according to an embodiment of the present application;
fig. 2B is a second schematic diagram of an image processing method in a first implementation manner according to the embodiment of the present application;
fig. 2C is a third schematic diagram of an image processing method in a first implementation manner according to an embodiment of the present application;
fig. 2D is a fourth schematic diagram of an image processing method in a first implementation manner according to an embodiment of the present application;
fig. 3A is a first schematic diagram of an image processing method in a second implementation manner according to an embodiment of the present application;
fig. 3B is a second schematic diagram of an image processing method in a second implementation manner according to the embodiment of the present application;
fig. 3C is a third schematic diagram of an image processing method in the second implementation manner according to an embodiment of the present application;
fig. 4A is a first schematic diagram of an image processing method in the third implementation manner according to an embodiment of the present application;
fig. 4B is a second schematic diagram of an image processing method in the third implementation manner according to an embodiment of the present application;
fig. 4C is a third schematic diagram of an image processing method in the third implementation manner according to an embodiment of the present application;
fig. 5 is a schematic diagram of an image processing method in the fourth implementation manner according to an embodiment of the present application;
fig. 6 is a schematic logical structure diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a technical scheme for eliminating the discomfort caused by the synthesis of people or objects to the image.
In one aspect, an embodiment of the present application provides an image processing method, as shown in fig. 1, including:
102, acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
in this step, the image capturing interface of the first image is different from the image capturing interface of the second image, that is, the first image and the second image are obtained at different times and/or by different devices. The first object and the second object may be one or a plurality of objects. Furthermore the first object and the second object may be elements that may appear in any image of a person, an object, an animal, a building, etc. For the convenience of understanding of the present solution, the following description is given by way of example.
It should be noted that the first object and the second object are not specifically limited in the embodiments of the present application. When there are multiple first objects, they may be identical to or different from one another; similarly, multiple second objects may be identical to or different from one another. In addition, the first object may be the same kind of element as the second object, or a different kind.
Step 104, synthesizing an image comprising the at least one first object and the at least one second object based on the first image and the second image.
In this step, the synthesized image may be obtained by adding the second object from the second image onto the first image used as the background; or by adding the first object from the first image onto the second image used as the background; or by adding both the first object from the first image and the second object from the second image onto a third image used as the background.
In this embodiment, the first object in the first image and the second object in the second image are located at the reserved positions, so that the first object and the second object are respectively located at appropriate positions in the images after being combined into the same image, and further the sense of incongruity caused by the combination is eliminated.
On this basis, in order to further reduce the sense of incongruity when the first object and the second object are composited into the same image, the image acquisition parameters of the first image should be the same as, or approximately the same as, those of the second image.
Wherein, the image acquisition parameters may be, but are not limited to: exposure parameters, focal length parameters, acquisition distance parameters (distance between the acquisition position and the object), acquisition angle parameters (angle between the acquisition position and the object), and the like.
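The patent does not specify how "the same or approximately the same" is checked. As a hedged sketch, the parameters listed above could be compared within per-parameter tolerances; the parameter names, units, and tolerance values below are illustrative assumptions, not taken from the source:

```python
def params_match(first, second, tolerances=None):
    """Return True if the second capture's parameters are close enough to
    the first capture's for a low-incongruity composite (illustrative)."""
    if tolerances is None:
        tolerances = {
            "exposure_ev": 0.3,      # exposure difference in EV stops
            "focal_length_mm": 1.0,  # focal length in millimetres
            "distance_m": 0.5,       # camera-to-subject distance in metres
            "angle_deg": 5.0,        # camera-to-subject angle in degrees
        }
    return all(
        abs(first[key] - second[key]) <= tol
        for key, tol in tolerances.items()
    )

# Parameters recorded for the first shot, and for a candidate second shot:
photo_a = {"exposure_ev": 0.0, "focal_length_mm": 28.0,
           "distance_m": 3.0, "angle_deg": 0.0}
photo_b = {"exposure_ev": 0.1, "focal_length_mm": 28.0,
           "distance_m": 3.2, "angle_deg": 2.0}
```

Here `params_match(photo_a, photo_b)` holds, so the two shots could be composited with little incongruity; a second shot taken from a very different distance would fail the check.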
Obviously, the closer the image acquisition parameters of the first image and the second image are to each other, the closer the result is to the first object and the second object having been captured through a single image acquisition interface, so the synthesized image shows less incongruity.
On this basis, the image processing method of this embodiment may further help the user determine the reserved position for the second object; that is, before step 102, the method further includes:
step 100, obtaining contour data of at least one second object;
in this step, the contour data of the second object may be any data for determining the contour of the second object; taking the second object as an example, the contour data of the second object may include height data, weight data, three-dimensional data, etc. of the second object.
Step 101, determining a reserved position of the at least one second object in the first image based on the contour data.
In this step, the area of the reserved position determined based on the contour data of the second object may be slightly larger than, but close to, the area of the second object's contour, so that the second object matches the reserved position.
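The patent leaves the mapping from contour data to a reserved-position size unspecified. One plausible sketch, assuming a simple pinhole-projection model and that the contour data reduces to a real-world height and width, pads the projected extent slightly so the reserved area is larger than, but close to, the subject's contour. The function name, margin factor, and parameters here are hypothetical:

```python
def reserved_box_size(height_m, width_m, focal_px, distance_m, margin=1.1):
    """Project the subject's real-world extent onto the image (pinhole
    model: pixels = focal_px * size / distance) and pad it by `margin`
    so the reserved area slightly exceeds the subject's contour."""
    box_h = int(focal_px * height_m / distance_m * margin)
    box_w = int(focal_px * width_m / distance_m * margin)
    return box_w, box_h

# A 1.70 m tall, 0.50 m wide person, focal length 1000 px, shot from 3 m:
size = reserved_box_size(1.7, 0.5, 1000, 3.0)  # -> (183, 623) pixels
```

The same acquisition parameters (focal length, distance) recorded for the first image would then be reused when photographing the absent person, keeping the projected scales consistent.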
After the reserved position is determined, it may be displayed in the image capturing interface of the first image in step 102, so that the standing positions of the first object can be arranged sensibly around the reserved position indicated in the interface, leaving compositing space for the second object.
For example, when the first object is captured, the display scale of the first object is adjusted according to the reserved position displayed in the image acquisition interface so that the two scales are close. In this way, after the second object and the first object are composited into the same image, neither appears obviously obtrusive relative to the other.
It should be noted that displaying the reserved position on the image capturing interface may be provided as a built-in function of the image capturing device. Moreover, the reserved position described in this embodiment may be displayed not only in the image capturing interface of the first image but also in that of the second image.
That is, when the second object is captured, the reserved position is also displayed in the image acquisition interface of the second image to guide the second object where to stand. For example, the second object is directed to stand at the reserved position shown in the image acquisition interface of the second image.
Further, the display position of the reserved position in the image acquisition interface may be adjusted by the user: for example, after a sliding operation on the reserved position is detected, the reserved position is moved in the image acquisition interface along the sliding track of that operation. By laying out the reserved position sensibly, the user can more flexibly obtain a satisfactory photograph.
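The sliding adjustment described above can be sketched as translating the reserved box by the drag delta while clamping it so it stays inside the interface. This is an illustrative sketch, not the patent's implementation; `move_reserved_box` and its parameters are assumed names:

```python
def move_reserved_box(box, drag_dx, drag_dy, frame_w, frame_h):
    """Translate the reserved box (x, y, w, h) by a drag delta, clamped
    so the box remains fully inside the image acquisition interface."""
    x, y, w, h = box
    x = max(0, min(x + drag_dx, frame_w - w))
    y = max(0, min(y + drag_dy, frame_h - h))
    return (x, y, w, h)

# Drag a 50x80 box far right and up in a 640x480 frame; it clamps to the edges:
box = move_reserved_box((10, 10, 50, 80), 1000, -1000, 640, 480)
# -> (590, 0, 50, 80)
```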
Obviously, if the first object of the first image and the second object of the second image are both positioned according to the reserved positions, the first object and the second object can be composited directly into the same image by current matting technology, without modifying either image; the whole compositing process can therefore be executed by the terminal device.
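The matting step itself is not detailed in the patent. A minimal toy sketch of the compositing it enables: given a matted cutout of the second object, with background pixels marked by a sentinel value, paste its foreground pixels onto the first image at the reserved position's offset. Images are 2-D lists of pixel values here for self-containment; all names are assumptions:

```python
def composite(background, cutout, ox, oy, transparent=0):
    """Paste a matted cutout onto the background at offset (ox, oy),
    skipping pixels equal to `transparent`. Returns a new image; the
    inputs are not modified, mirroring the patent's "without modifying
    the image" remark."""
    out = [row[:] for row in background]
    for j, row in enumerate(cutout):
        for i, px in enumerate(row):
            if px != transparent:
                out[oy + j][ox + i] = px
    return out

bg = [[1, 1, 1, 1] for _ in range(3)]   # first image (all pixels = 1)
cutout = [[0, 9], [9, 9]]               # matted second object (0 = transparent)
result = composite(bg, cutout, 1, 1)    # paste at the reserved offset (1, 1)
```

In a real pipeline the cutout would come from a matting model and the images would be RGB arrays, but the offset-paste structure is the same.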
The image processing method of the present embodiment performed by the terminal device will be described in detail below.
In practical applications, the image processing method of this embodiment may be applied to a first terminal (such as a mobile phone, a camera, and other devices) having a photographing function, and when the first terminal executes the image processing method of this embodiment, the following implementation manners may be included:
implementation mode one
The first image is obtained by the first terminal photographing at least one first object, and the second image is obtained by the first terminal photographing at least one second object based on the image acquisition parameters of the first image. The first terminal then synthesizes the first image and the second image to obtain an image including the at least one first object and the at least one second object.
By way of exemplary presentation:
All the persons to be photographed are in the same place, but one of them must step out in turn to operate the camera, so in each round of photographing one person is absent from the frame.
As shown in fig. 2A, assume that person a is responsible for the first round of photographing. When the standing positions are arranged, a reserved position a is determined for person a in advance; person a selects the image acquisition parameters for the shot (such as angle, distance, background, camera-related parameters, and the like) and completes the shot, obtaining photo a. Person a then returns to reserved position a, and person b steps out of the group to take the next photo.
As shown in fig. 2B, before person b takes the photo, a reserved position b is determined for person b in advance. Since the camera has recorded the relevant image acquisition parameters from the previous round, it can guide person b in adjusting the camera's position, angle, parameters, and so on to complete the shot. The remaining members of the group proceed in the same way.
Then, as shown in fig. 2C, the photos taken in each round are combined into a group photo containing everyone; because the relevant image acquisition parameters were consistent across the rounds, a nearly seamless compositing result can be achieved.
As shown in fig. 2D, the execution flow of the first implementation manner includes:
step S201, all the personnel on the team stand.
Step S202, the team does not repeat a photographing person and determines the reserved position of the photographing person.
Step S203, the photographer adjusts the image acquisition parameters of the camera.
And step S204, completing the photographing by the photographing personnel and reserving the image acquisition parameters of the photographing.
Step S205, judging whether all the team personnel come out to finish the photographing; if yes, go to step 206, otherwise go back to step 201.
Step S206, the photographing is finished, and all the photos are automatically synthesized
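The rotating-photographer flow of steps S201 to S206 can be sketched as a loop in which each member in turn steps out of frame to shoot the rest; `capture` stands in for the camera (and its recorded acquisition parameters), and all names are hypothetical:

```python
def rotating_photographer_session(people, capture):
    """Each person in turn steps out to photograph everyone else; earlier
    photographers are back at their reserved positions and thus in frame.
    Returns one photo record per round, mirroring steps S201-S206."""
    photos = []
    for photographer in people:
        in_frame = [p for p in people if p != photographer]
        photos.append(capture(photographer, in_frame))
    return photos

# Toy camera: a photo record is (photographer, people in frame).
photos = rotating_photographer_session(
    ["a", "b", "c"],
    lambda photographer, in_frame: (photographer, tuple(in_frame)),
)
```

Across the rounds, every person appears in all photos except the one they took themselves, which is what allows the final automatic synthesis to produce a group photo containing everyone.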
Implementation mode two
In this implementation, the first terminal obtains a first image and the image acquisition parameters of the first image, and captures at least one second object according to those image acquisition parameters to obtain a second image: with the first image as the background of the image acquisition interface, the first terminal photographs the at least one second object based on the reserved position of the at least one second object, obtaining an image that includes the at least one first object and the at least one second object.
By way of exemplary presentation:
In the case where one person is absent from the group photo, a position is reserved for the absent person before the photo is taken. The size of the reserved position depends on the absent person's contour data (body-shape feature data such as height, weight, and body measurements). When the mobile phone takes the photo, it can calculate the size of the position to be reserved from the contour data entered by the user and the image acquisition parameters set for the shot, guide the user to complete the shot as shown in fig. 3A, and store the image acquisition parameters used.
Then the absent person uses that photo as the background of the mobile phone's image acquisition interface, as shown in fig. 3B; the camera reads the stored image acquisition parameters, applies that configuration, and guides the user so that the absent person is photographed into the reserved position.
As shown in fig. 3C, the execution flow of the second implementation mode includes:
and S301, selecting the shooting position of the mobile phone.
Step S302, inputting the outline data of the absent person, and determining the reserved position of the absent person.
Step S303, displaying the reserved position in the image acquisition interface of the mobile phone, and taking pictures of the attendees and storing the image acquisition parameters according to the indication of the reserved position.
Step S304, setting the photo taken in step S303 as the background of the image acquisition interface, and having the absent person stand at the reserved position shown in the image acquisition interface.
And S305, applying the image acquisition parameters saved in the step S303 to photograph the absent person.
Implementation mode three
The first image is obtained by the first terminal capturing at least one first object, and the field data of the captured first image (including the image acquisition parameters and the reserved-position information) is sent to a second terminal, so that the user of the second terminal captures at least one second object with the second terminal as indicated by the field data (the indicated image acquisition parameters and reserved position), obtaining a second image. The first terminal acquires the second image from the second terminal in real time, adds the at least one second object in the second image to the image acquisition interface currently used to capture the first image, and takes the photo, obtaining an image that includes the at least one first object and the at least one second object.
By way of exemplary presentation:
Referring to fig. 4A and 4B, the absent person's mobile phone establishes a real-time connection with an attendee's mobile phone and transmits the absent person's image data to it in real time; the attendee's mobile phone, using that image data as part of the shooting background, then photographs the absent person together with the other people.
As shown in fig. 4C, the execution flow of the third implementation manner includes:
in step S401, the absent person 'S cell phone establishes a connection with the attendee' S cell phone.
In step S402, the attendee' S cell phone captures an image of the attendee at the photo location.
In step S403, the mobile phone of the attendant transmits the live data of the captured image to the mobile phone of the absent person.
And S404, guiding the absent person to stand according to the received field data by the mobile phone of the absent person, and sending the captured image of the absent person to the mobile phone of the attendee.
In step S405, the cell phone of the attendee takes the image of the absent person as a photographing background, and finishes photographing with other people.
Implementation mode four
The first image may be obtained by the second terminal photographing at least one first object, with the first terminal acquiring the first image and its image acquisition parameters from the second terminal. The first terminal photographs at least one second object based on the image acquisition parameters of the first image and the reserved position of the at least one second object to obtain a second image. The first terminal then synthesizes the first image and the second image to obtain an image including the at least one first object and the at least one second object.
By way of exemplary presentation:
As shown in fig. 5, a position is reserved for the absent person before the photo of those present is taken, the size of the reserved position being calculated from the absent person's contour data entered by the user; photo a is then taken. The mobile phone stores the image acquisition parameters used for the shot and sends them to the absent person's mobile phone; the absent person uses that phone to photograph himself or herself according to the image acquisition parameters and the reserved position, obtaining photo b. The mobile phone then automatically synthesizes photo a and photo b.
It can be seen from the different implementations above that the image processing method of this embodiment can directly obtain an image of all the people, without the user having to perform compositing with drawing software afterwards. On the one hand, this greatly saves the user's time and effort; on the other hand, the user is not required to have drawing skills, so the method has a wider range of application and greater practicality.
The above is an introduction of the image processing method of the present embodiment, and accordingly, an embodiment of the present application further provides an image processing apparatus, as shown in fig. 6, including:
an obtaining module 61, configured to obtain, through a first image corresponding to at least one first object, a second image corresponding to at least one second object, where the second image is generated based on a reserved position of the at least one second object in the first image;
a composition module 62 for composing an image comprising the at least one first object and the at least one second object based on the first image and the second image.
In this embodiment, the first object in the first image and the second object in the second image are located at the reserved positions, so that the first object and the second object are respectively located at appropriate positions in the images after being combined into the same image, and further the sense of incongruity caused by the combination is eliminated.
Wherein the first image comprises one or more first objects and the second image comprises one or more second objects.
Wherein the image acquisition parameters of the first image are the same as the image acquisition parameters of the second image.
On the basis of the above, the image processing apparatus of the present embodiment may further include:
the reserved position determining module is used for acquiring contour data of at least one second object before a first image corresponding to the at least one first object and a second image corresponding to the at least one second object are acquired, and determining a reserved position of the at least one second object in the first image based on the contour data.
The acquisition module is further configured to display a reserved position of the at least one second object in the first image in an image acquisition interface of the second image.
The acquisition module is further configured to use the first image or at least one first object in the first image as a background of an image acquisition interface of the second image.
Obviously, the image processing apparatus of the present embodiment is the main execution body of the image processing method provided in the present application, and therefore the image processing apparatus of the present embodiment can also achieve the technical effects that the image processing method can achieve.
Therefore, it should be understood that the image processing apparatus of the present embodiment can implement the functions and steps shown in fig. 1, fig. 2D, and fig. 3C, and detailed description thereof is omitted here.
In addition, another embodiment of the present invention also provides an electronic device, as shown in fig. 7, including:
at least one processor 701, a memory 702, at least one network interface 704, and a user interface 703. The various components in the terminal 700 are coupled together by a bus system 705. It is understood that the bus system 705 is used to enable communications among these components. The bus system 705 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled in fig. 7 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 702 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described in this embodiment of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 702 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In an embodiment of the present invention, the electronic device 700 further includes: a computer program stored on a memory 702 and executable on a processor 701, the computer program when executed by the processor 701 implementing the steps of:
acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
synthesizing an image including the at least one first object and the at least one second object based on the first image and the second image.
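The two steps implemented by the computer program can be illustrated with a minimal sketch (Python with NumPy; the function name, the mask-based blend, and the array layout are assumptions introduced for illustration, not part of the claimed method):

```python
import numpy as np

def synthesize(first_image, second_image, mask):
    """Composite the at least one second object into the first image.

    `mask` is an H x W boolean array marking the reserved position of
    the second object in the first image: pixels inside the mask are
    taken from the second image, all other pixels from the first.
    (A mask-based blend is an illustrative choice; the patent does not
    prescribe a specific compositing operation.)
    """
    first = np.asarray(first_image)
    second = np.asarray(second_image)
    out = first.copy()
    out[mask] = second[mask]  # fill the reserved position from the second image
    return out
```

In practice the mask would be derived from the reserved position determined below; here it is simply passed in.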
Optionally, the first image comprises one or more first objects and the second image comprises one or more second objects.
Optionally, the image acquisition parameters of the first image are the same as the image acquisition parameters of the second image.
Optionally, before acquiring a second image corresponding to at least one second object through the first image corresponding to at least one first object, the computer program when executed by the processor 701 implements the following steps:
acquiring contour data of the at least one second object;
determining a reserved position of the at least one second object in the first image based on the contour data.
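As one illustrative realization of this step (the patent does not prescribe how the reserved position is represented), the contour data could be reduced to a bounding rectangle; the sketch below assumes the contour is given as an (N, 2) array of (x, y) points:

```python
import numpy as np

def reserved_position(contour):
    """Derive a rectangular reserved position (x, y, width, height)
    from the contour data of the second object.

    `contour` is an (N, 2) array-like of (x, y) points. A bounding
    rectangle is one simple realization; the actual reserved position
    could follow the contour shape itself.
    """
    pts = np.asarray(contour)
    x, y = pts.min(axis=0)      # top-left corner of the contour
    x2, y2 = pts.max(axis=0)    # bottom-right corner of the contour
    return int(x), int(y), int(x2 - x), int(y2 - y)
```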
Optionally, when acquiring the second image corresponding to the at least one second object through the first image corresponding to the at least one first object, the computer program when executed by the processor 701 implements the following step:
displaying the reserved position of the at least one second object in the first image in an image acquisition interface of the second image.
Optionally, when acquiring the second image corresponding to the at least one second object through the first image corresponding to the at least one first object, the computer program when executed by the processor 701 implements the following step:
the first image or at least one first object in the first image is used as a background of an image acquisition interface of the second image.
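A translucent overlay is one plausible way to render the first image as the background of the capture interface; the sketch below assumes same-size uint8 frames and a hypothetical blending factor `alpha` (the patent does not specify how the background is rendered):

```python
import numpy as np

def preview_with_background(live_frame, first_image, alpha=0.35):
    """Overlay the previously captured first image onto the live
    capture frame, so the photographer can align the second object
    with its reserved position.

    `alpha` controls the translucency of the background; the value
    0.35 is an arbitrary illustrative default.
    """
    live = live_frame.astype(np.float32)
    background = first_image.astype(np.float32)
    blended = (1.0 - alpha) * live + alpha * background
    return blended.astype(np.uint8)  # back to a displayable 8-bit frame
```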
The embodiments of the image processing method disclosed above may be applied in the processor 701 or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may reside in a RAM, a flash memory, a ROM, a PROM, an EPROM, a register, or another computer-readable storage medium known in the art. The computer-readable storage medium is located in the memory 702; the processor 701 reads the information in the memory 702 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Therefore, it should be understood that the image processing apparatus of this embodiment can implement the image processing method shown in fig. 1 and the functions of the image processing apparatus in the embodiments shown in fig. 1, fig. 2D, and fig. 3C, which are not described in detail here again.
Furthermore, another embodiment of the present invention also provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of:
acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
synthesizing an image including the at least one first object and the at least one second object based on the first image and the second image.
Optionally, the first image comprises one or more first objects and the second image comprises one or more second objects.
Optionally, the image acquisition parameters of the first image are the same as the image acquisition parameters of the second image.
Optionally, before acquiring a second image corresponding to at least one second object through the first image corresponding to at least one first object, the computer program when executed by the processor implements the following steps:
acquiring contour data of the at least one second object;
determining a reserved position of the at least one second object in the first image based on the contour data.
Optionally, when acquiring the second image corresponding to the at least one second object through the first image corresponding to the at least one first object, the computer program when executed by the processor implements the following step:
displaying the reserved position of the at least one second object in the first image in an image acquisition interface of the second image.
Optionally, when acquiring the second image corresponding to the at least one second object through the first image corresponding to the at least one first object, the computer program when executed by the processor implements the following step:
the first image or at least one first object in the first image is used as a background of an image acquisition interface of the second image.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
synthesizing an image including the at least one first object and the at least one second object based on the first image and the second image;
before the obtaining of the second image corresponding to the at least one second object through the first image corresponding to the at least one first object, the method further includes:
acquiring contour data of the at least one second object;
determining a reserved position of the at least one second object in the first image based on the contour data.
2. The image processing method according to claim 1, wherein
the first image includes one or more first objects and the second image includes one or more second objects.
3. The image processing method according to claim 1,
the image acquisition parameters of the first image are the same as the image acquisition parameters of the second image.
4. The image processing method according to claim 1,
the acquiring a second image corresponding to at least one second object through the first image corresponding to at least one first object comprises:
displaying the reserved position of the at least one second object in the first image on at least one of an image acquisition interface of the first image and an image acquisition interface of the second image.
5. The image processing method according to claim 4,
the acquiring a second image corresponding to at least one second object through the first image corresponding to at least one first object further comprises:
the first image or at least one first object in the first image is used as a background of an image acquisition interface of the second image.
6. An image processing apparatus, comprising:
the acquisition module is used for acquiring a second image corresponding to at least one second object through a first image corresponding to at least one first object, wherein the second image is generated based on a reserved position of the at least one second object in the first image;
a composition module for composing an image including the at least one first object and the at least one second object based on the first image and the second image;
wherein the image processing apparatus further comprises:
a reserved position determining module, configured to acquire contour data of the at least one second object before the second image corresponding to the at least one second object is acquired through the first image corresponding to the at least one first object, and to determine the reserved position of the at least one second object in the first image based on the contour data.
7. The image processing apparatus according to claim 6, wherein
the first image includes one or more first objects and the second image includes one or more second objects.
8. The image processing apparatus according to claim 6,
the image acquisition parameters of the first image are the same as the image acquisition parameters of the second image.
9. The image processing apparatus according to claim 6,
the acquisition module is further configured to display a reserved position of the at least one second object in the first image in an image acquisition interface of the second image.
10. The image processing apparatus according to claim 9,
the acquisition module is further configured to use the first image or at least one first object in the first image as a background of an image acquisition interface of the second image.
11. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
12. A computer-readable storage medium, wherein a computer program is stored thereon, and the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
CN201811105878.6A 2018-09-21 2018-09-21 Image processing method and device and electronic equipment Active CN109120851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811105878.6A CN109120851B (en) 2018-09-21 2018-09-21 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109120851A CN109120851A (en) 2019-01-01
CN109120851B true CN109120851B (en) 2020-09-22

Family

ID=64856103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811105878.6A Active CN109120851B (en) 2018-09-21 2018-09-21 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109120851B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445570B (en) * 2020-03-09 2021-04-27 天目爱视(北京)科技有限公司 Customized garment design production equipment and method
CN113691610B (en) * 2021-08-20 2024-04-09 Oppo广东移动通信有限公司 Data acquisition method and device, electronic equipment and storage medium
CN113691609B (en) * 2021-08-20 2024-04-09 Oppo广东移动通信有限公司 Data acquisition method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105162978A (en) * 2015-08-26 2015-12-16 努比亚技术有限公司 Method and device for photographic processing
CN107426489A (en) * 2017-05-05 2017-12-01 北京小米移动软件有限公司 Processing method, device and terminal during shooting image
CN107734142A (en) * 2017-09-15 2018-02-23 维沃移动通信有限公司 A kind of photographic method, mobile terminal and server
CN107734245A (en) * 2016-08-10 2018-02-23 中兴通讯股份有限公司 Take pictures processing method and processing device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6415028B2 (en) * 2013-07-12 2018-10-31 キヤノン株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM

Also Published As

Publication number Publication date
CN109120851A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN109754811B (en) Sound source tracking method, device, equipment and storage medium based on biological characteristics
CN109120851B (en) Image processing method and device and electronic equipment
US10284789B2 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN109495686B (en) Shooting method and equipment
JP2017531950A (en) Method and apparatus for constructing a shooting template database and providing shooting recommendation information
WO2015143842A1 (en) Mobile terminal photographing method and mobile terminal
JP2016096487A (en) Imaging system
WO2016188185A1 (en) Photo processing method and apparatus
CN109035138A (en) Minutes method, apparatus, equipment and storage medium
AU2015373730B2 (en) Picture processing method and apparatus
CN106296574A (en) 3-d photographs generates method and apparatus
CN111107267A (en) Image processing method, device, equipment and storage medium
CN106572295A (en) Image processing method and device
CN105812672B (en) It takes pictures processing method and processing device
JP2014050022A (en) Image processing device, imaging device, and program
CN106331488A (en) Interface adjusting method and device
CN111654624A (en) Shooting prompting method and device and electronic equipment
WO2015143857A1 (en) Photograph synthesis method and terminal
CN108282622B (en) Photo shooting method and device
CN111918127B (en) Video clipping method and device, computer readable storage medium and camera
WO2017096859A1 (en) Photo processing method and apparatus
KR101514346B1 (en) Apparatus and method for making a composite photograph
CN114785957A (en) Shooting method and device thereof
WO2019061020A1 (en) Image generation method, image generation device, and machine readable storage medium
US10970901B2 (en) Single-photo generating device and method and non-volatile computer-readable media thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant