CN110933300B - Image processing method and electronic terminal equipment - Google Patents

Image processing method and electronic terminal equipment

Info

Publication number
CN110933300B
Authority
CN
China
Prior art keywords
image
sub
images
imaging
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911132637.5A
Other languages
Chinese (zh)
Other versions
CN110933300A (en)
Inventor
肖明 (Xiao Ming)
李凌志 (Li Lingzhi)
王海滨 (Wang Haibin)
孙文君 (Sun Wenjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Microphone Holdings Co Ltd
Original Assignee
Shenzhen Microphone Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Microphone Holdings Co Ltd filed Critical Shenzhen Microphone Holdings Co Ltd
Priority to CN201911132637.5A
Publication of CN110933300A
Application granted
Publication of CN110933300B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an image processing method and an electronic terminal device. The image processing method comprises the following steps: acquiring at least two images, namely a first image and a second image, and dividing each image into at least two sub-images; selecting one of the images as a background image; selecting one of the images as a correction image; and selecting at least one sub-image of the correction image and synthesizing it with the background image to generate a target image. The invention enhances the human-computer interaction experience and makes shooting more engaging.

Description

Image processing method and electronic terminal equipment
Technical Field
The invention relates to the technical field of terminal photography, and in particular to an image processing method and an electronic terminal device.
Background
With the development of science and technology, mobile terminals with shooting functions, such as mobile phones and tablet computers, have come into wide use in recent years. Electronic mobile terminals such as digital cameras, smartphones and tablets generally have a shooting function, meeting people's need to take photos anytime and anywhere.
The camera is one of the basic applications of a smartphone; taking pictures with it is convenient and quick, and its imaging quality is good.
At present, whether on a popular Android phone or an iOS phone, the camera is one of the most important functions of the device and the one most likely to catch a customer's eye. The richness of a product's camera features and the corresponding performance experience greatly influence sales and the customer's choice of product, and ultimately determine whether the customer uses or continues to use it. Improving camera functionality and performance is therefore very important.
In the prior art, the continuous-shooting (burst) mode of a camera can capture at least two pictures within a certain period. All of these pictures can be saved for the user to preview, or the user can select from them the one or more pictures they find most satisfactory. However, selecting directly from the captured pictures limits the available choices: in a multi-person group photo, for example, even when at least two pictures are taken continuously, it is difficult to find a single picture that satisfies everyone.
Most existing image processing methods target still shots, a single subject, or one-off whole-picture processing. They cannot handle cases where different areas of a picture satisfy different people, such as a multi-person group photo in which some people are satisfied with their appearance and others are not, nor do they treat multiple subjects or a user's preferred regions of interest specially. All synthesis techniques are preset in the device, such as background replacement, regional black-and-white, hiding of specific parts, and panoramic stitching; they offer no adjustment based on user preference, their effects are limited, and the human-computer interaction experience is poor.
Disclosure of Invention
The invention aims to provide an image processing method, an electronic terminal device and a readable storage medium, to solve the problem that prior-art shooting methods cannot effectively capture a satisfactory picture. In particular, in multi-person photography they cannot effectively capture a picture that satisfies everyone, i.e. an optimal picture, so the human-computer interaction experience is poor.
To solve these problems, the invention is realized by the following technical scheme:
an image processing method, comprising:
acquiring at least two images, namely a first image and a second image, and dividing each image into at least two sub-images;
selecting one of the images as a background image;
selecting one of the images as a corrected image;
selecting at least one sub-image in the corrected image, and synthesizing the sub-image of the corrected image with the background image to generate a target image;
prior to the step of acquiring the first image and the second image, further comprising:
providing a first imaging region;
performing a first division of the first imaging region into M rows by N columns of first imaging sub-regions, and marking at least one of these first imaging sub-regions;
performing a second division of the first imaging region to obtain a second imaging region of X rows by Y columns of target imaging sub-regions, wherein the second imaging region comprises the marked first imaging sub-regions;
selecting a plurality of target imaging sub-regions;
correspondingly, the first image and/or the second image are divided into a plurality of sub-images that correspond one-to-one to the target imaging sub-regions.
Preferably, each of the images has the same number of sub-images.
Preferably, each sub-image has an image position, the image positions of the sub-images of each of said images corresponding to each other.
Preferably, the image positions of the sub-images of each of said images do not correspond, or only partially correspond, to each other.
Preferably, the image position range of the sub-image of the correction image is larger than the image position range of the sub-image synthesized with the background image.
Preferably, at least two of said images are selected from newly captured images and/or originally saved images.
In another aspect, the present invention further provides an image processing method, including:
acquiring a plurality of images, and dividing at least one image into at least two sub-images;
selecting an image that has not been divided into sub-images as the background image;
selecting at least one sub-image as a corrected image, and synthesizing the corrected image and the background image to obtain a target image;
before the step of acquiring the plurality of images, further comprising:
providing a first imaging region;
performing a first division of the first imaging region into M rows by N columns of first imaging sub-regions, and marking at least one of these first imaging sub-regions;
performing a second division of the first imaging region to obtain a second imaging region of X rows by Y columns of target imaging sub-regions, wherein the second imaging region comprises the marked first imaging sub-regions;
selecting a plurality of target imaging sub-regions;
correspondingly, at least one of the images is divided into a plurality of sub-images that correspond one-to-one to the target imaging sub-regions.
In another aspect, the present invention further provides an image processing method, including:
acquiring a plurality of images, and dividing each image into at least two sub-images;
selecting at least one sub-image as a target background sub-image;
selecting at least one sub-image as a target correction sub-image, and synthesizing the target background sub-image and the target correction sub-image to obtain a target image;
before the step of acquiring a plurality of images, the method further comprises:
providing a first imaging region;
performing a first division of the first imaging region into M rows by N columns of first imaging sub-regions, and marking at least one of these first imaging sub-regions;
performing a second division of the first imaging region to obtain a second imaging region of X rows by Y columns of target imaging sub-regions, wherein the second imaging region comprises the marked first imaging sub-regions;
selecting a plurality of target imaging sub-regions;
correspondingly, the image is divided into a plurality of sub-images that correspond one-to-one to the target imaging sub-regions.
Preferably, before the step of acquiring the first image and the second image, the method further comprises:
providing a first imaging region;
and selecting at least one first imaging sub-region from the first imaging region for marking.
Preferably, the step of selecting at least one sub-image as the target correction sub-image comprises: acquiring, among all sub-images of the image, the sub-images corresponding to the marked first imaging sub-regions;
and selecting, from these, the sub-image meeting a first preset condition as the target correction sub-image.
Preferably, the step of selecting at least one sub-image as the target background sub-image comprises: selecting, from the sub-images other than those corresponding to the marked first imaging sub-regions, a sub-image satisfying a second preset condition as the target background sub-image.
Preferably, the step of selecting at least one sub-image as the target background sub-image and at least one sub-image as the target correction sub-image comprises: previewing and displaying the sub-images, and manually selecting sub-images to serve respectively as the target correction sub-image and the target background sub-image.
Preferably, the step of synthesizing the target correction sub-image and the target background sub-image comprises: splicing the target correction sub-images and the target background sub-images according to their positional relationship, and then optimizing the splicing gaps of the spliced image to obtain the target image.
In other aspects, the present invention further provides an electronic terminal device, including: a memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method as described above.
Compared with the prior art, the invention has the following advantages:
the method comprises the steps of dividing each image into at least two sub-images by acquiring at least two images, namely a first image and a second image; selecting one of the images as a background image; selecting one of the images as a corrected image; selecting at least one sub-image in the corrected image, and synthesizing the sub-image of the corrected image with the background image; a target image is generated. The selected background image and the corrected image can be automatically selected or selected by the manual operation of a user, and then the selected background image and the corrected image are synthesized to form a target image, wherein the target image is the optimal image.
Drawings
Fig. 1 is a schematic main flow chart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a first imaging region obtained by dividing the first imaging region for the first time in the image processing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a second imaging region after the first imaging region is divided and marked for the second time in the image processing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a target image obtained in an image processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic terminal device according to an embodiment of the present invention.
Detailed Description
An image processing method, an electronic terminal device and a readable storage medium according to the present invention are described in detail below with reference to the accompanying drawings and embodiments. Advantages and features of the present invention will become apparent from the following description and the claims. It should be understood that the structures, ratios, sizes and the like shown in the drawings and described in the specification are used only to match the disclosure of the specification, so that it can be understood and read by those skilled in the art; they do not limit the conditions under which the invention can be implemented. Any structural modification, change of proportion or adjustment of size that does not affect the efficacy or achievable purpose of the invention still falls within the scope of the invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The core idea of the invention is to provide an image processing method, an electronic terminal device and a readable storage medium, so as to effectively improve human-computer interaction experience.
To achieve the above idea, the present invention provides an image processing method, an electronic terminal device, and a readable storage medium.
It should be noted that the image processing method according to the embodiment of the present invention can be applied to the photographing apparatus according to the embodiment of the present invention, and the photographing apparatus can be configured on an electronic device. The electronic device may be a personal computer, a mobile terminal, and the like, and the mobile terminal may be a hardware device having various operating systems, such as a mobile phone and a tablet computer.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is schematically shown, and as shown in fig. 1, the image processing method according to this embodiment includes the following steps:
an image processing method, comprising:
step S1, acquiring at least two images; in this embodiment, at least two images, the first image and the second image, may be acquired.
Step S2, dividing each image into at least two sub-images;
step S3, selecting one of the images as a background image;
step S4, selecting one of the images as a correction image;
step S5, selecting at least one sub-image in the corrected image, and combining the sub-image of the corrected image with the background image to generate a target image.
Further, the method also comprises the following steps: storing and/or displaying the target image.
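Steps S2 and S5 above can be sketched in code. The following is an illustrative model only, not part of the claimed method: it assumes images are equally sized NumPy arrays, that the grid divides each image evenly, and the function names are invented for illustration.

```python
import numpy as np

def split_into_tiles(img, rows, cols):
    """Divide an image into a rows x cols grid of sub-images (step S2)."""
    h, w = img.shape[0] // rows, img.shape[1] // cols
    return {(r, c): img[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)}

def composite(background, corrected, selected, rows, cols):
    """Replace the selected (row, col) sub-images of the background image
    with the corresponding sub-images of the corrected image (step S5)."""
    out = background.copy()
    h, w = background.shape[0] // rows, background.shape[1] // cols
    for r, c in selected:
        out[r * h:(r + 1) * h, c * w:(c + 1) * w] = \
            corrected[r * h:(r + 1) * h, c * w:(c + 1) * w]
    return out
```

Because the sub-images of every image occupy the same grid positions, replacing a tile of the background with the tile of the corrected image at the same position yields a seam-aligned target image.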
In this embodiment, the number of sub-images is the same for every image. Each sub-image has an image position; the image positions of the sub-images of the different images either correspond to each other or do not.
Optionally, the image position range of the sub-image of the corrected image is larger than the image position range of the sub-image synthesized with the background image. Optionally, the at least two images are selected from newly captured images and/or originally stored images.
Further, as shown in fig. 2, before executing step S1, the method further includes: providing a first imaging region, and dividing the first imaging region for the first time into M rows by N columns of first imaging sub-regions 200. In this embodiment, the first division may depend on the required photo size: for example, a 20-megapixel photo imaging region is divided into M1 rows × N1 columns of first imaging sub-regions 200, and a 16-megapixel photo imaging region into M2 rows × N2 columns, where M ≥ M1 > M2 and N ≥ N1 > N2.
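One way to realize the resolution-dependent first division is to target a fixed cell size, so larger photos get finer grids and M ≥ M1 > M2, N ≥ N1 > N2 as described. The ~512-pixel target cell size below is an assumption for illustration, not a value from the patent.

```python
def grid_for_resolution(width_px, height_px, cell_px=512):
    """Choose M rows x N columns so that each first imaging sub-region
    is roughly cell_px x cell_px; higher resolutions yield finer grids."""
    rows = max(1, round(height_px / cell_px))
    cols = max(1, round(width_px / cell_px))
    return rows, cols
```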
As shown in fig. 3, at least one of the first imaging sub-regions 200 in the M rows by N columns of the first imaging sub-regions 200 is marked;
the first imaging region is divided for the second time, and a second imaging region of X rows X Y columns of target imaging sub-regions 201 is obtained, where the second imaging region includes the marked first imaging sub-region. In this embodiment, the first imaging sub-regions that have been marked include the four first imaging sub-regions A, B, C and D.
Furthermore, X ≤ M and Y ≤ N.
Further, the step of dividing the first imaging region for the second time includes: previewing the target shooting scene and marking the targets of interest in the preview display (in this embodiment, the targets of interest are the faces of the people and the head of the pet in the target shooting scene, as shown in fig. 3). Each target of interest corresponds to one first imaging sub-region 200. The first imaging region is then divided for the second time according to the first imaging sub-regions 200 corresponding to the targets of interest, yielding a second imaging region of X rows × Y columns of target imaging sub-regions 201, where the second imaging region is smaller than or equal to the first imaging region and the area of a target imaging sub-region 201 equals the area of a first imaging sub-region 200.
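Since the second imaging region must contain every marked first imaging sub-region while being no larger than the first imaging region, one plausible reading (an assumption, as the patent does not specify the rule) is to crop the grid to the smallest rectangle of cells covering all marked cells:

```python
def second_division(marked_cells):
    """marked_cells: iterable of (row, col) grid coordinates of marked
    first imaging sub-regions (e.g. the cells holding A, B, C and D).
    Returns the origin cell and the (X, Y) size of the second imaging
    region, so X <= M and Y <= N and each target imaging sub-region
    keeps the area of a first imaging sub-region."""
    rows = [r for r, _ in marked_cells]
    cols = [c for _, c in marked_cells]
    origin = (min(rows), min(cols))
    size = (max(rows) - origin[0] + 1, max(cols) - origin[1] + 1)
    return origin, size
```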
Further, the specific processes of steps S1 and S2 are: acquiring a photographing command and photographing at least two images of a target photographing scene according to it; each image is divided into X rows by Y columns of sub-images that correspond one-to-one to the target imaging sub-regions. In this embodiment, the photographing command may specify a period of photographing time for the target photographing scene, during which shots are taken continuously, yielding at least two images of the same scene.
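Once the n burst images are divided into X × Y sub-images each, the later selection steps compare the n candidates at each grid position. A minimal sketch of that bookkeeping, assuming equally sized NumPy images (names illustrative):

```python
import numpy as np

def candidates_by_position(images, rows, cols):
    """Return {(row, col): [sub-image from each of the n images]},
    giving n candidate sub-images per target imaging sub-region."""
    cands = {}
    for img in images:
        h, w = img.shape[0] // rows, img.shape[1] // cols
        for r in range(rows):
            for c in range(cols):
                cands.setdefault((r, c), []).append(
                    img[r * h:(r + 1) * h, c * w:(c + 1) * w])
    return cands
```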
In addition, the image processing method may further include: reselecting an image as a new corrected image, selecting at least one sub-image from the new corrected image, and synthesizing the selected sub-image with the background image to generate a second target image. The new corrected image may be one of the selected images among the at least two images, the background image, or a target image previously obtained by this image processing method.
Thus, the background image and the corrected image can be selected automatically or manually by the user, and are then synthesized into a target image. In particular, when multiple people are photographed together, an optimal image satisfying everyone can be generated. The method therefore meets user requirements: it offers adjustment based on user preferences, enhances the human-computer interaction experience, and makes shooting more engaging.
In another embodiment, an image processing method includes: acquiring a plurality of images (in this embodiment, only a first image and a second image may be acquired) and dividing at least one of them into at least two sub-images; selecting an image that has not been divided into sub-images as the background image; and selecting at least one sub-image as the corrected image and synthesizing it with the background image to obtain the target image.
In other embodiments, an image processing method includes: acquiring a plurality of images (in this embodiment, only a first image and a second image may be acquired) and dividing each of them into at least two sub-images; selecting at least one sub-image as a target background sub-image; and selecting at least one sub-image as a target correction sub-image and synthesizing the two to obtain the target image.
Preferably, before the step of acquiring a plurality of images, the method further comprises: providing a first imaging region, and dividing the first imaging region for the first time into M rows by N columns of first imaging sub-regions 200. In this embodiment, the first division may depend on the required photo size: for example, a 20-megapixel photo imaging region is divided into M1 rows × N1 columns of first imaging sub-regions 200, and a 16-megapixel photo imaging region into M2 rows × N2 columns, where M ≥ M1 > M2 and N ≥ N1 > N2.
With continued reference to fig. 3, at least one of the first imaging sub-regions 200 in the M rows by N columns of the first imaging sub-regions 200 is marked;
the first imaging region is divided for the second time, and a second imaging region of X rows X Y columns of target imaging sub-regions 201 is obtained, where the second imaging region includes the marked first imaging sub-region. In this embodiment, the first imaging sub-regions that have been marked include the four first imaging sub-regions A, B, C and D.
Furthermore, X ≤ M and Y ≤ N.
Further, the step of dividing the first imaging region for the second time includes: previewing the target shooting scene and marking the targets of interest in the preview display (in this embodiment, the targets of interest are the faces of the people and the head of the pet in the target shooting scene, as shown in fig. 3). Each target of interest corresponds to one first imaging sub-region 200. The first imaging region is then divided for the second time according to the first imaging sub-regions 200 corresponding to the targets of interest, yielding a second imaging region of X rows × Y columns of target imaging sub-regions 201, where the second imaging region is smaller than or equal to the first imaging region and the area of a target imaging sub-region 201 equals the area of a first imaging sub-region 200.
Further, the step of acquiring a plurality of images and dividing each into at least two sub-images specifically includes: acquiring a photographing command and photographing at least two images of a target photographing scene according to it; each image is divided into X rows by Y columns of sub-images that correspond one-to-one to the target imaging sub-regions. In this embodiment, the photographing command may specify a period of photographing time for the target photographing scene, during which shots are taken continuously, yielding at least two images of the same scene.
Preferably, the step of selecting at least one sub-image as the target correction sub-image comprises: acquiring, among all sub-images of the images, the sub-images corresponding to the marked first imaging sub-regions; that is, in the present embodiment, the number of sub-images corresponding to the marked first imaging sub-regions is 4 (positions A, B, C and D) × n = 4n.
From these 4n sub-images, the sub-images satisfying the first preset condition are selected as the target correction sub-images; that is, for each of the positions A, B, C and D of the first imaging sub-regions 200 (target imaging sub-regions 201), the sub-image satisfactory to the corresponding user is selected. In this embodiment, there are 4 target correction sub-images.
Further, the step of selecting at least one sub-image as the target background sub-image comprises: selecting, from the sub-images other than those corresponding to the marked first imaging sub-regions, the sub-images satisfying a second preset condition as the target background sub-images. In this embodiment, the number of sub-images other than those at the marked positions is n × X × Y − 4n; from these, the sub-images satisfying each user are selected as the target background sub-images, of which there are X × Y − 4. The target image (as shown in fig. 4) is then obtained by combining and splicing the X × Y − 4 target background sub-images and the 4 target correction sub-images according to their positional relationships and the target preset scene.
Further, the sub-image satisfying the first preset condition may be selected either automatically or manually by the user. The automatic selection mode includes: calling an algorithm library to analyse the at least two sub-images at the position of one marked first imaging sub-region and identify the sub-image meeting the first preset condition as the target correction sub-image, and repeating this process until the target correction sub-images at all marked first imaging sub-region positions have been identified.
Further, when the target shooting scene includes a person, the first preset condition includes one or more of: whether the person's image is clear, whether the person is smiling, and whether the person's eyes are closed. In this embodiment, the algorithm library may, for example, compare the pixel values of the n sub-images at the position of marked region A and select the one with the highest pixel value, and so on for the other marked positions.
Sub-images that simultaneously satisfy conditions such as the person smiling, the person's eyes being open, and a high pixel value can be selected as the target correction sub-images; the number of such sub-images is 4.
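The automatic selection above can be sketched as follows. The patent does not specify the algorithm library, so this illustration substitutes gradient variance as an assumed clarity score ("is the person's image clear"); the function names are invented for illustration.

```python
import numpy as np

def clarity(tile):
    """Higher = sharper: the variance of the squared image gradient,
    a stand-in score for the unspecified algorithm-library check."""
    gy, gx = np.gradient(tile.astype(float))
    return float((gx ** 2 + gy ** 2).var())

def pick_target_correction(candidates):
    """From the n candidate sub-images at one marked position, keep the
    one best satisfying the first preset condition."""
    return max(candidates, key=clarity)
```

A real implementation would combine several such scores (smile detection, open eyes, sharpness) rather than a single proxy.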
In this embodiment, face recognition technology may also be applied to all the people's faces to obtain the target correction sub-images.
When the target correction sub-images are selected manually, the n × X × Y sub-images of the n images can be displayed in a preview, and each user can manually select the sub-images that satisfy them as the target correction sub-images and the target background sub-images.
In this embodiment, the n × X × Y sub-images of the n images may be previewed on a touch screen; the user taps the sub-images that satisfy them to designate the target correction sub-images and the target background sub-images, which are then merged and synthesized to obtain the target image.
Further, the sub-image satisfying the second preset condition may be selected as the target background sub-image either in an automatic selection mode or in a user manual customization mode, where the automatic selection mode includes: calling an algorithm library to identify the at least two sub-images located at the position of the same unmarked first imaging sub-region, obtaining the sub-image that satisfies the second preset condition as the target background sub-image, and repeating this process until the target background sub-images at all unmarked first imaging sub-region positions have been identified.
Furthermore, an algorithm library may be called to compare the pixel values of the n sub-images at the same position in the n images, and the sub-image with the highest pixel value is selected as the target background sub-image; in this embodiment, the number of target background sub-images is X × Y − 4, and so on until all target background sub-images are selected.
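The per-position background selection reduces to an argmax across the n images at each unmarked grid position. A minimal sketch, under the assumption that each image has already been split into a grid keyed by (row, col) and with mean pixel value standing in for the patent's unspecified scoring:

```python
# Illustrative sketch (assumed details): given n captured images, each split
# into the same X*Y grid of sub-images, pick for every unmarked grid position
# the sub-image with the highest score across the n images.

def mean_value(sub):
    pixels = [p for row in sub for p in row]
    return sum(pixels) / len(pixels)

def select_background_sub_images(images, marked_positions, score=mean_value):
    """images: list of n dicts mapping (row, col) -> sub-image."""
    positions = images[0].keys()
    return {
        pos: max((img[pos] for img in images), key=score)
        for pos in positions
        if pos not in marked_positions
    }

# Two images over a 1x2 grid; position (0, 0) is marked (handled by the
# correction-sub-image step), so only (0, 1) yields a background sub-image.
img_a = {(0, 0): [[1]], (0, 1): [[40]]}
img_b = {(0, 0): [[2]], (0, 1): [[90]]}
background = select_background_sub_images([img_a, img_b],
                                           marked_positions={(0, 0)})
```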
In this embodiment, preferably, the step of synthesizing the 4 target correction sub-images and the X × Y − 4 target background sub-images includes: splicing and synthesizing the target correction sub-images and the target background sub-images according to their positional relationship, and then optimizing the splice gaps of the spliced image to obtain the target image (the target image shows no splicing marks).
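The splicing-by-position step can be sketched as pasting each selected sub-image into a blank canvas at its grid coordinates. This assumes uniform sub-image size; the splice-gap optimization (seam blending) that the patent mentions is omitted here, since no method for it is specified.

```python
# Minimal splicing sketch: place each selected sub-image into the full
# canvas according to its (row, col) grid position.

def stitch(sub_images, sub_h, sub_w, grid_rows, grid_cols):
    """sub_images: dict mapping (row, col) -> 2D list of pixels."""
    canvas = [[0] * (grid_cols * sub_w) for _ in range(grid_rows * sub_h)]
    for (r, c), sub in sub_images.items():
        for i in range(sub_h):
            for j in range(sub_w):
                canvas[r * sub_h + i][c * sub_w + j] = sub[i][j]
    return canvas

# Four 2x2 tiles (correction and background sub-images mixed) on a 2x2 grid.
tiles = {
    (0, 0): [[1, 1], [1, 1]],
    (0, 1): [[2, 2], [2, 2]],
    (1, 0): [[3, 3], [3, 3]],
    (1, 1): [[4, 4], [4, 4]],
}
target = stitch(tiles, sub_h=2, sub_w=2, grid_rows=2, grid_cols=2)
```

In practice the seam-optimization step would blend pixels along tile borders so the result shows no splicing marks.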
Therefore, in this embodiment, a plurality of images are acquired and each image is divided into at least two sub-images; at least one sub-image is selected as a target background sub-image; at least one sub-image is selected as a target correction sub-image; and the target background sub-image and the target correction sub-image are synthesized to obtain a target image. The target correction sub-image and the target background sub-image can be selected automatically or through manual user operation, and the selected sub-images are then synthesized into a target image, which is the optimal image. In particular, when multiple people are photographed together, this image processing method can generate an optimal image that satisfies everyone, meeting user requirements. That is, the image processing method provides adjustment based on user preferences, enhances the human-computer interaction experience, and makes photographing more engaging.
On the other hand, based on the same inventive concept, the present invention further provides an electronic terminal device, comprising: a memory 101, a processor 100, and a photographing program stored on the memory 101 and executable on the processor 100, the photographing program, when executed by the processor 100, implementing the steps of the image processing method described above.
The electronic terminal device may be a mobile phone, a game console, a computer, a tablet device, a personal digital assistant, etc.
The electronic terminal device further includes: a power module 102, an interaction component 103, a communication module 104, a sensor module 105, and an interface 106. The processor 100 generally controls the overall operation of the electronic terminal device, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processor 100 may include one or more modules that facilitate interaction between the processor 100 and other modules. For example, the processor 100 may include a multimedia module to facilitate interaction between the interaction component 103 and the processor 100. The memory 101 is configured to store various types of data to support operations of the electronic terminal device. Examples of such data include instructions for any application or method operating on the electronic terminal device, contact data, phonebook data, messages, pictures, videos, etc. The memory 101 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk. The interaction component 103 may also be a touch display screen for displaying the target image, or for previewing the photographed images and allowing the user to manually touch and select, from the images, the target correction sub-image and the target background sub-image with which the user is satisfied.
The power module 102 provides power to the various modules of the electronic terminal device. The power module 102 may include a power management system, one or more power sources, and other modules associated with generating, managing, and distributing power for the electronic terminal device. The interaction component 103 comprises a screen providing an output interface between the electronic terminal device and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the interaction component 103 includes a front camera and/or a rear camera. When the electronic terminal device is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The interaction component 103 further comprises an audio module configured to output and/or input audio signals. For example, the audio module includes a microphone (MIC) configured to receive external audio signals when the electronic terminal device is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 101 or transmitted via the communication module 104. In some embodiments, the audio module further comprises a speaker for outputting audio signals. The interface 106 is an I/O interface that provides an interface between the processor 100 and peripheral interface modules, such as keyboards, click wheels, and buttons. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor module 105 includes one or more sensors for providing status assessments of various aspects of the electronic terminal device. For example, the sensor module 105 may detect the open/closed state of the electronic terminal device and the relative positioning of its modules, such as the display and keypad. The sensor module 105 may also detect a change in the position of the electronic terminal device or one of its components, the presence or absence of user contact with the electronic terminal device, the orientation or acceleration/deceleration of the electronic terminal device, and changes in its temperature. The sensor module 105 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor module 105 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor module 105 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor. The communication module 104 is configured to facilitate wired or wireless communication between the electronic terminal device and other devices. The electronic terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication module 104 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication module 104 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In still another aspect, based on the same inventive concept, the present invention further provides a computer readable storage medium, on which a photographing program is stored, and the photographing program, when executed by a processor, implements the steps of the image processing method as described above.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this context, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the apparatuses and methods disclosed in the embodiments herein can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments herein. In this regard, each block in the flowchart or block diagrams may represent a module, a program, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments herein may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In summary, at least two images, a first image and a second image, are acquired, and each image is divided into at least two sub-images; one of the images is selected as a background image; one of the images is selected as a corrected image; at least one sub-image in the corrected image is selected and synthesized with the background image; and a target image is generated. The background image and the corrected image can be selected automatically or through manual user operation, and the selected background image and corrected image are then synthesized to form a target image, which is the optimal image.
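The summarized flow can be sketched end to end under the same simplifying assumptions as before: split each image into a grid, pick the best sub-image per position, and reassemble. All helper names and the trivial scoring function are illustrative, not from the patent.

```python
# End-to-end sketch: grid-split n images, select the best sub-image per
# grid position, and recompose the target image.

def split(image, sub_h, sub_w):
    """Split a 2D image into a dict mapping (row, col) -> sub-image."""
    rows, cols = len(image) // sub_h, len(image[0]) // sub_w
    return {
        (r, c): [row[c * sub_w:(c + 1) * sub_w]
                 for row in image[r * sub_h:(r + 1) * sub_h]]
        for r in range(rows) for c in range(cols)
    }

def compose(images, sub_h, sub_w, score):
    grids = [split(img, sub_h, sub_w) for img in images]
    # Per-position argmax across the n images.
    best = {pos: max((g[pos] for g in grids), key=score)
            for pos in grids[0]}
    rows = max(r for r, _ in best) + 1
    cols = max(c for _, c in best) + 1
    canvas = [[0] * (cols * sub_w) for _ in range(rows * sub_h)]
    for (r, c), sub in best.items():
        for i in range(sub_h):
            canvas[r * sub_h + i][c * sub_w:(c + 1) * sub_w] = sub[i]
    return canvas

# Two 2x2 images, 1x1 sub-images; score = pixel value, so each output pixel
# is the brighter of the two inputs at that position.
img1 = [[0, 9], [9, 0]]
img2 = [[5, 1], [1, 5]]
result = compose([img1, img2], sub_h=1, sub_w=1,
                 score=lambda s: s[0][0])
```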
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (14)

1. An image processing method, comprising:
acquiring at least two images, namely a first image and a second image, and dividing each image into at least two sub-images;
selecting one of the images as a background image;
selecting one of the images as a corrected image;
selecting at least one sub-image in the corrected image, and synthesizing the sub-image of the corrected image with the background image to generate a target image;
prior to the step of acquiring the first image and the second image, further comprising:
providing a first imaging region;
performing a first division of the first imaging region such that it is divided into M rows by N columns of first imaging sub-regions, marking at least one of the M rows by N columns of the first imaging sub-regions;
dividing the first imaging area for the second time to obtain a second imaging area of X rows by Y columns of target imaging sub-areas, wherein the second imaging area comprises the marked first imaging sub-area;
selecting a plurality of target imaging sub-regions;
correspondingly, the first image and/or the second image are divided into a plurality of sub-images, and the sub-images correspond to the target imaging sub-areas one to one.
2. The image processing method according to claim 1, wherein each of the images has the same number of sub-images.
3. The image processing method of claim 1, wherein each sub-image has an image position, the image positions of the sub-images of each of said images corresponding to each other.
4. The image processing method according to claim 1, wherein the image positions of the sub-images of each of said images do not correspond or partially do not correspond to each other.
5. The image processing method according to claim 3, wherein an image position range of the sub-image of the correction image is larger than an image position range of the sub-image synthesized with the background image.
6. An image processing method as claimed in claim 1, characterized in that at least two of the images are selected from newly captured images and/or originally saved images.
7. An image processing method, comprising:
acquiring a plurality of images, and dividing at least one image into at least two sub-images;
selecting the image without at least one sub-image as a background image;
selecting at least one sub-image as a corrected image, and synthesizing the corrected image and the background image to obtain a target image;
before the step of acquiring a plurality of images, the method further comprises:
providing a first imaging region;
performing a first division of the first imaging region such that it is divided into M rows by N columns of first imaging sub-regions, marking at least one of the M rows by N columns of the first imaging sub-regions;
dividing the first imaging area for the second time to obtain a second imaging area of X rows by Y columns of target imaging sub-areas, wherein the second imaging area comprises the marked first imaging sub-area;
selecting a plurality of target imaging sub-regions;
correspondingly, at least one of the images is divided into a plurality of sub-images, and the sub-images correspond to the target imaging sub-areas one to one.
8. An image processing method, comprising:
acquiring a plurality of images, and dividing each image into at least two sub-images;
selecting at least one sub-image as a target background sub-image;
selecting at least one sub-image as a target correction sub-image, and synthesizing the target background sub-image and the target correction sub-image to obtain a target image;
before the step of acquiring a plurality of images, the method further comprises:
providing a first imaging region;
performing a first division of the first imaging region such that it is divided into M rows by N columns of first imaging sub-regions, marking at least one of the M rows by N columns of the first imaging sub-regions;
dividing the first imaging area for the second time to obtain a second imaging area of X rows by Y columns of target imaging sub-areas, wherein the second imaging area comprises the marked first imaging sub-area;
selecting a plurality of target imaging sub-regions;
correspondingly, the image is divided into a plurality of sub-images, and the sub-images correspond to the target imaging sub-regions one to one.
9. The image processing method of claim 8, further comprising, prior to the step of acquiring the first image and the second image:
providing a first imaging region;
and selecting at least one first imaging sub-region from the first imaging region for marking.
10. The image processing method of claim 9, wherein the step of selecting at least one sub-image as a target modified sub-image comprises: acquiring a sub-image corresponding to the marked first imaging sub-area in all sub-images in the image;
and selecting the sub-image which meets a first preset condition from the target imaging sub-area as the target correction sub-image.
11. The image processing method of claim 10, wherein the step of selecting at least one sub-image as a target background sub-image comprises: selecting the sub-image satisfying a second preset condition as the target background sub-image from the sub-images except the sub-image corresponding to the marked first imaging sub-area.
12. The image processing method of claim 8, wherein the step of selecting at least one sub-image as a target background sub-image and at least one sub-image as a target correction sub-image comprises: and previewing and displaying the sub-images, and manually selecting the sub-images to be respectively used as the target correction sub-image and the target background sub-image.
13. The image processing method of claim 9, wherein the step of combining the target correction sub-image with the target background sub-image comprises: and splicing and synthesizing the target correction subimages and the target background subimages according to the position relationship between the target correction subimages and the target background subimages, and then optimizing splicing gaps of the spliced and synthesized images to obtain the target image.
14. An electronic terminal device, comprising: memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 13.
CN201911132637.5A 2019-11-18 2019-11-18 Image processing method and electronic terminal equipment Active CN110933300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911132637.5A CN110933300B (en) 2019-11-18 2019-11-18 Image processing method and electronic terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911132637.5A CN110933300B (en) 2019-11-18 2019-11-18 Image processing method and electronic terminal equipment

Publications (2)

Publication Number Publication Date
CN110933300A CN110933300A (en) 2020-03-27
CN110933300B true CN110933300B (en) 2021-06-22

Family

ID=69853433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911132637.5A Active CN110933300B (en) 2019-11-18 2019-11-18 Image processing method and electronic terminal equipment

Country Status (1)

Country Link
CN (1) CN110933300B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6621524B1 (en) * 1997-01-10 2003-09-16 Casio Computer Co., Ltd. Image pickup apparatus and method for processing images obtained by means of same
CN101617339A (en) * 2007-02-15 2009-12-30 索尼株式会社 Image processing apparatus and image processing method
CN105025215A (en) * 2014-04-23 2015-11-04 中兴通讯股份有限公司 Method and apparatus for achieving group shooting through terminal on the basis of multiple pick-up heads
JP2016086200A (en) * 2014-10-23 2016-05-19 日本放送協会 Image synthesis device and image synthesis program
CN107734255A (en) * 2017-10-16 2018-02-23 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the readable storage medium storing program for executing that shooting is taken pictures certainly
CN110210494A (en) * 2019-05-13 2019-09-06 深圳传音控股股份有限公司 Image processing method and computer installation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5971207B2 (en) * 2012-09-18 2016-08-17 株式会社リコー Image adjustment apparatus, image adjustment method, and program
JP6950252B2 (en) * 2017-04-11 2021-10-13 富士フイルムビジネスイノベーション株式会社 Image processing equipment and programs
CN109769089B (en) * 2018-12-28 2021-03-16 维沃移动通信有限公司 Image processing method and terminal equipment
CN109961446B (en) * 2019-03-27 2021-06-01 深圳视见医疗科技有限公司 CT/MR three-dimensional image segmentation processing method, device, equipment and medium


Also Published As

Publication number Publication date
CN110933300A (en) 2020-03-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant