CN110545385A - image processing method and terminal equipment - Google Patents


Info

Publication number: CN110545385A
Application number: CN201910901692.XA
Authority: CN (China)
Prior art keywords: image, preview, terminal device, result image, result
Legal status: Pending (the listed status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 朱晋良
Current assignee: Vivo Mobile Communication Co Ltd
Original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910901692.XA

Classifications

    • H04N 23/62: Control of cameras or camera modules; control of parameters via user interfaces
    • H04N 23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal, or high- and low-resolution modes
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 5/265: Studio circuits for mixing, e.g. for special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention provide an image processing method and a terminal device, applied in the field of communication technology, to address the cumbersome operation and poor flexibility users face when creating novelty portrait photography. The method comprises: receiving a first input from a user while a first preview image and a first result image are displayed superimposed in a preview screen according to preset parameters; and, in response to the first input, capturing the first preview image and the first result image in the preview screen to obtain a second result image. The first result image contains at least one first object, the first preview image contains at least one second object, and the at least one first object differs from the at least one second object. The method applies in particular to a terminal device generating an image containing multiple portraits.

Description

Image processing method and terminal equipment
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to an image processing method and a terminal device.
Background
When creating novelty portrait photography, a user may want the same person to appear at multiple positions in a single photo.
To achieve this today, the user must control the camera application of a terminal device to take and store several photos of the person, locate those photos in the album application, and then use a third-party editing application to composite the person's images into one photo containing multiple portraits.
The user therefore has to open the camera application, the album application, and the third-party editing application in turn to post-process several portrait photos into a single multi-portrait photo, which makes the creation of novelty portrait photography cumbersome and inflexible.
Disclosure of Invention
Embodiments of the invention provide an image processing method and a terminal device, aiming to solve the cumbersome operation and poor flexibility a user faces when creating novelty portrait photography.
To solve the above technical problem, embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method. The method comprises: receiving a first input from a user while a first preview image and a first result image are displayed superimposed in a preview screen according to preset parameters; and, in response to the first input, capturing the first preview image and the first result image in the preview screen to obtain a second result image. The first result image contains at least one first object, the first preview image contains at least one second object, and the at least one first object differs from the at least one second object.
In a second aspect, an embodiment of the present invention further provides a terminal device comprising a receiving module and a processing module. The receiving module is configured to receive a first input from a user while a first preview image and a first result image are displayed superimposed in a preview screen according to preset parameters. The processing module is configured to capture, in response to the first input received by the receiving module, the first preview image and the first result image in the preview screen to obtain a second result image. The first result image contains at least one first object, the first preview image contains at least one second object, and the at least one first object differs from the at least one second object.
In a third aspect, an embodiment of the present invention provides a terminal device comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor; when executed by the processor, the computer program implements the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In embodiments of the invention, the terminal device can, while capturing images, combine the currently previewed image and the current result image into a new result image in real time. The user can thus quickly and conveniently trigger the terminal device to generate an image containing multiple portraits, without the terminal device having to switch between applications to post-process several portrait photos into one. This simplifies the user's operations during novelty photography and improves its flexibility.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is one of the schematic diagrams of display contents of a terminal device according to an embodiment of the present invention;
Fig. 4 is a second schematic diagram of the display content of the terminal device according to the embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. "Plurality" means two or more.
It should also be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" is not to be construed as preferred or advantageous over other embodiments or designs; rather, these words are intended to present concepts in a concrete fashion.
The terms "first" and "second" and the like in the description and claims of the present invention are used to distinguish different objects, not to describe a particular order of the objects. For example, a first object and a second object distinguish different objects without implying any order between them.
At present, when a user creates novelty portrait photography with a terminal device, the user can use the panorama function of the terminal device (e.g., a mobile phone): after being photographed at one position, the subject quickly runs around behind the lens to a position the lens has not yet reached and poses there for the rest of the sweep. When photographing this way through the panorama function, the subject (i.e., the person being photographed) must move with considerable skill to obtain a satisfactory photo, and the spacing between the subject's appearances in the photo cannot be chosen freely. The process of creating novelty portrait photography is therefore inflexible.
Alternatively, the terminal device may be used as a fixed camera that photographs the same scene containing a person several times, after which a work (i.e., a composite photo) is produced by post-compositing. The main problems with compositing multiple shots from a fixed camera are that the composition cannot be previewed intuitively and that post-processing takes a long time. Again, the process of creating novelty portrait photography is inflexible.
In order to solve the above problem, an embodiment of the present invention provides an image processing method for enabling a terminal device to process a currently previewed image (i.e., a first preview image) and a current result image (i.e., a first result image) into a new result image (i.e., a second result image) in real time during image capturing. The user can quickly and conveniently trigger the terminal equipment to generate the image comprising the multiple figures, and the terminal equipment does not need to switch the application program for multiple times to realize post-processing of the multiple figures to obtain one figure image. Therefore, the operation of the user in the creation interest shooting process is simplified, and the flexibility of creation interest shooting is improved.
The terminal device in the embodiment of the invention can be a mobile terminal device and can also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
It should be noted that, in the image Processing method provided in the embodiment of the present invention, the execution main body may be a terminal device, or a Central Processing Unit (CPU) of the terminal device, or a control module in the terminal device for executing the image Processing method. In the embodiment of the present invention, an image processing method executed by a terminal device is taken as an example, and the image processing method provided in the embodiment of the present invention is described.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application. For example, applications such as a system setup application, a system chat application, and a system camera application. And the third-party setting application, the third-party camera application, the third-party chatting application and other application programs.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The image processing method provided by the embodiment of the present invention is described in detail below with reference to the flowchart shown in fig. 2. Although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order. For example, the image processing method illustrated in fig. 2 may include S201-S202:
S201, while a first preview image and a first result image are displayed superimposed in a preview screen according to preset parameters, the terminal device receives a first input from a user.
The preview screen may be the whole picture displayed on the screen of the terminal device, and may include a shooting preview frame in which the previewed image (i.e., the first preview image) is displayed.
Optionally, the terminal device is installed with an application program such as a system camera application program or a third-party camera application program.
Specifically, the terminal device may perform a photographing function using a camera application.
It should be emphasized that, in the embodiment of the present invention, when the terminal device starts the camera application, it may display not only a previewed image but also a result image in the preview screen. The previewed image is the picture (i.e., image) captured in real time by the terminal device's camera.
The first preview image is a preview image displayed when the terminal device receives the first input, and the first result image is a result image displayed when the terminal device receives the first input.
It is understood that the first input is used to trigger the terminal device to capture images in the current preview screen, such as the first preview image and the first result image.
It should be noted that, in the embodiment of the present invention, the screen of the terminal device may be a touch screen configured to receive a user's input and, in response, display the corresponding content. The first input may be a touch-screen input, a fingerprint input, a gravity input, a key input, or the like. A touch-screen input is a press, long-press, slide, click, or hover input (input by a user near the touch screen) on the touch screen of the terminal device. A fingerprint input is a slide, long-press, single-click, or double-click input on the fingerprint reader of the terminal device. A gravity input is, for example, shaking the terminal device in a specific direction or a specific number of times. A key input is a single-click, double-click, long-press, or key-combination input on a key of the terminal device, such as the power key, a volume key, or the Home key. The embodiment of the present invention does not limit the form of the first input; it may take any realizable form.
Optionally, in the embodiment of the present invention, when the terminal device displays the shooting preview frame on the screen, it may also display a shooting control used to trigger the shooting function, i.e., to perform a shooting operation on the preview image in the shooting preview frame.
Illustratively, the first input is a click input of a shooting control displayed on a screen of the terminal device by a user.
It will be appreciated that the shooting control may be the shutter of the terminal device's camera application. Unlike current terminal devices, which create novelty shots through a panoramic shooting function, the embodiment of the invention controls shooting by opening and closing the shutter: the shutter is closed once the subject finishes shooting at one position, and opened again once the subject reaches the next position. This shooting process places no skill requirement on the subject's movement.
In the embodiment of the invention, when the terminal device performs a shooting operation on the preview image in the shooting preview frame, it may save the image automatically or let the user choose whether to save it.
It should be noted that the first result image includes at least one first object, the first preview image includes at least one second object, and the at least one first object and the at least one second object are different.
It can be understood that the foreground of an image may be a portrait and/or a still object; the following embodiments illustrate the image processing method provided by the embodiments of the present invention by taking the case where the foreground of an image is a portrait.
It should be noted that, in the embodiment of the present invention, the at least one first object may be the foreground of the first result image, and each first object may be a portrait. When the at least one first object comprises multiple objects, they may be portraits of the same person or of different people; i.e., the first result image may be an image (also called a photo or picture) containing portraits.
Similarly, the at least one second object may be the foreground of the first preview image, and each second object may be a portrait. When the at least one second object comprises multiple objects, they may be portraits of the same person or of different people; i.e., the first preview image may be an image containing portraits.
Note that two objects (denoted object 1 and object 2) being different means either that object 1 and object 2 correspond to the same person in different postures, or that they correspond to different people.
It is understood that the first result image may be obtained locally by the terminal device, captured in real time, obtained from a server, or obtained through interaction with another terminal device.
Illustratively, in the embodiment of the present invention, the first result image is captured in real time before the terminal device receives the first input.
It should be noted that, in the embodiment of the present invention, the terminal device may display different images in the preview screen in an overlapping manner with preset parameters, for example, display the first preview image and the first result image in an overlapping manner with preset parameters.
Optionally, the preset parameters include at least one of a preset transparency and a preset superposition width. The preset transparency is the transparency of an image in the preview screen, and the preset superposition width is the width over which different images in the preview screen overlap along a target direction.
Specifically, the preset transparency is the transparency of the previewed image and/or the result image in the preview screen. The preset superposition width is the width over which the previewed image and the result image overlap along the target direction, which may be horizontal or vertical relative to the preview screen.
Illustratively, the predetermined transparency is a transparency of 50%, and the predetermined overlap width is 2 centimeters (cm).
Optionally, the preset parameter may be user-defined or default for a system of the terminal device.
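As an illustrative sketch (not part of the patent), the preset parameters could be represented as a small structure; the field names and defaults below are assumptions drawn from the 50% transparency and 2 cm width examples above:

```python
from dataclasses import dataclass

@dataclass
class PresetParams:
    """Hypothetical container for the overlay-display parameters.

    transparency: alpha used when drawing one image over the other
        (0.0 = fully transparent, 1.0 = fully opaque).
    overlap_width_cm: width over which the previewed image and the
        result image overlap along the target direction.
    direction: target direction of the overlap, "horizontal" or
        "vertical", relative to the preview screen.
    """
    transparency: float = 0.5       # the 50% transparency example
    overlap_width_cm: float = 2.0   # the 2 cm superposition example
    direction: str = "horizontal"

# System defaults vs. user-defined values:
default_params = PresetParams()
user_params = PresetParams(transparency=0.3, direction="vertical")
```

A dataclass keeps the "user-defined or system default" choice simple: defaults model the system setting, and constructor arguments model user customization.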
It can be understood that, because the terminal device displays the previewed image (e.g., the first preview image) and the result image (e.g., the first result image) superimposed with the preset transparency, displaying the result image does not obscure the previewed image, i.e., the user can still view the previewed image. This makes it easy for the user to manually align the previewed image and the result image in the preview screen.
It should be noted that displaying images with the preset parameters makes it easy for the user to compare different images in the preview screen (for example, different objects in different images) and, further, to control the terminal device to adjust the relative position between an object in the previewed image and an object in the result image (e.g., the first result image).
That is to say, by displaying the current result image and the current live preview superimposed, the embodiment of the present invention lets the user align the result image and the preview image in real time and then trigger the terminal device to combine them. This addresses the multi-frame alignment problem of multiple-exposure photography.
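The patent does not give a blending formula for the superimposed display; a minimal sketch, assuming simple per-pixel alpha blending of two equal-sized RGB images, could look like this:

```python
def blend_pixel(preview_px, result_px, alpha=0.5):
    """Blend one RGB result pixel over the preview pixel with the
    preset transparency, so both images stay visible for alignment."""
    return tuple(round(alpha * r + (1 - alpha) * p)
                 for p, r in zip(preview_px, result_px))

def overlay(preview_img, result_img, alpha=0.5):
    """Blend two equal-sized images (rows of RGB tuples) pixel-wise,
    producing the superimposed picture shown in the preview screen."""
    return [[blend_pixel(p, r, alpha) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(preview_img, result_img)]
```

With alpha = 0.5, a black preview pixel under a white result pixel displays as mid-gray, so neither image hides the other and the user can align objects by eye.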
S202, responding to the first input, the terminal equipment shoots a first preview image and a first result image in the preview picture to obtain a second result image.
Optionally, the terminal device may choose not to save the first preview image when capturing it. Whether the first preview image is saved may be preset by the user or set by default on the terminal device.
It is emphasized that the second result image captured by the terminal device includes the at least one first object and the at least one second object. I.e. the second resulting image may be an image comprising a plurality of portraits.
It should be noted that, with the image processing method provided in the embodiment of the present invention, the terminal device can combine the currently previewed image and the current result image into a new result image in real time while capturing images. The user can quickly and conveniently trigger the terminal device to generate an image containing multiple portraits, without the terminal device having to switch between applications to post-process several portrait photos into one. This simplifies the user's operations during novelty photography and improves its flexibility.
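The compositing step of S202 itself is not specified in the patent; one plausible sketch, assuming the terminal device maintains a per-pixel foreground mask marking the objects already captured in the result image, could be:

```python
def composite(preview_img, result_img, result_mask):
    """Merge a captured preview frame into the current result image.

    Where result_mask is True, the result image's pixel (part of an
    already captured object) is kept; elsewhere the fresh preview
    pixel is taken, so objects from both frames appear together in
    the new (second) result image.
    """
    return [[r if m else p
             for p, r, m in zip(prow, rrow, mrow)]
            for prow, rrow, mrow in zip(preview_img, result_img, result_mask)]

# Toy 1x2 "images": pixel values are plain ints for clarity.
# The result image has an object at position 0; the preview has a
# new object at position 1.
second_result = composite([[7, 2]], [[1, 0]], [[True, False]])
```

Both objects survive in `second_result`: position 0 keeps the result pixel, position 1 takes the preview pixel. In a real implementation the mask would come from foreground (portrait) segmentation, which the patent leaves open.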
In a possible implementation, the image processing method provided in the embodiment of the present invention may further include, after S202, step S203:
S203, the terminal device displays the first preview image and the second result image superimposed in the preview screen according to the preset parameters.
Similarly, for the description that the terminal device displays the first preview image and the second result image in an overlapping manner with preset parameters, reference may be made to the related description of the first result image in the foregoing embodiment, and details are not repeated here.
Further, after the terminal device displays the second result image, the user may again control it to perform a shooting operation on the image previewed in the shooting preview frame, capturing the current preview image and the second result image to obtain a new result image displayed in the preview screen. The current preview image may change with the environment, the subject, and so on; that is, the first preview image may change.
It can be understood that, unlike the current approach in which the terminal device composites multiple portrait photos after shooting, the terminal device in the embodiment of the present invention composites images while shooting, without requiring the user to trigger post-compositing, and the intermediate result images are displayed in the preview screen in real time.
It should be noted that, with the image processing method provided by the embodiment of the present invention, the terminal device can capture the preview image, obtain a result image (e.g., the second result image) from it and the current result image in real time, and display that result image in the preview screen. The user can thus visually watch multiple portrait images being combined into one, which further improves the flexibility of novelty photography and the human-computer interaction experience.
In one possible implementation, the first result image is obtained by capturing at least one image before the first input is received.
Specifically, if, before the terminal device receives the first input, the user triggers a shooting operation on the previewed image (i.e., the image displayed in the shooting preview frame) for the first time, the terminal device takes that image (i.e., the first frame) as the first result image. In that case, the first result image is obtained from the first frame captured in the shooting preview frame before the first input is received.
If the user has already triggered the terminal device to shoot the previewed image multiple times, the terminal device composites each newly captured frame (i.e., the current frame) into the currently displayed result image to obtain and display a new result image (e.g., the first result image). In that case, the first result image is obtained by capturing several previewed frames before the first input is received.
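Putting the first-frame and multi-frame cases together, the accumulation of result images across repeated captures can be sketched as follows. This is again a toy model with integer pixels and hypothetical per-frame foreground masks, not the patent's implementation:

```python
def fun_shoot_session(frames, masks):
    """Simulate repeated captures: the first frame becomes the first
    result image; each later frame is composited into the current
    result, and the foreground mask grows to cover every object
    captured so far."""
    result, result_mask = None, None
    for frame, mask in zip(frames, masks):
        if result is None:
            # First shooting operation: the first frame is the result.
            result, result_mask = frame, mask
        else:
            # Later operations: keep result pixels where an object was
            # already captured, take fresh preview pixels elsewhere.
            result = [[r if m else p
                       for p, r, m in zip(frow, rrow, mrow)]
                      for frow, rrow, mrow in zip(frame, result, result_mask)]
            result_mask = [[a or b for a, b in zip(mrow, frow)]
                           for mrow, frow in zip(result_mask, mask)]
    return result

# Three toy 1x3 frames, each with one "person" at a different position.
frames = [[[1, 0, 0]], [[0, 2, 0]], [[0, 0, 3]]]
masks = [[[True, False, False]], [[False, True, False]], [[False, False, True]]]
final = fun_shoot_session(frames, masks)
```

After the three captures, `final` holds all three objects in one image, mirroring the goal of generating a single photo in which the same person appears at several positions.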
Optionally, in the embodiment of the present invention, the camera application may provide multiple shooting modes, for example, a "delayed photography" mode, a "slow motion" mode, a "video" mode, a "photo" mode, a "square" mode, a "panorama" mode, a "fun shooting" mode, and the like.
When a shooting mode is used to execute a shooting function, the terminal device runs a shooting process of the shooting mode. Specifically, the user may trigger the terminal device to execute a shooting process of different shooting modes.
For example, in a case where the terminal device runs the shooting process of the "photo" mode, the terminal device may display a shooting preview frame on the screen, and each time a shooting operation is performed on the image in the shooting preview frame, the terminal device saves the captured image.
For example, in a case where the terminal device runs the shooting process of the "fun shooting" mode, the terminal device displays a shooting preview frame on the screen; each time a shooting operation is performed on the image in the shooting preview frame, the terminal device may obtain the first result image from the shot preview image, and superimpose and display the currently previewed image and the first result image in the preview screen.
In this embodiment of the present invention, the user may trigger the terminal device to execute the shooting process of the "fun shooting" mode, so that the terminal device displays the preview image and the result image (e.g., the first result image) in the preview screen.
In the embodiment of the present invention, the terminal device may, under the trigger of the user, perform shooting and composition operations in real time on the image previewed in the preview screen, so as to combine each newly previewed image into the current result image in real time. Therefore, the terminal device can synthesize a plurality of character images into one character image according to the user's needs, that is, obtain an image containing a plurality of characters (i.e., objects). Thereby, the creative fun-shooting process can be made controllable.
In a possible implementation manner, S203 in the image processing method provided by the embodiment of the present invention may be implemented by S203a and S203b:
S203a, the terminal device obtains at least one second object from the first preview image.
Wherein a display position of the at least one second object in the second result image is determined by the first input.
It can be understood that, through the first input, the user may manually align the image previewed in the shooting preview screen with the first result image, i.e., align the shot first preview image with the first result image. The terminal device can adjust the position of the portrait in the image in the shooting preview frame, i.e., adjust the position of the at least one second object in the first preview image, thereby adjusting the position of the at least one second object in the second result image.
Specifically, the terminal device may employ an image segmentation algorithm to segment the at least one second object from the first preview image.
Wherein the terminal can identify the foreground and the background in the image.
Generally, the color of the foreground and the color of the background in the image are clearly different, and the terminal can identify and segment the foreground (i.e. the object) in the image by adjusting the threshold value of the binarization function.
For example, for an image including an object (e.g., a first preview image including a portrait), the terminal may first perform graying and binarization processing on the image to obtain a binarized image. The terminal can then identify and segment an object (such as the at least one second object) from the binarized image by adjusting the threshold of the binarization function.
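A minimal NumPy sketch of the graying-plus-binarization segmentation described above. The function name `segment_foreground`, the luma weights, and the assumption that the foreground is brighter than the background are illustrative choices, not details taken from the patent:

```python
import numpy as np

def segment_foreground(rgb, threshold=128):
    # Gray the image with standard luma weights, then binarize: pixels
    # whose gray value exceeds the (adjustable) threshold are taken as
    # foreground, i.e. the object to be segmented.
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return gray > threshold

# Dark background (gray ~20) with a bright 2x2 "portrait" (gray ~220).
img = np.full((4, 4, 3), 20, dtype=np.float64)
img[1:3, 1:3] = 220
mask = segment_foreground(img)
# mask is True only over the bright 2x2 patch (4 pixels)
```

Raising or lowering `threshold` plays the role of "adjusting the threshold of the binarization function" in the text: a higher value shrinks the detected foreground, a lower value grows it.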
Of course, the terminal device may segment the foreground (i.e., the object) in one image by other image segmentation methods. For example, the terminal device may recognize an object in one image through an algorithm of edge detection.
Optionally, the terminal may determine the foreground (i.e., the object) in an image according to the pixel value of each pixel point in the image; for example, the pixel values of the foreground in an image satisfy a certain value range.
Alternatively, the terminal may determine the foreground (i.e., the object) in an image according to the contrast of the image; for example, the difference between the contrast of the foreground and the contrast of the background in the image satisfies a certain value range.
S203b, the terminal device synthesizes the at least one second object and the first result image into a second result image.
Specifically, the terminal device may fuse the at least one second object with the first result image to obtain the second result image; that is, the at least one second object becomes part of the foreground of the second result image.
Optionally, the terminal device may further fuse the background in the first preview image with the background of the first result image to obtain the background of the second result image. Or after the terminal equipment acquires at least one second object in the first preview image, discarding the background in the first preview image.
In this way, since the terminal device synthesizes the at least one second object in the first preview image and the first result image into the second result image, the second result image can contain both the at least one first object and the at least one second object. That is, the terminal device can acquire an image containing a plurality of portraits in real time.
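Step S203b can be illustrated with a mask-based paste that keeps the result image's background and discards the preview image's background (the second of the two options described above). The function name `synthesize` and the array shapes are assumptions made for illustration:

```python
import numpy as np

def synthesize(result_image, preview, mask):
    # Paste the segmented object (mask == True) from the preview frame
    # onto the first result image; the preview's background is discarded.
    out = result_image.copy()
    out[mask] = preview[mask]
    return out

first_result = np.zeros((4, 4, 3), dtype=np.uint8)    # holds the first object
preview = np.full((4, 4, 3), 255, dtype=np.uint8)     # holds the second object
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                                 # where the second object sits
second_result = synthesize(first_result, preview, mask)
# second_result keeps first_result everywhere except inside the mask,
# where the preview's pixels (the second object) are pasted in
```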
Further, optionally, after S203, the image processing method provided in the embodiment of the present invention may further include S204:
S204, the terminal device saves the second result image.
It can be understood that, after the terminal device captures the second result image, the terminal device may save it automatically or upon the user's trigger. For example, the second result image may be saved to a storage area corresponding to an album application of the terminal device.
Optionally, after saving the second result image, the terminal device may exit the currently running shooting process, or exit the currently running camera application.
Optionally, in the embodiment of the present invention, in a case where the terminal device displays the shooting preview frame on the screen, a save control may further be displayed, which is used to trigger the terminal device to execute a save function, that is, to save the result image (e.g., the second result image) displayed in real time.
For example, the user may make a click input to a save control displayed on the screen of the terminal device to trigger the terminal device to save the second result image.
Exemplarily, as shown in fig. 3, a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention is provided. A shooting preview frame 31 is displayed on the screen of the terminal device in fig. 3, and the shooting preview frame 31 contains a first preview image a; the screen also displays a first resulting image B superimposed with a transparency of 50%. The first preview image a contains a portrait a1 (i.e., a second object), and the first result image B contains a portrait B1 (i.e., a first object). In addition, a "shooting" control 32 and a "save" control 33 are also displayed on the screen of the terminal device.
Further, after the user makes a click input (e.g., a first input) to the "shooting" control 32 shown in fig. 3, the shooting preview frame 31 may be displayed on the screen of the terminal device as shown in fig. 4, and a second result image C composed of the first preview image A and the first result image B is displayed in a superimposed manner with a transparency of 50%; in addition, fig. 4 shows that a new preview image D is contained in the shooting preview frame 31.
Further, in the case where the content shown in fig. 4 is displayed on the screen of the terminal device, the terminal device may save the second result image C upon receiving the user's input to the "save" control 33.
In this way, the terminal device can save the second result image in real time, so that the user can conveniently view it at any time afterwards, which improves the human-computer interaction performance during creative fun shooting.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention. The terminal device 50 shown in fig. 5 includes: a receiving module 51 and a processing module 52; a receiving module 51, configured to receive a first input of a user when the first preview image and the first result image are displayed in a preview screen in an overlapping manner with preset parameters; a processing module 52, configured to capture a first preview image and a first result image in the preview screen in response to the first input received by the receiving module 51, and obtain a second result image; the first result image comprises at least one first object, the first preview image comprises at least one second object, and the at least one first object and the at least one second object are different.
Optionally, the terminal device 50 further includes: a display module, configured to, after the processing module 52 captures the first preview image and the first result image in the preview screen to obtain the second result image, display the first preview image and the second result image in the preview screen in an overlapping manner with the preset parameters.
Optionally, the processing module 52 is specifically configured to obtain at least one second object from the first preview image; synthesizing the at least one second object and the first result image into a second result image; wherein a display position of the at least one second object in the second result image is determined by the first input.
Optionally, the preset parameter includes at least one of a preset transparency and a preset stacking width; the preset transparency is the transparency of the image in the preview image, and the preset superposition width is the width of different images in the preview image overlapped in the target direction.
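A hedged sketch of how the preset transparency and preset stacking width might combine when overlaying the result image on the preview. Treating the stacking width as a left-most strip of blended columns is an assumption for illustration — the patent only says the images overlap by that width in a target direction, without fixing the geometry; the name `overlay_preview` is likewise invented:

```python
import numpy as np

def overlay_preview(preview, result_image, transparency=0.5, overlap_width=None):
    # Blend the result image over the preview at the preset transparency.
    # If overlap_width (the preset stacking width) is given, only that
    # many columns from the left are blended; otherwise the full frame is.
    out = preview.astype(np.float64)
    w = preview.shape[1] if overlap_width is None else overlap_width
    out[:, :w] = (1 - transparency) * out[:, :w] + transparency * result_image[:, :w]
    return out.astype(np.uint8)

preview = np.full((4, 6, 3), 200, dtype=np.uint8)
result = np.zeros((4, 6, 3), dtype=np.uint8)
blended = overlay_preview(preview, result, transparency=0.5, overlap_width=3)
# columns 0-2 are blended to 100; columns 3-5 keep the preview value 200
```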
Optionally, the first resulting image is obtained by taking at least one image before receiving the first input.
Optionally, the terminal device 50 further includes: a storage module; and the storage module is used for storing the second result image after the display module displays the first preview image and the second result image in the preview image in an overlapping manner by using the preset parameters.
The terminal device 50 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the foregoing method embodiments; to avoid repetition, details are not described here again.
The terminal device provided by the embodiment of the present invention can, during the process of shooting images, synthesize the currently previewed image and the current result image into a new result image in real time. The user can quickly and conveniently trigger the terminal device to generate an image including multiple figures, without the terminal device switching applications multiple times to post-process multiple figure images into one figure image. Therefore, the user's operations during creative fun shooting are simplified, and the flexibility of creative fun shooting is improved.
Fig. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 6 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
A user input unit 107, configured to receive a first input of a user when the first preview image and the first result image are displayed in a preview screen in an overlapping manner with a preset parameter; a processor 110 for capturing a first preview image and a first result image in a preview screen in response to a first input received by the user input unit 107, resulting in a second result image; the first result image comprises at least one first object, the first preview image comprises at least one second object, and the at least one first object and the at least one second object are different.
The terminal device provided by the embodiment of the present invention can, during the process of shooting images, synthesize the currently previewed image and the current result image into a new result image in real time. The user can quickly and conveniently trigger the terminal device to generate an image including multiple figures, without the terminal device switching applications multiple times to post-process multiple figure images into one figure image. Therefore, the user's operations during creative fun shooting are simplified, and the flexibility of creative fun shooting is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
the audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 6, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

1. An image processing method, characterized in that the method comprises:
Receiving a first input of a user under the condition that a first preview image and a first result image are displayed in a preview picture in an overlapping mode according to preset parameters;
Responding to the first input, shooting the first preview image and a first result image in the preview picture to obtain a second result image;
Wherein the first result image includes at least one first object therein, the first preview image includes at least one second object therein, and the at least one first object and the at least one second object are different.
2. the method of claim 1, wherein after capturing the first preview image and the first result image in the preview screen and obtaining a second result image, the method further comprises:
And displaying a first preview image and the second result image in the preview picture in an overlapping manner according to the preset parameters.
3. The method according to claim 1 or 2, wherein the capturing the first preview image and the first result image in the preview screen to obtain a second result image comprises:
acquiring the at least one second object from the first preview image;
Synthesizing the at least one second object and the first result image into the second result image;
Wherein a display position of the at least one second object in the second resultant image is determined by the first input.
4. The method according to claim 1 or 2, wherein the preset parameters comprise at least one of a preset transparency and a preset superimposition width;
The preset transparency is the transparency of the image in the preview image, and the preset superposition width is the width of different images in the preview image overlapped in the target direction.
5. The method of claim 1, wherein the first resulting image is obtained by taking at least one image prior to receiving the first input.
6. The method according to claim 2, wherein after the first preview image and the second result image are displayed in the preview screen in an overlapping manner with the preset parameter, the method further comprises:
Saving the second result image.
7. A terminal device, characterized in that the terminal device comprises: the device comprises a receiving module and a processing module;
The receiving module is used for receiving a first input of a user under the condition that a first preview image and a first result image are displayed in an overlapping manner with preset parameters in a preview picture;
The processing module is used for responding to the first input received by the receiving module, shooting a first preview image and the first result image in the preview picture, and obtaining a second result image;
Wherein the first result image includes at least one first object therein, the first preview image includes at least one second object therein, and the at least one first object and the at least one second object are different.
8. The terminal device according to claim 7, wherein the terminal device further comprises: a display module;
The display module is used for shooting a first preview image and the first result image in the preview picture by the processing module, and after a second result image is obtained, overlapping and displaying the first preview image and the second result image in the preview picture according to the preset parameters.
9. The terminal device according to claim 7 or 8,
The processing module is specifically configured to acquire the at least one second object from the first preview image;
synthesizing the at least one second object and the first result image into the second result image;
wherein a display position of the at least one second object in the second resultant image is determined by the first input.
10. The terminal device according to claim 7 or 8, wherein the preset parameter comprises at least one of a preset transparency and a preset superimposition width;
The preset transparency is the transparency of the image in the preview image, and the preset superposition width is the width of different images in the preview image overlapped in the target direction.
11. The terminal device of claim 7, wherein the first resulting image is obtained by taking at least one image before receiving the first input.
12. The terminal device according to claim 8, wherein the terminal device further comprises: a storage module;
And the storage module is used for storing the second result image after the display module displays the first preview image and the second result image in the preview image in an overlapping manner according to the preset parameters.
13. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 6.
CN201910901692.XA 2019-09-23 2019-09-23 image processing method and terminal equipment Pending CN110545385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910901692.XA CN110545385A (en) 2019-09-23 2019-09-23 image processing method and terminal equipment


Publications (1)

Publication Number Publication Date
CN110545385A true CN110545385A (en) 2019-12-06

Family

ID=68714275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910901692.XA Pending CN110545385A (en) 2019-09-23 2019-09-23 image processing method and terminal equipment

Country Status (1)

Country Link
CN (1) CN110545385A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111010511A (en) * 2019-12-12 2020-04-14 维沃移动通信有限公司 Panoramic body-separating image shooting method and electronic equipment
CN111669503A (en) * 2020-06-29 2020-09-15 维沃移动通信有限公司 Photographing method and device, electronic equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491298A (en) * 2013-09-13 2014-01-01 Huizhou TCL Mobile Communication Co., Ltd. Multi-region real-time synthesis photographing method and touch control terminal
CN105491309A (en) * 2015-11-30 2016-04-13 Dongguan Coolpad Software Technology Co., Ltd. Image display method and device
CN105657247A (en) * 2015-11-20 2016-06-08 LeTV Mobile Intelligent Information Technology (Beijing) Co., Ltd. Secondary exposure photographing method and apparatus for electronic device
CN105959551A (en) * 2016-05-30 2016-09-21 Nubia Technology Co., Ltd. Shooting device, shooting method and mobile terminal
CN106303229A (en) * 2016-08-04 2017-01-04 Nubia Technology Co., Ltd. Photographing method and device
CN109218630A (en) * 2017-07-06 2019-01-15 Tencent Technology (Shenzhen) Co., Ltd. Multimedia information processing method and device, terminal, and storage medium
KR20190073802A (en) * 2017-12-19 2019-06-27 소수영 Method for photographing of mobile device by composition guide setting


Similar Documents

Publication Publication Date Title
CN111541845B (en) Image processing method and device and electronic equipment
CN110971823B (en) Parameter adjusting method and terminal equipment
CN108495029B (en) Photographing method and mobile terminal
CN109862267B (en) Shooting method and terminal equipment
US20220279116A1 (en) Object tracking method and electronic device
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN109905603B (en) Shooting processing method and mobile terminal
CN111010512A (en) Display control method and electronic equipment
CN111597370B (en) Shooting method and electronic equipment
CN111010511B (en) Panoramic body-separating image shooting method and electronic equipment
CN111050077B (en) Shooting method and electronic equipment
CN111159449B (en) Image display method and electronic equipment
CN110798621A (en) Image processing method and electronic equipment
CN109120853B (en) Long exposure image shooting method and terminal
CN108924422B (en) Panoramic photographing method and mobile terminal
CN109246351B (en) Composition method and terminal equipment
CN111083374B (en) Filter adding method and electronic equipment
CN110769154B (en) Shooting method and electronic equipment
CN110086998B (en) Shooting method and terminal
CN110545385A (en) image processing method and terminal equipment
CN111178306B (en) Display control method and electronic equipment
CN111131706B (en) Video picture processing method and electronic equipment
CN110958387B (en) Content updating method and electronic equipment
CN110913133B (en) Shooting method and electronic equipment
CN110493457B (en) Terminal device control method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-12-06