CN113542578A - Image processing method, electronic equipment and computer storage medium

Info

Publication number
CN113542578A
Authority
CN
China
Prior art keywords
picture
acquiring
display area
focal length
computer
Prior art date
Legal status
Pending
Application number
CN202010317496.0A
Other languages
Chinese (zh)
Inventor
校松华
Current Assignee
Pateo Connect Nanjing Co Ltd
Original Assignee
Pateo Connect Nanjing Co Ltd
Priority date
Filing date
Publication date
Application filed by Pateo Connect Nanjing Co Ltd
Priority to CN202010317496.0A
Publication of CN113542578A
Legal status: Pending

Classifications

    • H04N - Pictorial communication, e.g. television (H - Electricity; H04 - Electric communication technique)
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to an image processing method, an electronic device, and a storage medium. The method comprises: acquiring a first picture shot by a camera device at a first focal length; selecting a local part in the first picture, and acquiring an object contained in the local part of the first picture; acquiring a second picture of the object shot by the camera device at a second focal length, wherein the second focal length is different from the first focal length; and synthesizing the first picture and the second picture to generate a third picture.

Description

Image processing method, electronic equipment and computer storage medium
Technical Field
Embodiments of the present disclosure generally relate to electronic information technology, and more particularly, to an image processing method, an electronic device, and a computer storage medium.
Background
When a user takes a photo, fine details are sometimes not well rendered in the overall shot; the user cannot capture the whole scene while also displaying the details of an object clearly, which results in a poor user experience.
Disclosure of Invention
The invention provides an image processing method, an electronic device, and a computer storage medium, which can shoot an overall view and a detail view under different conditions, automatically synthesize the picture the user requires, and improve the user experience.
According to a first aspect of the present disclosure, an image processing method is provided. The method comprises: acquiring a first picture shot by a camera device at a first focal length; selecting a local part in the first picture, and acquiring a second picture, shot at a second focal length, of an object contained in the local part of the first picture; and synthesizing the first picture and the second picture.
Further, the first picture is displayed in a first display area, and the second picture is displayed in a second display area located within the first display area.
Further, the second focal length may be selected by zooming the second display area.
Further, selecting the local part in the first picture includes one of the following:
acquiring the position of the current touch point or cursor, and selecting a preset-shape area centered on that position;
drawing the selected area on the first picture; or
acquiring the position of the current touch point or cursor, identifying the object in the first picture that contains that position, and automatically selecting the area of the first picture that contains the identified object.
Further, in response to selection of a local part in the first picture, the focus of the camera device is set to the center of the local part of the first picture, and the second picture is acquired based on the set focus.
According to a second aspect of the present disclosure, there is also provided an electronic device, the device comprising: a memory configured to store one or more computer programs; and a processor coupled to the memory and configured to execute the one or more programs to cause the device to perform the method of the first aspect of the disclosure.
According to a third aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium. The non-transitory computer readable storage medium has stored thereon machine executable instructions which, when executed, cause a machine to perform the method of the first aspect of the disclosure.
The beneficial effects of the invention are as follows: with this method, an overall macroscopic image and a local microscopic image of the same subject can be obtained, and richer image information is provided to the user through a simpler and more convenient operation, thereby effectively improving the user experience.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
FIG. 1 schematically shows a schematic diagram of an information handling environment 100 according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of a method 200 for information processing, according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure.
Like or corresponding reference characters designate like or corresponding parts throughout the several views.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, and the like, as well as stationary terminals such as a digital TV, a desktop computer, and the like. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will understand that the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals, except for elements used specifically for mobile purposes.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an a/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190, etc. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. Elements of the mobile terminal will be described in detail below.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station and output via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify of the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to inform the user. By providing such a tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
As shown in Fig. 2, an embodiment of the present invention provides a photographing method, including:
Step 201: displaying, to the user through a first display area, a first picture obtained by the camera.
Step 202: selecting a local part in the first picture.
Step 203: displaying, to the user through a second display area, a second picture of the selected local part obtained at a second focal length.
Step 204: synthesizing the first picture and the second picture.
With the photographing method provided by this embodiment of the invention, an overall macroscopic image and a local microscopic image of the same subject can be obtained in a single shot, and richer image information is provided to the user through a simpler and more convenient operation, thereby effectively improving the user experience.
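Purely as an illustration (the patent discloses no code), steps 201-204 can be sketched in Python. The camera object and its capture, set_focal_length, and set_focus_point methods are hypothetical stand-ins for a device camera API; only the Pillow compositing calls are real library calls:

    from PIL import Image  # Pillow; used only for the step-204 compositing

    def region_center(region):
        x, y, w, h = region
        return (x + w // 2, y + h // 2)

    def shoot_composite(camera, region, second_focal_length):
        """Steps 201-204: overall shot, local selection, detail shot, synthesis."""
        first = camera.capture()                      # step 201: first picture (PIL.Image)

        # Steps 202-203: refocus on the selected local part and take the detail shot.
        camera.set_focal_length(second_focal_length)  # hypothetical camera API
        camera.set_focus_point(region_center(region))
        second = camera.capture()

        # Step 204: embed the detail picture at the selected position of the overall picture.
        x, y, w, h = region
        third = first.copy()
        third.paste(second.resize((w, h)), (x, y))
        return third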
Specifically, the selection of the local part in the first picture may be implemented in any of the following ways (see the sketch following these notes):
acquiring the position of the current touch point or cursor, and selecting a preset-shape area centered on that position;
drawing the selected area on the first picture; or
acquiring the position of the current touch point or cursor, identifying the object in the first picture that contains that position, and automatically selecting the area of the first picture that contains the identified object.
The preset shape may be a rectangle, a square, or a circle.
The selected area drawn on the first picture may be an area drawn on the screen with a finger or a stylus, or an area drawn on the screen with a mouse, a trackball, or another input device.
Acquiring the position of the current touch point or cursor and identifying the object in the first picture that contains that position can be realized with existing image processing and image recognition methods; these belong to the prior art and are not described again here.
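A minimal sketch of the first selection mode (a preset-shape area centered on the touch point), assuming pixel coordinates, a rectangular preset shape smaller than the frame, and illustrative function and parameter names:

    def preset_region(touch_x, touch_y, frame_w, frame_h, shape_w=200, shape_h=200):
        """Rectangle of a preset size centered on the touch point or cursor,
        clamped so that it stays entirely inside the first picture."""
        x = min(max(touch_x - shape_w // 2, 0), frame_w - shape_w)
        y = min(max(touch_y - shape_h // 2, 0), frame_h - shape_h)
        return (x, y, shape_w, shape_h)  # same (x, y, w, h) form used in the sketch above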
In an embodiment, the user may perform the selection operation multiple times on different parts of the first picture, and the method used to select the local area may differ each time.
In an embodiment, the second focal length may be determined by scaling the selected local area of the first picture: shrinking the local picture decreases the focal length, while enlarging it increases the focal length.
Further, the user sometimes needs to take the second picture with a focus different from that of the first picture; the focus for the second picture may be placed automatically at the center of the selected local area, or the focus position may be changed by clicking.
Specifically, a click operation may jump the focus to a different object, and a touch zoom (pinch) operation may adjust the magnification or reduction factor, as in the sketch below.
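A minimal sketch of this gesture mapping, assuming a pinch scale factor reported by the touch layer; the focal-length limits are placeholder values, not values from the patent:

    def second_focal_length(first_focal_mm, pinch_scale, min_mm=24.0, max_mm=120.0):
        """Pinch out (scale > 1) enlarges the local picture and increases the focal
        length; pinch in (scale < 1) shrinks it and decreases the focal length."""
        return min(max(first_focal_mm * pinch_scale, min_mm), max_mm)

    def focus_point(region, click_xy=None):
        """Default focus: center of the selected local area; a click overrides it."""
        if click_xy is not None:
            return click_xy
        x, y, w, h = region
        return (x + w // 2, y + h // 2)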
Specifically, in the embodiment of the present invention, the second display area may be superimposed on the first display area in a floating manner, with each display area showing its corresponding captured picture. The user can thus directly see the effect of overlaying the locally enlarged image on the overall image, which further improves the user experience.
In another embodiment of the present invention, the first picture corresponding to the first display area and the second picture corresponding to the second display area may be synthesized into a photo, and the synthesized photo may then be saved. Optionally, the synthesis may directly embed the locked object of the second picture at its corresponding position in the first picture, may combine one picture as background with the other as foreground, or may take other forms.
In another embodiment of the present invention, the first picture and the second picture are stored in a volatile memory, such as system memory, where they can be processed more quickly (for example, zoomed in, zoomed out, and synthesized). The third picture obtained by synthesizing the first picture and the second picture is stored in a non-volatile memory; as the finally shot photograph, it can be kept there for a longer time. A sketch of this composition-and-save path follows.
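A minimal Pillow sketch of the background/foreground composition with in-memory working copies and a persisted result; the byte-buffer inputs, region tuple, and output path are illustrative assumptions, not part of the patent:

    from io import BytesIO
    from PIL import Image

    def compose_and_save(first_bytes, second_bytes, region, out_path):
        """Work on the frames in RAM, embed the detail shot at the selected
        region of the overall shot, and persist only the synthesized result."""
        first = Image.open(BytesIO(first_bytes)).convert("RGB")   # volatile working copy
        second = Image.open(BytesIO(second_bytes)).convert("RGB")
        x, y, w, h = region
        first.paste(second.resize((w, h)), (x, y))                # foreground over background
        first.save(out_path, format="JPEG")                       # third picture -> non-volatile storage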
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Reference to step numbers in this specification is only for distinguishing between steps and is not intended to limit the temporal or logical relationship between steps, which includes all possible scenarios unless the context clearly dictates otherwise.
Moreover, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the disclosure and form different embodiments. For example, any of the embodiments claimed in the claims can be used in any combination.
Various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. The present disclosure may also be embodied as device or system programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present disclosure may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
The present disclosure may be methods, apparatus, systems, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can execute the computer-readable program instructions and implement aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. A method of image processing, comprising:
acquiring a first picture shot by a camera device at a first focal length;
selecting a local part in the first picture, and acquiring an object contained in the local part of the first picture;
acquiring a second picture of the object shot by the camera device at a second focal length, wherein the second focal length is different from the first focal length; and
synthesizing the first picture and the second picture to generate a third picture.
2. The method of claim 1, comprising:
displaying the first picture in a first display area; and
displaying the second picture in a second display area, wherein the second display area is located within the first display area, and the first display area and the second display area are both on a display screen of the camera device.
3. The method of claim 2, comprising:
acquiring a scaling signal and scaling the second display area based on the signal, so as to select the second focal length.
4. The method of claim 2, wherein selecting the local part in the first picture comprises at least one of:
acquiring the position of the current touch point or cursor, and selecting a preset-shape area centered on that position;
drawing the selected area on the first picture; or
acquiring the position of the current touch point or cursor, identifying the object in the first picture that contains that position, and automatically selecting the area of the first picture that contains the identified object.
5. The method of claim 1, comprising:
in response to selection of a local part in the first picture, setting a focus of the camera device to the center of the local part of the first picture, and acquiring the second picture based on the set focus.
6. The method of claim 1, wherein the first and second pictures are stored in a volatile memory and the synthesized third picture is stored in a non-volatile memory.
7. An electronic device, comprising:
a memory configured to store one or more computer programs; and
a processor coupled to the memory and configured to execute the one or more programs to cause the device to perform the method of any of claims 1-6.
8. A non-transitory computer readable storage medium having stored thereon machine executable instructions which, when executed, cause a machine to perform the steps of the method of any of claims 1-6.
CN202010317496.0A 2020-04-21 2020-04-21 Image processing method, electronic equipment and computer storage medium Pending CN113542578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010317496.0A CN113542578A (en) 2020-04-21 2020-04-21 Image processing method, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN113542578A 2021-10-22

Family

ID=78093882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010317496.0A Pending CN113542578A (en) 2020-04-21 2020-04-21 Image processing method, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113542578A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150009372A1 (en) * 2013-07-08 2015-01-08 Lg Electronics Inc. Electronic device and method of operating the same
CN106888349A (en) * 2017-03-30 2017-06-23 努比亚技术有限公司 A kind of image pickup method and device

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20211022)