CN109286750B - Zooming method based on intelligent terminal and intelligent terminal

Info

Publication number: CN109286750B (grant of application CN201811117681.4A)
Other publications: CN109286750A (application publication)
Authority: CN (China)
Language: Chinese (zh)
Inventor: 朱斌杰
Assignee (current and original): Chongqing Chuanyin Technology Co ltd
Priority / filing date: 2018-09-21
Legal status: Active (granted)
Prior art keywords: pixel, local area, camera, intelligent terminal, image

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a zooming method based on an intelligent terminal, and an intelligent terminal. The zooming method is implemented with a single camera and comprises the following steps: judging whether the camera is executing a shooting operation and obtaining an initial image; when the camera performs a shooting operation, selecting a local area on the initial image; performing double-pixel decoding on the local area through a full-pixel dual-core sensor in the camera; and magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image. With this technical scheme, the zoom function can be realized with a single camera, which saves cost, and the definition of the zoomed image is kept the same as that of the original image.

Description

Zooming method based on intelligent terminal and intelligent terminal
Technical Field
The invention relates to the field of intelligent terminals, in particular to a zooming method based on an intelligent terminal and the intelligent terminal.
Background
At present, intelligent terminal devices such as smart phones and tablet computers have become an indispensable part of people's lives. An intelligent terminal runs an operating system, supports human-computer interaction and provides a variety of application programs that make daily life more convenient. With the development of the technology, intelligent terminals have further integrated camera devices to provide a photographing function, so that digital photos can be captured and stored on the terminal.
However, an intelligent terminal is not a professional camera device, and limitations such as its structural design make it difficult to let the user manually move the lens to focus; focusing is the process of moving the lens until the image of the focus area is at its sharpest. At present, the zooming modes available on intelligent terminals are as follows:
1. Digital interpolation zoom (digital zoom): zooming is realized by interpolating the existing image. A processor in the intelligent terminal increases the area occupied by each pixel of the image, thereby achieving magnification, much like enlarging a picture with image-processing software: a part of the pixels on the original CCD image sensor is enlarged by an "interpolation" processing step, and an interpolation algorithm spreads those sensor pixels over the whole picture. Digital zoom does not actually change the focal length of the lens. Its concrete principle is that software examines the colors around each existing pixel and inserts new pixels computed by a dedicated algorithm from those surrounding colors (a minimal sketch of this interpolation idea is given after the comparison below).
2. Dual-camera optical zoom by switching: zooming is realized by switching between two cameras with different focal lengths. This requires the intelligent terminal to be equipped with two cameras of different focal lengths that work in coordination and are switched according to the shooting requirement, thereby realizing zoom.
3. Dual-camera zoom by cropping: a high-pixel camera is paired with a low-pixel camera, and the image from the high-pixel camera is cropped and enlarged to achieve an optical-zoom-like effect.
Among the above zooming modes, mode 1 is a pure software approach; its drawback is that the result is limited by the camera's pixel count, so definition deteriorates, or the image is even distorted, when it is enlarged. Modes 2 and 3 both rely on additional hardware, and their drawback is higher cost.
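The interpolation idea behind mode 1 can be illustrated with a short sketch. This is not code from the patent; it is a minimal, assumed implementation in Python/NumPy using bilinear interpolation as one common choice of the "special algorithm" mentioned above, and the function name is illustrative only.

```python
import numpy as np

def digital_interpolation_zoom(image: np.ndarray, factor: float) -> np.ndarray:
    """Mode 1: enlarge an image purely by inserting interpolated pixels.

    No new optical information is added, which is why definition
    deteriorates as the factor grows (the drawback noted above)."""
    h, w = image.shape[:2]
    new_h, new_w = int(round(h * factor)), int(round(w * factor))

    # Map every output pixel back to fractional source coordinates.
    ys = np.linspace(0.0, h - 1.0, new_h)
    xs = np.linspace(0.0, w - 1.0, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, h - 1), np.clip(x0 + 1, 0, w - 1)
    wy = (ys - y0)[:, None, None]   # vertical blend weights
    wx = (xs - x0)[None, :, None]   # horizontal blend weights

    img = image.astype(np.float32)
    if img.ndim == 2:               # treat grayscale as a 1-channel image
        img = img[..., None]

    # Bilinear blend of the four neighbouring source pixels.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    out = top * (1 - wy) + bot * wy
    return out.squeeze().astype(image.dtype)
```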
Therefore, it is desirable to provide a zooming method based on an intelligent terminal that can realize zooming with a single camera while ensuring that the definition of the image is not affected.
Disclosure of Invention
In order to overcome the above technical defects, the invention aims to provide a zooming method based on an intelligent terminal, and an intelligent terminal, which can realize zooming with a single camera while ensuring good image definition.
The invention discloses a zooming method based on an intelligent terminal, which is realized by a single camera and comprises the following steps:
judging whether the camera executes a shooting operation or not and obtaining an initial image;
when the camera carries out a shooting operation, selecting a local area on the initial image;
performing double-pixel decoding on the local area through a full-pixel dual-core sensor in the camera;
and magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image.
Preferably, in the step of selecting a local area on the initial image, the local area is selected according to a user operation received by a touch screen of the intelligent terminal.
Preferably, the step of selecting a local area on the initial image further comprises:
the touch screen receives a touch operation of a user;
acquiring the center position of the touch operation;
and selecting a rectangular area as the local area according to a preset size by taking the central position as a center.
Preferably, the step of selecting a local area on the initial image further comprises:
displaying a rectangular frame on the touch screen;
the touch screen receives a sliding operation of a user and adjusts the size of the rectangular frame;
and after the sliding operation is finished, taking the area in the rectangular frame as the local area.
Preferably, the local area is preset to a central portion of the initial image.
Preferably, after the step of magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image, the zooming method further comprises the following step:
and displaying the zoomed image on a touch screen of the intelligent terminal.
In a second aspect of the present invention, an intelligent terminal is disclosed, which includes a processor, a memory, and a camera, wherein the camera employs a full-pixel dual-core sensor, the memory stores a computer program, and the computer program implements the following steps when executed by the processor:
judging whether the camera executes a shooting operation or not and obtaining an initial image;
when the camera carries out a shooting operation, selecting a local area on the initial image;
performing double-pixel decoding on the local area through the full-pixel dual-core sensor;
and magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image.
Preferably, each microlens of the full-pixel dual-core sensor corresponds to two light sensing units, and each light sensing unit converts a light signal received by the corresponding microlens into an electrical signal; and in the step of performing double-pixel decoding on the local area through the full-pixel double-core sensor, each micro lens corresponding to the local area generates electric signals of two pixel points.
Preferably, a color filter is further disposed between the micro lens and the light sensing unit.
Preferably, in the step of selecting a local area on the initial image, the local area is selected according to a user operation received by a touch screen of the intelligent terminal.
After the above technical scheme is adopted, compared with the prior art, the invention has the following beneficial effects:
1. the zoom function can be realized with a single camera, which saves cost;
2. the definition of the zoomed image is ensured to be the same as that of the original image.
Drawings
FIG. 1 is a schematic flowchart of a zooming method based on an intelligent terminal according to an embodiment of the present invention;
FIG. 2 is a schematic view of the detailed process of step S102 in FIG. 1;
FIG. 3 is a flowchart illustrating step S102 of FIG. 1 according to another embodiment of the present invention;
FIG. 4 is a block diagram of an intelligent terminal according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a full-pixel dual-core sensor of an intelligent terminal.
Reference numerals:
10-intelligent terminal, 11-memory, 12-processor, 13-camera, 131-microlens, 132-light sensing unit, 133-color filter.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "while" or "in response to determining", depending on the context.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
Referring to fig. 1, a schematic flow chart of a zoom method based on an intelligent terminal according to an embodiment of the present invention is shown, where the zoom method is implemented by a single camera, and includes the following steps:
s101: and judging whether the camera executes a shooting operation or not and obtaining an initial image.
The shooting operation in this step is an ordinary photo-shooting operation, that is, the user operates the intelligent terminal to shoot a target. When the intelligent terminal executes a shooting operation, the camera is invoked and image processing takes place, so whether the camera is executing a shooting operation can be judged from the state of the camera-related driver, or from a status flag of an image-processing-related class, method or function. The result of the camera performing a shooting operation is an initial image, i.e. an image obtained directly at the current camera resolution without any algorithmic processing.
In this embodiment, a single-camera working mode is adopted when shooting through the camera. The intelligent terminal may also include a front camera and a rear camera, each of which works independently so as to guarantee the single-camera working condition. In other embodiments, the intelligent terminal may have only one camera.
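Step S101 can be pictured with a small, purely hypothetical sketch. `CameraState`, `shooting` and `last_frame` are invented stand-ins for the driver and image-pipeline status flags the paragraph above refers to; a real camera stack exposes this state through its own APIs.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class CameraState:
    """Hypothetical stand-in for the driver / image-pipeline status flags
    the description refers to; a real camera stack exposes this differently."""
    shooting: bool = False                    # status flag: shooting operation in progress
    last_frame: Optional[np.ndarray] = None   # unprocessed frame from the single camera

def get_initial_image(state: CameraState) -> Optional[np.ndarray]:
    """Step S101: return the initial image only if a shooting operation
    is actually being executed; otherwise there is nothing to zoom."""
    if not state.shooting:
        return None                           # judgment false: no shooting operation
    return state.last_frame                   # initial image, no algorithmic processing applied
```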
S102: when the camera executes a shooting operation, a local area on the initial image is selected.
If the judgment in step S101 is true, the camera has executed a shooting operation and an initial image has been obtained, and this step performs a region-selection operation on that image. Its purpose is to select a local area of the initial image in preparation for the subsequent zoom. Zooming in generally requires enlarging the image, and since the display interface of the intelligent terminal is limited, a local area of the initial image must be selected for enlargement. The local area may default to a fixed position of the display interface, such as its center; since the initial image is generally displayed full screen, the local area may equivalently default to the central portion of the initial image, and it may be a rectangle similar in shape to the display interface. The local area may also be selected by the user through the touch screen; the specific implementations are explained in the detailed steps below.
S103: and performing double-pixel decoding on the local area through a full-pixel double-core sensor in the camera.
This step is the core of the technical effect of the whole invention. Its implementation is based on the full-pixel dual-core sensor in the camera: at the position of every pixel point, the sensor has two photodiodes, A and B, and during focusing the two photodiodes detect the signals of an A image and a B image respectively, i.e. two pieces of pixel information can be formed at one pixel point position. "Full-pixel dual-core" means that every pixel of the camera is covered by such a pair of photodiodes. In the prior art, the signals of the A image and the B image are normally superposed and finally become the image signal of one pixel point. In this step, by contrast, double-pixel decoding is performed at the same pixel point position to obtain the image signals of two pixel points; the range of the double-pixel decoding is the pixel points corresponding to the local area, so this step yields twice as much pixel image information as there are pixel point positions in the local area.
Apart from decoding the image information of two pixels from one pixel point position, the rest of the double-pixel decoding process is the same as ordinary image decoding: optical signals are converted into electrical signals by the light sensing elements and then converted into digital image signals, forming computer-readable image information.
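As an illustration of step S103, the sketch below models the raw readout as two arrays, one per photodiode, and keeps both signals inside the local area instead of superposing them, which doubles the pixel information as described. The left/right interleaving layout and all names are assumptions; the patent does not specify the sub-pixel arrangement.

```python
import numpy as np

def double_pixel_decode(raw_a: np.ndarray, raw_b: np.ndarray,
                        region: tuple) -> np.ndarray:
    """Within `region`, keep the A and B photodiode signals as two separate
    pixels per microlens instead of superposing them (step S103).

    raw_a, raw_b : (H, W) arrays holding, per microlens, the signal of
                   photodiode A and photodiode B respectively.
    region       : (row_slice, col_slice) describing the local area.
    Returns an array with twice the pixel information of the local area;
    a left/right interleave along each row is assumed here."""
    a, b = raw_a[region], raw_b[region]
    decoded = np.empty((a.shape[0], a.shape[1] * 2), dtype=a.dtype)
    decoded[:, 0::2] = a          # A signal -> even columns
    decoded[:, 1::2] = b          # B signal -> odd columns
    return decoded

def conventional_readout(raw_a: np.ndarray, raw_b: np.ndarray) -> np.ndarray:
    """Prior-art path: A and B are superposed into one pixel per position
    (a wider dtype may be needed to avoid overflow with integer raw data)."""
    return raw_a + raw_b
```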
S104: the local area after double-pixel decoding is magnified by a factor of two to obtain a zoomed image.
This is the final step of the zoom operation. Taking the image information of the local area obtained in step S103 as the basis, the local area is magnified by a factor of two, which yields the zoomed image. Because after step S103 the local area carries twice as much pixel information as the original image, the pixel density of the image obtained after the twofold enlargement is the same as that of the original image, which ensures that the definition of the zoomed image equals that of the original image.
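The sharpness argument of step S104 reduces to simple arithmetic. The check below assumes one possible reading, namely that "magnified by a factor of two" refers to the area of the local region; under that assumption the sample density is unchanged.

```python
def zoomed_pixel_density_check(region_w: int, region_h: int) -> float:
    """Why S104 preserves definition, assuming the twofold enlargement
    refers to the area of the local region (an assumed reading)."""
    samples_before = region_w * region_h   # one value per microlens in the local area
    samples_after = 2 * samples_before     # doubled by double-pixel decoding (S103)
    area_before = region_w * region_h
    area_after = 2 * area_before           # local area enlarged by a factor of two (S104)
    # Samples per unit area are unchanged, so no interpolated pixels are
    # needed and the zoomed image keeps the definition of the initial image.
    assert samples_after / area_after == samples_before / area_before
    return samples_after / area_after
```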
S105: and displaying the zoomed image on a touch screen of the intelligent terminal.
This step displays the zoomed image obtained in step S104. The intelligent terminal can acquire image information through the open interface of the camera and display the image information through the touch screen.
Referring to fig. 2, a detailed flowchart of step S102 in fig. 1 is shown, where step S102 further includes:
s102-1: the touch screen receives a touch operation of a user.
In this embodiment, step S102 is implemented through a user operation that selects the local area. The component on the intelligent terminal that receives the user operation is the touch screen, which receives the user's touch operation. Specifically, the touch operation may be any one of a single-click operation, a double-click operation, a long-press operation, and the like.
S102-2: and acquiring the center position of the touch operation.
In the touch operation of step S102-1, the contact between the finger and the touch screen takes the form of an area, so the center position of the touch operation needs to be obtained. Specifically, the center point of the contact area may be computed by a geometric algorithm and taken as the center position of the touch operation.
S102-3: and selecting a rectangular area as the local area according to a preset size by taking the central position as a center.
The local area is a preset rectangular area that is similar in shape to the display interface of the intelligent terminal, i.e. to the display area of the touch screen. The size of the local area can also be preset; it is preferably half the size of the touch screen's display interface, so that when the local area is enlarged by a factor of two, the enlarged area exactly fills the display area of the touch screen, giving a better display effect.
Through the above steps, the user can conveniently select the zoom target area by himself, which provides a better experience (a sketch of this selection step is given below).
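A minimal sketch of steps S102-1 to S102-3 follows. It assumes the touch contact is reported as a set of (x, y) samples and that the preset size is half of each display dimension (one reading of "half the size of the display interface"); both are assumptions, and the helper name is illustrative.

```python
import numpy as np

def local_area_from_touch(touch_points: np.ndarray, image_w: int, image_h: int,
                          frac: float = 0.5) -> tuple:
    """Steps S102-1 to S102-3: centre a preset rectangle on the touch contact.

    touch_points : (N, 2) array of (x, y) samples covering the finger's
                   contact patch; its centroid stands in for the
                   'geometric algorithm' mentioned in S102-2.
    frac         : preset size of the local area relative to the display,
                   per dimension (0.5 is one reading of 'half the size')."""
    cx, cy = touch_points.mean(axis=0)                 # centre of the contact area
    w, h = int(image_w * frac), int(image_h * frac)    # rectangle similar to the display
    x0 = int(np.clip(cx - w / 2, 0, image_w - w))      # clamp so the rectangle stays in the image
    y0 = int(np.clip(cy - h / 2, 0, image_h - h))
    return (slice(y0, y0 + h), slice(x0, x0 + w))
```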
Referring to fig. 3, which is a schematic flowchart of a specific process of step S102 in fig. 1 according to another embodiment of the present invention, step S102 further includes:
s102-4: and displaying a rectangular frame on the touch screen.
In this step a display function is executed and a rectangular frame is shown on the touch screen. While this step is executed, the touch screen also displays the initial image synchronously, so that the user can refer to it during the subsequent operation. The rectangular frame may be given a preset initial position and size, for example centered in the touch screen display area with half the size of that display area.
S102-5: the touch screen receives a sliding operation of a user and adjusts the size of the rectangular frame.
The sliding operation, i.e. sliding on the touch screen after the user's finger touches it, may be performed with a single finger or with two fingers. Specifically, the user may first touch the rectangular frame and then slide toward its inside or outside to shrink or enlarge it, while the touch screen displays the size change of the rectangular frame synchronously.
S102-6: and after the sliding operation is finished, taking the area in the rectangular frame as the local area.
In this step, the area inside the rectangular frame at the moment the sliding operation of step S102-5 ends is taken as the local area, which realizes the adjustment of the local area's size.
Through the above steps, the user is allowed to adjust the size of the local area, realizing a flexible zoom function (a sketch of this resize interaction is given below).
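The frame-resizing interaction of steps S102-4 to S102-6 can be sketched as a small state object. The class, the gesture sign convention and the size limits are all assumptions made for illustration; they are not prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class RectFrame:
    """Hypothetical model of the on-screen rectangular frame of S102-4 to S102-6.
    Coordinates are display pixels; the frame keeps the display's aspect ratio."""
    cx: float   # centre x
    cy: float   # centre y
    w: float    # current width
    h: float    # current height

    def apply_slide(self, delta: float, min_w: float = 64.0, max_w: float = 1080.0) -> None:
        """Resize on a sliding operation. The sign convention is assumed:
        a positive delta (slide toward the outside of the frame) enlarges it,
        a negative delta (slide toward the inside) shrinks it."""
        aspect = self.h / self.w
        self.w = min(max(self.w + 2.0 * delta, min_w), max_w)
        self.h = self.w * aspect            # height follows width, aspect ratio kept

    def as_local_area(self) -> tuple:
        """S102-6: once the sliding operation ends, the area inside the
        frame becomes the local area (returned as row/column slices)."""
        x0, y0 = int(self.cx - self.w / 2), int(self.cy - self.h / 2)
        return (slice(y0, y0 + int(self.h)), slice(x0, x0 + int(self.w)))
```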
Referring to fig. 4, which is a block diagram of an intelligent terminal according to an embodiment of the present invention, the intelligent terminal 10 includes a processor 12, a memory 11, and a camera 13, the camera 13 employs a full-pixel dual-core sensor, the memory 11 stores a computer program, and the computer program implements the following steps when executed by the processor 12:
judging whether the camera executes a shooting operation or not and obtaining an initial image;
when the camera carries out a shooting operation, selecting a local area on the initial image;
performing double-pixel decoding on the local area through the full-pixel dual-core sensor;
and magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image.
The content of the above steps is the same as the content of the steps of the zoom method based on the intelligent terminal in the embodiment shown in fig. 1, and the technical effect is also the same, which is not described again.
In this embodiment, a single-camera working mode is adopted when shooting through the camera. The intelligent terminal may also include a front camera and a rear camera, each of which works independently so as to guarantee the single-camera working condition; in other embodiments, the intelligent terminal may have only one camera. The full-pixel dual-core sensor in the camera has two photodiodes (photodiodes A and B) at every pixel position; during focusing, the two photodiodes detect the signals of an A image and a B image respectively, i.e. two pieces of pixel information can be formed at one pixel point position. "Full-pixel dual-core" means that every pixel of the camera is covered by such a pair of photodiodes.
FIG. 5 is a schematic structural diagram of the full-pixel dual-core sensor of the intelligent terminal. The sensor is composed of a plurality of microlenses 131; each microlens 131 corresponds to two light sensing units 132, and each light sensing unit 132 converts the optical signal received through its corresponding microlens 131 into an electrical signal. External light entering one microlens 131 therefore acts on two light sensing units 132 and is converted into two electrical signals. In the step of performing double-pixel decoding on the local area through the full-pixel dual-core sensor, each microlens corresponding to the local area generates the electrical signals of two pixel points, from which the image information of the two corresponding pixel points is formed. The intelligent terminal 10 makes full use of this characteristic of the full-pixel dual-core sensor: it forms the image information of two pixel points at the position of one pixel point, provides that information for zooming, magnifies the local area by a factor of two during zooming, and thereby keeps the definition of the zoomed image the same as that of the initial image.
Further, a color filter 133 is disposed between the microlens 131 and the light sensing unit 132. The color filter 133 filters light of a specific color, so that filter effects of different colors can be achieved.
As a further improvement of the computer program, in the step of selecting a local area on the initial image, the local area is selected according to a user operation received by a touch screen of the intelligent terminal. This improvement allows the user to select the local area by himself, so that zoom operations at different positions of the initial image can be realized flexibly.
It should be noted that the embodiments of the present invention have been described in terms of preferred embodiments, and not by way of limitation, and that those skilled in the art can make modifications and variations of the embodiments described above without departing from the spirit of the invention.

Claims (10)

1. A zooming method based on an intelligent terminal is characterized in that the zooming method is realized through a single camera and comprises the following steps:
judging whether the camera executes a shooting operation or not and obtaining an initial image;
when the camera carries out a shooting operation, selecting a local area on the initial image;
performing double-pixel decoding on the local area through a full-pixel dual-core sensor in the camera;
magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image;
the performing double-pixel decoding on the local area through a full-pixel dual-core sensor in the camera includes:
the full-pixel dual-core sensor is characterized in that two photodiodes are arranged at the position of each pixel point, when the full-pixel dual-core sensor is focused, the two photodiodes detect two signals respectively, two pixel information is formed at the position of one pixel point, the two pixel information at the same pixel point position are subjected to double-pixel decoding to obtain image signals of the two pixel points, and the double-pixel decoding range is the pixel point corresponding to the local area.
2. The zooming method of claim 1,
and in the step of selecting a local area on the initial image, selecting the local area according to user operation received by a touch screen of the intelligent terminal.
3. The zooming method of claim 2,
the step of selecting a local area on the initial image further comprises:
the touch screen receives a touch operation of a user;
acquiring the center position of the touch operation;
and selecting a rectangular area as the local area according to a preset size by taking the central position as a center.
4. The zooming method of claim 2,
the step of selecting a local area on the initial image further comprises:
displaying a rectangular frame on the touch screen;
the touch screen receives a sliding operation of a user and adjusts the size of the rectangular frame;
and after the sliding operation is finished, taking the area in the rectangular frame as the local area.
5. The zooming method of claim 1,
the local area is preset in the central part of the initial image.
6. The zooming method of any of claims 1-5,
after the step of magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image, the zooming method further comprises the following step:
and displaying the zoomed image on a touch screen of the intelligent terminal.
7. An intelligent terminal comprises a processor, a memory and a camera, and is characterized in that the camera adopts a full-pixel dual-core sensor, the memory stores a computer program, and the computer program realizes the following steps when being executed by the processor:
judging whether the camera executes a shooting operation or not and obtaining an initial image;
when the camera carries out a shooting operation, selecting a local area on the initial image;
performing double-pixel decoding on the local area through the full-pixel dual-core sensor;
magnifying the local area after double-pixel decoding by a factor of two to obtain a zoomed image;
the performing, by the full-pixel dual-core sensor, dual-pixel decoding on the local region includes:
the full-pixel dual-core sensor is characterized in that two photodiodes are arranged at the position of each pixel point, when the full-pixel dual-core sensor is focused, the two photodiodes detect two signals respectively, two pixel information is formed at the position of one pixel point, the two pixel information at the same pixel point position are subjected to double-pixel decoding to obtain image signals of the two pixel points, and the double-pixel decoding range is the pixel point corresponding to the local area.
8. The intelligent terminal of claim 7,
each micro lens of the full-pixel dual-core sensor corresponds to two light sensing units, and each light sensing unit converts a light signal received by the corresponding micro lens into an electric signal;
and in the step of performing double-pixel decoding on the local area through the full-pixel double-core sensor, each micro lens corresponding to the local area generates electric signals of two pixel points.
9. The intelligent terminal of claim 8,
and a color filter is also arranged between the micro lens and the light sensing unit.
10. The intelligent terminal according to any one of claims 7-9,
and in the step of selecting a local area on the initial image, selecting the local area according to user operation received by a touch screen of the intelligent terminal.
Priority Applications (1)

Application Number: CN201811117681.4A
Priority Date / Filing Date: 2018-09-21
Title: Zooming method based on intelligent terminal and intelligent terminal
Status: Active

Publications (2)

CN109286750A (application publication) 2019-01-29
CN109286750B (granted publication) 2020-10-27

Family

ID: 65181973
Country Status: CN (1) CN109286750B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113132614B (en) 2019-12-31 2023-09-01 中兴通讯股份有限公司 Camera optical zooming electronic device, method, unit and memory
CN111970439A (en) * 2020-08-10 2020-11-20 Oppo(重庆)智能科技有限公司 Image processing method and device, terminal and readable storage medium
CN113660411B (en) * 2021-06-30 2023-06-23 深圳市沃特沃德信息有限公司 Remote control shooting method and device based on intelligent watch and computer equipment
CN116939363B (en) * 2022-03-29 2024-04-26 荣耀终端有限公司 Image processing method and electronic equipment
CN115002347B (en) * 2022-05-26 2023-10-27 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1574946A (en) * 2003-06-20 2005-02-02 佳能株式会社 Image display method, program for executing the method, and image display device
CN103780828A (en) * 2012-10-22 2014-05-07 联想(北京)有限公司 Image acquisition method and electronic device
CN104980669A (en) * 2014-04-11 2015-10-14 芯视达系统公司 Image sensor pixel structure with optimized uniformity
CN107404619A (en) * 2016-07-29 2017-11-28 广东欧珀移动通信有限公司 Image zoom processing method, device and terminal device
WO2018166829A1 (en) * 2017-03-13 2018-09-20 Lumileds Holding B.V. Imaging device with an improved autofocusing performance

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9147704B2 (en) * 2013-11-11 2015-09-29 Omnivision Technologies, Inc. Dual pixel-sized color image sensors and methods for manufacturing the same



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant