CN111182206B - Image processing method and device - Google Patents


Info

Publication number
CN111182206B
Authority
CN
China
Prior art keywords
image
target object
target
moment
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911412744.3A
Other languages
Chinese (zh)
Other versions
CN111182206A (en)
Inventor
袁旺程
李秀峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911412744.3A
Publication of CN111182206A
Application granted
Publication of CN111182206B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/672 Focus control based on electronic image sensor signals based on the phase difference signals
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

Abstract

The embodiment of the invention discloses an image processing method and device. The method is applied to an electronic device that comprises an image sensor, and comprises the following steps: when a shooting input of a user is received at a first moment, acquiring a first position of a target object at the first moment through at least two real sensing pixels; shooting a first image of the target object through the image sensor, and determining a second position area of the target object in the first image; and adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image. The embodiment of the invention can solve the problem of delay when shooting a target object in a moving state.

Description

Image processing method and device
Technical Field
The embodiment of the invention relates to the technical field of camera shooting, in particular to an image processing method and device.
Background
With the continuous development of camera technology, users have increasingly high shooting requirements for the cameras of electronic equipment, which poses a great challenge to the functions, performance and effects of the camera. Users pay particular attention to the photographing-delay experience of the camera. Because the camera needs a certain time to image when taking a picture, for a target object in a moving state there is always a deviation between the scene in the finally generated image and the scene at the moment the shutter is pressed, and user experience is poor.
Therefore, a what-you-see-is-what-you-get solution is needed.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, and aims to solve the problem that shooting of a target object in a moving state is delayed.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method applied to an electronic device, where the electronic device includes an image sensor, the image sensor includes a plurality of real sensing pixels arranged, and the method includes:
under the condition that shooting input of a user is received at a first moment, acquiring a first position of a target object at the first moment through at least two real sensing pixels;
shooting a first image of a target object through an image sensor, and determining a second position area of the target object in the first image;
and adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including an image sensor that includes a plurality of arranged real sensing pixels;
wherein the image processing apparatus further comprises:
an acquisition module, configured to acquire a first position of a target object at a first moment through at least two real sensing pixels when a shooting input of a user is received at the first moment;
a photographing module, configured to photograph a first image of the target object through the image sensor;
a determining module, configured to determine a second position area of the target object in the first image; and
an adjusting module, configured to adjust the target object in the first image to a target position area corresponding to the first position to obtain a target image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the image processing method according to the first aspect.
In the embodiment of the invention, when a shooting input of a user is received at a first moment, a first position of the target object at the first moment is obtained through at least two real sensing pixels in the image sensor; then a first image of the target object is shot through the image sensor, and a second position area of the target object in the first image is determined; finally, the target object in the first image is adjusted to the target position area corresponding to the first position to obtain the target image. This achieves the effect of "what you see is what you shoot" and improves user experience.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating an embodiment of adjusting the contour of a target object in a second image;
fig. 3 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In an embodiment of the present invention, an electronic device includes an image sensor. The electronic device may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the image processing method is applied to an electronic device, wherein the electronic device includes an image sensor including a plurality of real sensing pixels arranged;
the image processing method comprises the following steps:
step 101: under the condition that shooting input of a user is received at a first moment, acquiring a first position of a target object at the first moment through at least two real sensing pixels;
step 102: shooting a first image of a target object through an image sensor, and determining a second position area of the target object in the first image;
step 103: and adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image.
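Steps 101-103 can be sketched as follows. This is a minimal illustrative model, not the patented implementation: all names (`Box`, `first_position_at_shutter`, `adjust_to_target_area`) and the event-packet format are assumptions, and the "adjustment" is simplified to re-centring a rectangular region on the first position.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int  # top-left column of the region
    y: int  # top-left row of the region
    w: int
    h: int

def first_position_at_shutter(events):
    """Step 101 (sketch): take the latest packet reported by the real
    sensing pixels at the first moment as the target object's first
    position. Each event is assumed to be (x, y, brightness, time)."""
    x, y, _brightness, _time = events[-1]
    return (x, y)

def adjust_to_target_area(second_area: Box, first_pos) -> Box:
    """Step 103 (sketch): move the second position area so that it is
    centred on the first position, yielding the target position area."""
    cx, cy = first_pos
    return Box(cx - second_area.w // 2, cy - second_area.h // 2,
               second_area.w, second_area.h)

# Illustrative numbers: the object is imaged at (35, 55) in the first
# image, but the real sensing pixels saw it at (10, 20) at the shutter.
events = [(8, 18, 120, 0.01), (10, 20, 118, 0.02)]
pos = first_position_at_shutter(events)
target = adjust_to_target_area(Box(35, 55, 10, 10), pos)
```

The sketch deliberately omits step 102's image capture; it only shows how the first position from the real sensing pixels drives the final repositioning.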
In the embodiment of the invention, when a shooting input of a user is received at a first moment, a first position of the target object at the first moment is obtained through at least two real sensing pixels in the image sensor; then a first image of the target object is shot through the image sensor, and a second position area of the target object in the first image is determined; finally, the target object in the first image is adjusted to the target position area corresponding to the first position to obtain the target image. This achieves the effect of "what you see is what you shoot" and improves user experience.
In some embodiments of the present invention, each real sensing pixel may work independently, sensing changes of external ambient brightness in real time at the pixel clock frequency, converting the brightness change into a current change, and further into a digital-signal change. If the digital-signal change of a certain real sensing pixel exceeds a preset threshold, it reports to the system to request readout and outputs a data packet carrying coordinate information, brightness information and time information. In addition, when a reference position exists, position-change information may be acquired through the image sensor. Therefore, compared with conventional pixels, real sensing pixels offer better real-time performance, lower signal redundancy and higher precision.
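The reporting behaviour described above can be modelled as follows. This is a hedged sketch of a single real sensing pixel: the sampling format, the packet fields and the rule that the reference level resets after each readout are assumptions for illustration only.

```python
# Illustrative model of one real sensing pixel: a packet is emitted
# only when the digitised brightness change exceeds a preset threshold.

def sense(samples, x, y, threshold):
    """samples: list of (time, brightness) readings at the pixel clock.
    Returns the data packets the pixel would request to read out, each
    carrying coordinate, brightness and time information."""
    packets = []
    last = samples[0][1]  # reference brightness level
    for t, brightness in samples[1:]:
        if abs(brightness - last) > threshold:
            packets.append({"x": x, "y": y, "brightness": brightness, "t": t})
            last = brightness  # assumed: reference resets after readout
    return packets

# Only the 100 -> 130 jump exceeds the threshold of 10, so exactly one
# packet is reported; small fluctuations produce no readout traffic.
packets = sense([(0, 100), (1, 102), (2, 130), (3, 131)], x=5, y=7, threshold=10)
```

This also illustrates why signal redundancy is low: quiescent pixels generate no data at all.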
In one example, the at least two real sensing pixels are distributed across the image sensor at a certain density; for example, they are arranged in the image sensor in an array.
The size and density of the real sensing pixels may be flexibly adjusted according to the actual application scenario, which is not limited in this embodiment.
In addition, the image sensor includes not only the above real sensing pixels but also regular pixels; the regular pixels are read out one by one in sequence after integrating light information over a time period (the time period is related to the frame rate).
Wherein each of the real sensing pixels includes: the device comprises at least two photosensitive units, a signal processing module and a control module;
Each photosensitive unit is used for generating a photosensitive electric signal. The signal processing module is used for outputting, upon receiving the photosensitive electric signals, at least one of the following: the analog voltage of the photosensitive electric signal of each photosensitive unit, or a superposed analog voltage obtained by superposing the analog voltages of the photosensitive electric signals of the at least two photosensitive units. The control module is used for conducting the output ends of the at least two photosensitive units with the receiving end of the signal processing module when the variation of the output voltage of at least one photosensitive unit exceeds a preset threshold.
In an example, the preset threshold may be a suitable threshold set according to an actual situation, and the specific value of the preset threshold is not limited in the embodiment of the present invention.
When the variation of the output voltage of at least one photosensitive unit exceeds the preset threshold, a moving object is being photographed at that moment, so at least one of the following can be output to the analog-to-digital converter of the image sensor: the analog voltage of the photosensitive electric signal of each photosensitive unit, or the superposed analog voltage obtained by superposing the analog voltages of the photosensitive electric signals of the at least two photosensitive units.
Therefore, when the analog voltage of each photosensitive unit is output, distance measurement can be performed according to the phase difference between the images determined from the different analog voltages, realizing real-time motion focus tracking, so that the shot picture can be corrected and calibrated. When the superposed analog voltage of the at least two photosensitive units is output, the contour of the moving object can be determined from the superposed analog voltage, providing the capability of capturing the contour of a dynamic object in real time with high precision. Thus, when the terminal device shoots a moving object, the picture achieves a better shooting effect.
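As a rough numeric illustration of phase-difference ranging: if the two photosensitive units under one microlens are treated like a tiny stereo pair, the distance follows the standard relation Z = f · B / disparity. The patent does not give this formula or any of these values; the function name, the pixel-unit convention and the numbers below are all assumptions.

```python
# Hypothetical phase-difference ranging, assuming the two photosensitive
# units act as a stereo pair with baseline B and focal length f (both in
# consistent pixel units): distance Z = f * B / disparity.

def distance_from_phase(disparity_px, focal_px=1000.0, baseline_px=2.0):
    if disparity_px == 0:
        return float("inf")  # no measurable phase difference: in focus / far away
    return focal_px * baseline_px / disparity_px

z = distance_from_phase(4.0)  # a disparity of 4 pixels
```

The in-focus case (zero disparity) is mapped to infinity here purely to keep the sketch total; a real autofocus loop would instead treat it as "no adjustment needed".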
In some embodiments of the present invention, the shooting input of the user in step 101 may be an input to a physical shooting key, an input to a virtual shooting area, or a voice input.
The shooting input in step 101 includes, but is not limited to, a click, a long press or a voice input, and may also be another operation such as a sliding operation.
Optionally, in this example, in the case that the shooting input of the user is received at the first moment in step 101, acquiring the first position of the target object at the first moment through at least two real sensing pixels specifically includes:
when a shooting input of a user is received at a first moment, a first position of a target object at the first moment in a preview interface is obtained through at least two real sensing pixels.
In some embodiments of the present invention, the capturing a first image of the target object by the image sensor and determining a second position area of the target object in the first image in step 102 specifically includes:
acquiring a second position of the target object at a second moment through at least two real sensing pixels; and the second moment is the moment of imaging the target object in the shooting process of the first image.
In addition, optionally, the second position area of the target object in the first image may also be determined by other methods such as image recognition, which is not specifically limited in the embodiment of the present invention.
Specifically, in the process of capturing the first image, the first image is imaged line by line; that is, the imaging moment of each line in the first image is different. For example, the imaging moment of the target object in the first image is the second moment. For the target object in the first image, the second position of the target object at the second moment may be acquired through the at least two real sensing pixels. In addition, the second moment at which the target object moves to the second position can be determined based on the first position at the first moment, the moving speed of the target object, and the scanning imaging speed of the image sensor.
And determining a second position area of the target object in the first image according to the second position.
Specifically, the second position area is the area surrounded by the contour of the target object in the first image.
Because acquiring an image takes a relatively long time, the target object may undergo changes such as translation, expansion, contraction or rotation during the acquisition, so the contour of the target object in the acquired image may be inaccurate. The real sensing pixels can acquire the corresponding change information of the target object during image acquisition, and the electronic device can then adjust the contour of the target object according to this information, thereby moving the target object to the position corresponding to the moment when the user started to take the picture.
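A small worked example of determining the second moment from the quantities named above. The geometry is an assumption: the sensor is taken to scan rows top-down at `scan_rate` rows/s starting at the first moment, while the object moves downward at `speed` rows/s from row `y0`. The scan line reaches the object when scan_rate · dt = y0 + speed · dt, i.e. dt = y0 / (scan_rate − speed). All numbers are illustrative.

```python
# Second-moment estimation under the stated rolling-shutter assumptions.

def second_moment(y0, speed, scan_rate, t1=0.0):
    """Return (second moment, object's row at that moment), given the
    object's row y0 at the first moment t1, its speed in rows/s, and
    the sensor's scanning speed in rows/s."""
    assert scan_rate > speed, "the scan must be faster than the object"
    dt = y0 / (scan_rate - speed)
    return t1 + dt, y0 + speed * dt

# Object at row 90 at the shutter, moving at 10 rows/s, scanned at
# 1000 rows/s: it is imaged roughly 0.0909 s after the first moment,
# slightly below row 90 -- this offset is exactly the smear/delay the
# method corrects.
t2, y2 = second_moment(y0=90.0, speed=10.0, scan_rate=1000.0)
```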
In the embodiment of the invention, when a shooting input of a user is received at a first moment, a first position of the target object at the first moment is obtained through at least two real sensing pixels in the image sensor; then a first image of the target object is shot through the image sensor, and a second position of the target object in the first image is determined based on the at least two real sensing pixels; next, the contour of the target object at the second position in the first image is determined through the at least two real sensing pixels, i.e., the second position area surrounded by the contour of the target object; finally, the target object in the first image is adjusted to the target position area corresponding to the first position to obtain the target image. This achieves the effect of "what you see is what you shoot" and improves user experience.
In some embodiments of the present invention, the adjusting the target object in the first image to the target position area corresponding to the first position in step 103 to obtain the target image includes:
acquiring a preview image displayed in a shooting preview interface at a first moment;
specifically, a preview image displayed in a shooting preview interface at a first moment can be acquired through an image sensor; the shooting preview interface is an interface of a preview image displayed on a screen of the electronic equipment. For example, the image sensor continues to acquire images until a first time; when the user starts to take a picture at the first moment, the last acquired image stored in the memory is read as a preview image of the first moment.
And adjusting the target object in the first image to a target position area corresponding to the first position, and fusing the background image corresponding to the second position area in the preview image to the second position area in the first image to obtain a target image.
Specifically, the target object in the first image is matted out, and the matted target object is moved to the target position area corresponding to the first position (namely, the area surrounded by the contour of the target object at the first position); then the missing area in the first image (i.e., the area from which the target object was matted out) is filled with the background image corresponding to the second position area in the preview image, so as to obtain the target image.
In the embodiment of the invention, the area image of the target object is first extracted from the first image; then the first position of the target object at the first moment is obtained through the at least two real sensing pixels, and the extracted area image of the target object is adjusted to the target position area corresponding to the first position in the first image, so that the smear problem can be corrected. The background image corresponding to the second position area in the preview image is then fused into the second position area in the first image to obtain the target image, completing the synthesis of a delay-free image.
For example, in fig. 2, the area image of the "small circle" (i.e., the target object) is first extracted from photograph A (i.e., the first image) and moved to the target position area corresponding to the first position of the target object; then the background image corresponding to the second position area in photograph B (i.e., the preview image) is fused, by filling, into the second position area in photograph A, so as to obtain the target image, namely the shot image corresponding to the first moment.
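The matte-move-fill composition can be sketched in NumPy. For brevity this assumes single-channel images and a rectangular "contour"; the function name and area format are made up for the example, and the real method would use the precise contour captured by the real sensing pixels rather than a rectangle.

```python
import numpy as np

def compose(first_img, preview_img, second_area, target_area):
    """second_area: (row, col, h, w) of the object in the first image;
    target_area: (row, col) top-left of the target position area.
    Matte the object out, fill its old area from the preview image,
    and paste the object at the target position area."""
    r, c, h, w = second_area
    patch = first_img[r:r+h, c:c+w].copy()          # matted target object
    out = first_img.copy()
    out[r:r+h, c:c+w] = preview_img[r:r+h, c:c+w]   # fill from preview background
    tr, tc = target_area
    out[tr:tr+h, tc:tc+w] = patch                   # object at the first position
    return out

first = np.zeros((8, 8), dtype=np.uint8)
first[4:6, 4:6] = 255   # the "small circle", imaged late at (4, 4)
preview = np.full((8, 8), 7, dtype=np.uint8)  # background at the first moment
result = compose(first, preview, second_area=(4, 4, 2, 2), target_area=(1, 1))
```

Pasting after filling keeps the result correct even when the two areas overlap, which matters for slowly moving objects.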
In some embodiments of the present invention, the adjusting the target object in the first image to the target position area corresponding to the first position in step 103 to obtain the target image includes:
and moving the target object in the first image to a target position area corresponding to the first position to obtain a target image.
Specifically, the contour of the target object in the first image is determined, and then a plurality of pixel points included in the contour of the target object are moved to a target position area corresponding to the first position.
In the embodiment of the invention, the contour of the target object in the first image is moved to the target position area corresponding to the first position, the operation is simple, the smear problem can be corrected, and the problem of time delay synthesis does not exist.
In some embodiments of the present invention, the adjusting the target object in the first image to the target position area corresponding to the first position in step 103 to obtain the target image includes:
acquiring a preview image displayed in a shooting preview interface at a first moment;
judging whether the size information and the direction information of the contour of the target object in the first image are consistent with those of the contour of the target object in the preview image;
when the size information or the direction information of the contour of the target object in the first image is not consistent with the size information or the direction information of the contour of the target object in the preview image, adjusting the contour of the target object in the first image so that the size information and the direction information of the contour of the target object in the adjusted first image are consistent with those of the contour of the target object in the preview image;
and moving the contour of the target object in the adjusted first image to a target position area corresponding to the first position to obtain a target image.
Specifically, the preview image displayed in the shooting preview interface at the first moment is acquired, and then it is judged whether the size information and the direction information of the contour of the target object in the first image are consistent with those of the contour of the target object in the preview image. If at least one of the size information and the direction information has changed, the contour of the target object in the first image is first adjusted so that it is consistent with the contour of the target object in the preview image; then the contour of the target object in the adjusted first image is moved to the target position area corresponding to the first position to obtain the target image.
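One plausible way to perform the size/direction consistency check is via image moments on the two contour masks. The patent does not prescribe how the comparison is made; the moment-based angle estimate, the tolerances, and every name below are assumptions.

```python
import numpy as np

def size_and_direction(mask):
    """Size = pixel count of the mask; direction = principal-axis angle
    from the second-order central moments of the mask."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    xc, yc = xs.mean(), ys.mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return area, angle

def consistent(mask_a, mask_b, area_tol=0.05, angle_tol=0.05):
    a1, g1 = size_and_direction(mask_a)
    a2, g2 = size_and_direction(mask_b)
    return (abs(a1 - a2) <= area_tol * max(a1, a2)
            and abs(g1 - g2) <= angle_tol)

m = np.zeros((10, 10), dtype=bool); m[2:5, 2:8] = True  # wide blob
n = np.zeros((10, 10), dtype=bool); n[2:8, 2:5] = True  # same blob, rotated 90 degrees
same = consistent(m, m)        # identical contours: consistent
rotated = consistent(m, n)     # same size, different direction: not consistent
```

When `consistent` returns False, the first image's contour would be scaled and/or rotated to match before being moved to the target position area.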
In the embodiment of the present invention, if the contour of the target object in the first image has changed compared with the contour of the target object in the preview image, for example the size information and/or the direction information has changed, the target object in the first image may first be scaled and/or rotated so that it is not deformed; the adjusted target object in the first image is then moved to the target position area corresponding to the first position, which corrects the smear problem and avoids time delay in image synthesis.
Fig. 3 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 3, the image processing apparatus 300 includes an image sensor including at least two real sensing pixels arranged;
wherein, the image processing apparatus 300 further comprises:
an obtaining module 301, configured to obtain, through at least two real sensing pixels, a first position of a target object at a first moment when a shooting input of a user is received at the first moment;
a photographing module 302 for photographing a first image of a target object through an image sensor;
a determining module 303, configured to determine a second position region of the target object in the first image;
the adjusting module 304 is configured to adjust the target object in the first image to a target position area corresponding to the first position, so as to obtain a target image.
In the embodiment of the invention, when a shooting input of a user is received at a first moment, a first position of the target object at the first moment is obtained through at least two real sensing pixels in the image sensor; then a first image of the target object is shot through the image sensor, and a second position area of the target object in the first image is determined; finally, the target object in the first image is adjusted to the target position area corresponding to the first position to obtain the target image. This achieves the effect of "what you see is what you shoot" and improves user experience.
Optionally, the determining module 303 is further configured to:
acquiring a second position of the target object at a second moment through at least two real sensing pixels; the second moment is the moment when the target object is imaged in the shooting process of the first image;
and determining a second position area of the target object in the first image according to the second position.
Optionally, the adjusting module 304 is specifically configured to:
acquiring a preview image displayed in a shooting preview interface at a first moment;
and adjusting the target object in the first image to a target position area corresponding to the first position, and fusing the background image corresponding to the second position area in the preview image to the second position area in the first image to obtain a target image.
Optionally, the adjusting module 304 is specifically configured to:
and moving the target object in the first image to a target position area corresponding to the first position to obtain a target image.
Optionally, the adjusting module 304 is specifically configured to:
acquiring a preview image displayed in a shooting preview interface at a first moment;
judging whether the size information and the direction information of the contour of the target object in the first image are consistent with those of the contour of the target object in the preview image;
when the size information or the direction information of the contour of the target object in the first image is not consistent with the size information or the direction information of the contour of the target object in the preview image, adjusting the contour of the target object in the first image so that the size information and the direction information of the contour of the target object in the adjusted first image are consistent with those of the contour of the target object in the preview image;
and moving the contour of the target object in the adjusted first image to a target position area corresponding to the first position to obtain a target image.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of fig. 1, and is not described herein again to avoid repetition.
In the embodiment of the invention, when a shooting input of a user is received at a first moment, a first position of the target object at the first moment is obtained through at least two real sensing pixels in the image sensor; then a first image of the target object is shot through the image sensor, and a second position area of the target object in the first image is determined; finally, the target object in the first image is adjusted to the target position area corresponding to the first position to obtain the target image. This achieves the effect of "what you see is what you shoot" and improves user experience.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The input unit 404 is configured to, when a shooting input of a user is received at a first time, acquire a first position of a target object at the first time through at least two real sensing pixels;
an input unit 404, further configured to capture a first image of the target object through the image sensor, and determine a second position area of the target object in the first image;
and the processor 410 is configured to adjust the target object in the first image to a target position area corresponding to the first position, so as to obtain a target image.
In the embodiment of the invention, when a shooting input of a user is received at a first moment, a first position of the target object at the first moment is obtained through at least two real sensing pixels in the image sensor; then a first image of the target object is shot through the image sensor, and a second position area of the target object in the first image is determined; finally, the target object in the first image is adjusted to the target position area corresponding to the first position to obtain the target image. This achieves the effect of "what you see is what you shoot" and improves user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and it sends uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402, or stored in the memory 409, into an audio signal and output it as sound. Moreover, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic device 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data. In the phone call mode, the processed audio data can be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 401.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 410, and receives and executes commands from the processor 410. In addition, the touch panel 4071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 4, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs and various data. The memory 409 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.). Further, the memory 409 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides an electronic device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image processing method applied to an electronic device, wherein the electronic device comprises an image sensor, the image sensor comprises at least two real sensing pixels which are arranged, and the method comprises the following steps:
under the condition that shooting input of a user is received at a first moment, acquiring a first position of a target object at the first moment through the at least two real sensing pixels;
shooting a first image of the target object through the image sensor, and determining a second position area of the target object in the first image;
adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image;
the adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image specifically includes:
acquiring a preview image displayed in the shooting preview interface at the first moment;
and adjusting the target object in the first image to a target position area corresponding to the first position, and fusing a background image corresponding to the second position area in the preview image to the second position area in the first image to obtain a target image.
2. The method according to claim 1, wherein the determining the second location area of the target object in the first image specifically comprises:
acquiring a second position of the target object at a second moment through the at least two real sensing pixels, wherein the second moment is the moment at which the target object is imaged during the shooting of the first image;
and determining a second position area of the target object in the first image according to the second position.
3. The method according to claim 1, wherein the adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image specifically comprises:
and moving the target object in the first image to a target position area corresponding to the first position to obtain a target image.
4. The method according to claim 1, wherein the adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image specifically comprises:
acquiring a preview image displayed in the shooting preview interface at the first moment;
judging whether the size information and the direction information of the contour of the target object in the first image are consistent with those of the contour of the target object in the preview image;
when the size information or the direction information of the contour of the target object in the first image is not consistent with that of the contour of the target object in the preview image, adjusting the contour of the target object in the first image so that the size information and the direction information of the contour of the target object in the adjusted first image are consistent with the size information and the direction information of the contour of the target object in the preview image;
and moving the contour of the target object in the adjusted first image to a target position area corresponding to the first position to obtain a target image.
5. An image processing apparatus, characterized in that the apparatus comprises an image sensor comprising at least two real sensing pixels which are arranged, the apparatus further comprising:
the acquisition module is used for acquiring a first position of a target object at a first moment through the at least two real sensing pixels under the condition that shooting input of a user is received at the first moment;
a shooting module, configured to shoot a first image of the target object through the image sensor;
a determining module, configured to determine a second position region of the target object in the first image;
the adjusting module is used for adjusting the target object in the first image to a target position area corresponding to the first position to obtain a target image;
the adjusting module is specifically configured to:
acquiring a preview image displayed in the shooting preview interface at the first moment;
and adjusting the target object in the first image to a target position area corresponding to the first position, and fusing a background image corresponding to the second position area in the preview image to the second position area in the first image to obtain a target image.
6. The apparatus of claim 5, wherein the determining module is further configured to:
acquiring a second position of the target object at a second moment through the at least two real sensing pixels, wherein the second moment is the moment at which the target object is imaged during the shooting of the first image;
and determining a second position area of the target object in the first image according to the second position.
7. The apparatus of claim 5, wherein the adjustment module is specifically configured to:
and moving the target object in the first image to a target position area corresponding to the first position to obtain a target image.
8. The apparatus of claim 5, wherein the adjustment module is specifically configured to:
acquiring a preview image displayed in the shooting preview interface at the first moment;
judging whether the size information and the direction information of the contour of the target object in the first image are consistent with those of the contour of the target object in the preview image;
when the size information or the direction information of the contour of the target object in the first image is not consistent with that of the contour of the target object in the preview image, adjusting the contour of the target object in the first image so that the size information and the direction information of the contour of the target object in the adjusted first image are consistent with the size information and the direction information of the contour of the target object in the preview image;
and moving the contour of the target object in the adjusted first image to a target position area corresponding to the first position to obtain a target image.
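Claims 4 and 8 above describe checking whether the size information and direction information of the target object's contour in the first image are consistent with those in the preview image before moving the contour. A minimal sketch of such a consistency check follows; the contour representation (an N x 2 point array), the tolerances, and all function names are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def contour_size(contour):
    """Approximate size as the diagonal of the axis-aligned bounding box."""
    span = contour.max(axis=0) - contour.min(axis=0)
    return float(np.hypot(span[0], span[1]))

def contour_direction(contour):
    """Approximate direction as the angle (radians) of the contour's
    principal axis, obtained from the covariance of its points."""
    centered = contour - contour.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # eigenvector of largest eigenvalue
    return float(np.arctan2(major[1], major[0]))

def needs_adjustment(contour_first, contour_preview,
                     size_tol=0.05, angle_tol=0.05):
    """True when the size or direction of the contour in the first image
    differs from the preview contour beyond the tolerances, i.e. the
    contour must be scaled/rotated before being moved to the target area."""
    size_ref = contour_size(contour_preview)
    size_ok = abs(contour_size(contour_first) - size_ref) <= size_tol * size_ref
    angle_ok = abs(contour_direction(contour_first) -
                   contour_direction(contour_preview)) <= angle_tol
    return not (size_ok and angle_ok)
```

For a non-degenerate contour (one whose principal axis is well defined, such as an elongated shape), identical contours pass the check and a uniformly scaled copy fails on size while matching in direction.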
CN201911412744.3A 2019-12-31 2019-12-31 Image processing method and device Active CN111182206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911412744.3A CN111182206B (en) 2019-12-31 2019-12-31 Image processing method and device

Publications (2)

Publication Number Publication Date
CN111182206A CN111182206A (en) 2020-05-19
CN111182206B true CN111182206B (en) 2021-06-25

Family

ID=70650700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911412744.3A Active CN111182206B (en) 2019-12-31 2019-12-31 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111182206B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784040B (en) * 2021-08-05 2023-11-14 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1501695A (en) * 2002-11-18 2004-06-02 矽峰光电科技股份有限公司 Method for amending internal delay in digital camera imaging
CN105025216A (en) * 2014-04-28 2015-11-04 维沃移动通信有限公司 Moving object photographing method and system thereof
CN105723698A (en) * 2014-09-19 2016-06-29 华为技术有限公司 Method and device for determining photographing delay time and photographing apparatus
CN109151348A (en) * 2018-09-28 2019-01-04 维沃移动通信有限公司 A kind of imaging sensor, image processing method and electronic equipment
CN109889712A (en) * 2019-03-11 2019-06-14 维沃移动通信(杭州)有限公司 A kind of control method of pixel circuit, imaging sensor, terminal device and signal
CN110198412A (en) * 2019-05-31 2019-09-03 维沃移动通信有限公司 A kind of video recording method and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3109695B1 (en) * 2015-06-24 2019-01-30 Samsung Electronics Co., Ltd. Method and electronic device for automatically focusing on moving object
EP3358820B1 (en) * 2015-09-30 2021-06-09 Nikon Corporation Imaging device, image processing device and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant