CN114466129A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment Download PDF

Info

Publication number
CN114466129A
CN114466129A (application CN202011241243.6A)
Authority
CN
China
Prior art keywords
focusing
change information
initial
image data
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011241243.6A
Other languages
Chinese (zh)
Inventor
朱文波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zeku Technology Shanghai Corp Ltd
Original Assignee
Zeku Technology Shanghai Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zeku Technology Shanghai Corp Ltd filed Critical Zeku Technology Shanghai Corp Ltd
Priority to CN202011241243.6A priority Critical patent/CN114466129A/en
Publication of CN114466129A publication Critical patent/CN114466129A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, a storage medium, and an electronic device. The method includes: shooting a current scene according to initial focusing parameters to obtain initial image data; acquiring a focusing area in the initial image data and determining a target object in the focusing area; acquiring, in real time, position change information of the target object in the current scene and motion parameter change information of the electronic device; and correcting the initial focusing parameters according to the position change information and the motion parameter change information, then shooting the current scene again according to the corrected focusing parameters to obtain target image data. In the embodiments of the application, the focusing parameters can be corrected for each frame of image, which improves focusing accuracy and focusing efficiency.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
In current mobile phone preview or video recording scenarios, the camera's focusing function can be realized in two modes: manual focusing and automatic focusing. Automatic focusing is generally used by default during camera preview unless the user intervenes manually. To obtain a good preview or focusing effect, the camera requires accurate focusing parameters and real-time focusing processing. Accurate focusing parameters can be obtained by improving the algorithm, but real-time focusing processing is limited by system performance and is difficult to achieve; at present, focusing parameters are essentially recalculated only once every several frames.
Current focusing schemes still need to calculate the focusing parameters based on the whole image (for example, phase-detection focusing), which consumes considerable computation time, so a focusing operation cannot be performed for every frame (generally, a full focusing-parameter calculation is triggered only when the position changes beyond a certain extent). As a result, during preview or video shooting the focusing parameters can only be calculated every certain number of frames, the latest focusing information cannot be reflected accurately, and focusing efficiency is low.
Disclosure of Invention
The application provides an image processing method, an image processing device, a storage medium and an electronic device, which can correct focusing parameters according to each frame of image, thereby greatly improving focusing efficiency.
In a first aspect, an embodiment of the present application provides an image processing method, including:
shooting the current scene according to the initial focusing parameters to obtain initial image data;
acquiring a focusing area in the initial image data, and determining a target object in the focusing area;
acquiring position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time;
and correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the shooting module is used for shooting the current scene according to the initial focusing parameters so as to acquire initial image data;
the determining module is used for acquiring a focusing area in the initial image data and determining a target object in the focusing area;
the acquisition module is used for acquiring the position change information of the target object in the current scene and the motion parameter change information of the electronic equipment in real time;
and the correction module is used for correcting the initial focusing parameters according to the position change information and the motion parameter change information and shooting the current scene again according to the corrected focusing parameters so as to acquire target image data.
In a third aspect, an embodiment of the present application provides a storage medium having a computer program stored thereon, which, when run on a computer, causes the computer to perform the above-mentioned image processing method.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a plurality of instructions, and the processor loads the instructions in the memory to perform the following steps:
shooting the current scene according to the initial focusing parameters to obtain initial image data;
acquiring a focusing area in the initial image data, and determining a target object in the focusing area;
acquiring position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time;
and correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
The image processing method provided by the embodiment of the application can be used for shooting the current scene according to the initial focusing parameters to obtain initial image data, obtaining a focusing area in the initial image data, determining a target object in the focusing area, obtaining position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time, correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to obtain the target image data. According to the embodiment of the application, the focusing parameters can be corrected for each frame of image, so that the focusing accuracy and the focusing efficiency are improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 3 is a scene schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present application are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. These steps and operations are therefore referred to, at times, as being performed by a computer, where the computer's processing unit operates on electronic signals representing data in a structured form. These operations transform the data or maintain it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data is maintained in a data structure, a physical location in memory with particular properties defined by the data format. However, although the principles of the application are described in the foregoing text, this is not meant to be limiting; those of ordinary skill in the art will appreciate that various of the steps and operations described below may also be implemented in hardware.
The terms "first", "second", "third", and the like in this application are used to distinguish between different objects rather than to describe a particular order. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules; rather, some embodiments include additional steps or modules not listed, or inherent to such a process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present disclosure. The image processing method provided by the embodiment of the application is applied to the electronic equipment, and the specific flow can be as follows:
step 101, shooting a current scene according to the initial focusing parameters to obtain initial image data.
In an embodiment, the initial image data may be an image of the current scene acquired by an imaging device of the electronic device during shooting. The imaging device may be a front camera, a rear camera, or the like. When the imaging device of the electronic device is started, it enters a photographing preview mode, the scene being photographed is displayed in a display window of the electronic device, and the picture displayed in the display window at this time is defined as the preview image. In hardware, the imaging device generally includes five parts: a housing with a motor, a lens, an infrared filter, an image sensor (e.g., CCD or CMOS), and a flexible printed circuit board (FPCB). In the shooting preview mode, while the preview image is displayed, the motor drives the lens to move, and the photographed object is imaged on the image sensor through the lens. The image sensor converts the optical signal into an electrical signal through photoelectric conversion and transmits it to the image processing circuit for subsequent processing. The image processing circuit may be implemented using hardware and/or software components, and may include various processing units that form an ISP (Image Signal Processing) pipeline.
In an embodiment, the initial focusing parameter may be a focusing parameter determined by the auto-focusing function when the camera function is turned on by the electronic device. Specifically, the auto focus may include Contrast Detection Auto Focus (CDAF), Phase Detection Auto Focus (PDAF), Laser Detection Auto Focus (LDAF), or the like. The principle of contrast focusing is to find the lens position with the maximum contrast, i.e., the position of accurate focus, according to the change in contrast of the picture at the focus point. The principle of phase focusing is to reserve some shielded pixel points on the photosensitive element, used specially for phase detection, and to determine the focusing offset value from the distance between pixels and its change, thereby achieving accurate focusing. The principle of laser focusing is to calculate the distance from the target to the device by recording the time difference between the moment an infrared laser pulse is emitted from the device and the moment it is received back after being reflected from the target's surface.
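The contrast-detection principle described above can be illustrated with a minimal sketch: sweep candidate lens positions, score each captured frame's contrast, and keep the position that maximizes it. This is an illustrative simplification, not the patent's disclosed algorithm; the `capture_at` callback and the contrast proxy are assumptions for the example.

```python
def contrast_score(frame):
    """Simple contrast proxy: sum of absolute differences between
    horizontally adjacent pixels in a 2-D intensity grid."""
    return sum(abs(row[i + 1] - row[i])
               for row in frame
               for i in range(len(row) - 1))

def contrast_autofocus(capture_at, lens_positions):
    """Hill-climb over lens positions.

    capture_at(pos) -> 2-D list of pixel intensities captured at that
    lens position (hypothetical camera callback). Returns the lens
    position whose frame has the highest contrast score."""
    best_pos, best_score = None, float("-inf")
    for pos in lens_positions:
        score = contrast_score(capture_at(pos))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

In this toy usage, a simulated camera produces its sharpest (highest-contrast) frame when the lens position equals 5, and the sweep recovers that position.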
In one embodiment, the initial image data may be displayed as a preview image on a screen of the electronic device after it is acquired.
Step 102, acquiring a focusing area in the initial image data, and determining a target object in the focusing area.
In an embodiment, the focusing area in the initial image data may be determined by the initial focusing parameter. For example, when the electronic device shoots a current scene, the focusing parameters are determined by means of automatic focusing or manual focusing, and the focusing parameters may include parameters such as focusing duration, focusing distance, and focusing position. Specifically, the focusing area may be calculated according to the focal position and the preset range in the initial image data.
It should be noted that, if the electronic device obtains the initial image data by shooting in a manual focusing manner, that is, when the user shoots the current scene with the electronic device and manually taps a certain position in the viewfinder on the screen, the device focuses on the scene area corresponding to that position and completes shooting after focusing. In this case, the position tapped by the user is the focus position, so the focusing area does not need to be determined from the focusing parameters; it can be determined directly from the tapped position and the preset range.
Further, after the focusing area is determined, the target object in the area, that is, the object the user mainly focuses on during shooting, may be identified within the focusing area. The target object may be a pre-configured recognizable object, such as a person, an animal, or scenery. The scenery may be flowers, plants, mountains, trees, and the like; the animal may be a cat, a dog, a cow, a sheep, a tiger, and the like. The target object may also be an irregular or a regular graphic contour.
In an embodiment, the number of objects in the focusing area may be determined first. If only one object exists in the focusing area, that object may be directly determined as the target object. If multiple objects exist in the focusing area, a further selection may be made according to the focus position, for example, the object coinciding with the focus position is selected as the target object; alternatively, the user may be asked to select one of them.
It should be noted that, if the electronic device obtains the initial image data by shooting in a manual focusing manner, the user often directly clicks a target object to be focused in the view frame of the screen when clicking a focus point. Therefore, when the target object in the focusing area is determined, the target object can be directly determined according to the position manually clicked when the user manually focuses. For example, when a user shoots a current scene through manual focusing, the user directly clicks one puppy in the view-finding frame to focus, and after shooting is completed, the puppy in the image can be determined as a target object in a focusing area.
And 103, acquiring the position change information of the target object in the current scene and the motion parameter change information of the electronic equipment in real time.
If the target object is in a motion state, such as running, walking or jumping, the position of the target object changes in real time. Therefore, in an embodiment, the electronic device may continuously acquire image frames, and after acquiring a new image frame, may calculate the position change information of the target object in the current scene according to the image frame and the position information of the target object in the initial image data.
First position information of the target object in the initial image data may be acquired, second position information of the target object in a new image frame may be acquired, and then position change information may be calculated. The first position information and the second position information may be position information of the target object in the image, or may be position information in the real scene obtained from the position in the image. Further, the first location information and the second location information may be characterized in a coordinate manner.
Specifically, since the image is acquired by an image acquisition device such as a camera or a mobile phone, the coordinates of each point in the image are expressed in the camera coordinate system. Because the image acquisition device can be placed anywhere in the environment, a reference coordinate system is chosen in the environment to describe the device's position; generally, a world coordinate system is established with the image acquisition device's position as the origin and is used to describe the actual position of any object in the environment. The relationship between the camera coordinate system and the world coordinate system can be described by a rotation matrix and a translation vector, which are not described in detail herein.
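The rotation-matrix-plus-translation-vector relationship mentioned above is the standard rigid transform p_cam = R · p_world + t. A minimal sketch (pure Python, no claim about the patent's actual implementation):

```python
def world_to_camera(p_world, R, t):
    """Map a 3-D world point into camera coordinates.

    p_cam = R * p_world + t, where R is a 3x3 rotation matrix
    (list of rows) and t is a length-3 translation vector."""
    return [sum(R[i][j] * p_world[j] for j in range(3)) + t[i]
            for i in range(3)]
```

For example, with an identity rotation and a translation of one unit along x, the world origin maps to (1, 0, 0) in camera coordinates.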
In general, a new image frame may be acquired later than the time at which the initial image data was captured, relative to the initial image data. For example, the initial image data and the new image frame may be two images corresponding to different times in the video. Specifically, on the basis of the example given in the foregoing step, when both the first position information and the second position information are characterized by coordinate information, the position change information may be a vector or a straight line calculated according to the coordinate information.
It should be noted that position change information of the target object in the current scene can be calculated after a new image frame is acquired only when the target object moves, that is, when it is in a moving state. If the target object remains static during this period, its position in the current scene does not change, that is, there is no position change information.
Furthermore, if the target object moves, the focus parameter needs to be corrected in real time so that the electronic device always focuses on the target object. In the embodiment of the application, in addition to the influence caused by the movement of the target object, the application also considers the influence caused by the movement state of the lens. Therefore, after the position change information of the target object in the current scene is acquired, the motion parameter change information of the electronic device can also be acquired. Specifically, the motion parameter change information of the electronic device may be calculated according to the motion parameter of the electronic device when the initial image data is captured and the motion parameter of the electronic device when the new image frame is acquired.
And 104, correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
In an embodiment, when the focusing parameter is corrected, the position change of the target object mainly reflects the active change of the target object (for example, a moving car), the motion parameter change information reflects a change of the camera relative to the original position, and the superposition processing of the two change parameters can reflect the relative position change of the lens relative to the target object, so that the focusing parameter can be accurately adjusted, the electronic device can perform subsequent focusing processing according to the adjusted focusing parameter, and then the current scene is photographed again according to the corrected focusing parameter, so as to obtain the target image data.
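The superposition described above — combining the target object's own displacement with the camera's motion to obtain their relative displacement — can be sketched as follows. The patent does not disclose a concrete correction formula, so the depth-axis adjustment of the focus distance here is a hypothetical illustration, not the disclosed method.

```python
def relative_displacement(object_delta, camera_delta):
    """Displacement of the target relative to the lens: the object's
    active motion minus the camera's own motion (component-wise)."""
    return [o - c for o, c in zip(object_delta, camera_delta)]

def correct_focus_distance(initial_distance, object_delta, camera_delta,
                           depth_axis=2):
    """Hypothetical correction: shift the focus distance by the change
    in relative position along the depth axis (z by default)."""
    rel = relative_displacement(object_delta, camera_delta)
    return initial_distance + rel[depth_axis]
```

For instance, if the object moved 0.5 m away along the depth axis while the camera moved 0.2 m in the same direction, the relative depth change is 0.3 m and the corrected focus distance grows accordingly.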
It should be noted that after the target image data is acquired, the target image data may be used as new initial image data, so that the focus parameter is continuously corrected and the image is continuously captured in the subsequent process.
Further, after the target image data is obtained, noise reduction may be performed on it, and the electronic device may then apply tone mapping to the noise-reduced image to obtain the final image. It can be understood that tone mapping the noise-reduced image improves its contrast, so the target image has a higher dynamic range and a better imaging effect. The electronic device may further present the tone-mapped image on its screen as a preview image of the current scene.
It can be understood that when the actual resolution of the preview image is greater than the resolution of the screen, no better display effect is obtained than when the two resolutions are equal. Therefore, before the preview image is displayed on the screen of the electronic device, the current resolution of the screen is obtained first, and the preview image is then down-sampled according to that resolution so that the resolution of the preview image matches the current screen resolution. This improves the efficiency of combining multiple frames without degrading the display effect of the preview image.
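The down-sampling step above can be illustrated with a simple box-average reduction by an integer factor (a sketch; the actual ISP down-sampler and filter are not specified by the patent):

```python
def downsample(frame, factor):
    """Box-average downsample of a 2-D intensity grid by an integer
    factor: each output pixel is the mean of a factor x factor block."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor)
                     for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

For example, a 2x2 frame reduced by factor 2 collapses to one pixel holding the block average, matching the idea of shrinking the preview to the screen resolution before display.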
As can be seen from the above, the image processing method provided in the embodiment of the present application may capture a current scene according to an initial focusing parameter to obtain initial image data, obtain a focusing area in the initial image data, determine a target object in the focusing area, obtain position change information of the target object in the current scene and motion parameter change information of an electronic device in real time, correct the initial focusing parameter according to the position change information and the motion parameter change information, and capture the current scene again according to the corrected focusing parameter to obtain the target image data. According to the embodiment of the application, the focusing parameters can be corrected for each frame of image, so that the focusing accuracy and the focusing efficiency are improved.
The image processing method of the present application will be further described below on the basis of the methods described in the above embodiments. Referring to fig. 2, fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the image processing method includes:
step 201, shooting a current scene according to the initial focusing parameters to obtain initial image data.
In an embodiment, after determining the viewing area, the electronic device may perform focusing processing on the viewing area, record an initial focusing parameter, and perform shooting according to the initial focusing parameter, thereby obtaining initial image data. The electronic device may be a mobile electronic device including a camera application, such as a mobile phone, a tablet computer, and the like. The initial focus parameters may include focus parameters used when the camera applies focus processing to the viewing area.
Further, the initial focusing parameter may be a focusing parameter determined by an auto-focusing function when the camera function is just turned on by the electronic device, or may be a focusing parameter determined by a user manually focusing.
Step 202, performing region segmentation on the initial image data according to the initial focusing parameters to determine a focusing region.
In an embodiment, the focusing area in the initial image data may be determined by the initial focusing parameter. For example, when the electronic device shoots a current scene, the focusing parameters are determined by means of automatic focusing or manual focusing, and the focusing parameters may include parameters such as focusing duration, focusing distance, and focusing position. Specifically, the focusing area may be calculated according to the focal position and the preset range in the initial image data.
For example, please refer to fig. 3, wherein fig. 3 is a scene diagram of an image processing method according to an embodiment of the present disclosure. In the initial image data, a focus position may be determined according to an initial focusing parameter, and then a rectangular region with a preset side length is divided by taking the focus position as a center, and the rectangular region may be determined as a focusing region, such as a rectangular region where a person is located in the figure.
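The rectangular focusing region described above — a box of preset size centered on the focus position — can be sketched as follows (clipping to the image bounds is an assumption added for robustness; the patent does not specify boundary handling):

```python
def focus_region(focus_x, focus_y, half_w, half_h, img_w, img_h):
    """Axis-aligned rectangle centered on the focus point with preset
    half-extents, clipped so it stays inside the image.
    Returns (left, top, right, bottom)."""
    left = max(0, focus_x - half_w)
    top = max(0, focus_y - half_h)
    right = min(img_w, focus_x + half_w)
    bottom = min(img_h, focus_y + half_h)
    return (left, top, right, bottom)
```

A focus point well inside the frame yields the full preset rectangle; a point near a corner yields a rectangle truncated at the image edge.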
In another embodiment, if the electronic device obtains the initial image data by shooting in a manual focusing manner, that is, when the user uses the electronic device to shoot the current scene, the user manually clicks a certain position in a view frame of a screen of the electronic device, and then the device focuses on a scene area corresponding to the position, and completes shooting after focusing. In this case, the position manually clicked by the user is the focus position, so that it is not necessary to determine the focusing area according to other focusing parameters, and the area division is directly performed according to the focus position to obtain the focusing area.
It should be noted that, regardless of whether automatic or manual focusing is used, after the focal position is determined, contour recognition may be performed at the focal position to obtain an edge contour image of the object on which the focal position lies. For example, in fig. 3, when the focus position is on the person, contour recognition identifies the outline of the entire person, so the whole contour of the person can be determined as the focusing area. The contour recognition may use one or more of a morphological erosion algorithm, the Sobel edge detection algorithm, the Prewitt edge detection algorithm, or the like to obtain the edge contour image of the object at the focal position.
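As one of the edge-detection options named above, the Sobel operator can be sketched in a few lines: convolve the image with the two 3x3 Sobel kernels and combine the horizontal and vertical responses into a gradient magnitude (the common |Gx| + |Gy| approximation is used here; this is an illustrative sketch, not the patent's implementation).

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| at each interior
    pixel of a 2-D intensity grid; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                     for dy in range(3) for dx in range(3))
            gy = sum(SOBEL_Y[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                     for dy in range(3) for dx in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out
```

On a vertical step edge (dark columns next to bright columns), the response is large at the edge pixels, which is the signal that contour recognition thresholds to trace the object's outline.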
Step 203, determining a target object in the focusing area, and determining a characteristic point according to the target object.
After the focusing area is determined, the target object in the area, that is, the object mainly focused by the user during shooting, can be further identified. In an embodiment, if the focusing area is a rectangular area determined according to the focal position and a preset side length, image recognition may be performed in the area to recognize an image subject in the area and determine the subject as the target object. For example, when one object is recognized in the focusing area, the object may be directly determined as the target object, and when a plurality of objects are recognized in the focusing area, the target object may be selected from the plurality of objects by a center point of the image or a user click.
In another embodiment, if the focusing area is obtained by directly performing contour recognition according to the focal position, that is, the focusing area is an edge contour of an object where the focal position is located, then only one object is included in the focusing area, that is, the object is the target object.
After the target object is determined, a feature point further needs to be selected in the embodiment of the present application. The feature point may be the image center of the target object, or may be determined by the user clicking on the image corresponding to the target object; this is not further limited in the present application.
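A minimal sketch of taking the image center (centroid) of the target object's binary mask as the feature point might look as follows (NumPy; the mask shape and region are illustrative assumptions — a user-clicked point could be substituted directly):

```python
import numpy as np

def feature_point(mask):
    """Centroid of the target object's binary mask as (x, y)."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# Hypothetical target-object mask: rows 2..5, columns 3..8.
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:9] = True
fx, fy = feature_point(mask)
```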
Step 204, the initial image data is used as an initial image frame.
In one embodiment, the initial image data may be taken as an initial image frame after the initial image data is acquired. Further, the image can be displayed as a preview image on the screen of the electronic device.
And step 205, acquiring a subsequent image frame, and calculating position change information according to the position information of the feature point in the subsequent image frame and the position information of the feature point in the initial image frame.
In an embodiment, after the initial image frame is obtained, subsequent image frames may be acquired continuously. When a subsequent image frame is obtained, it is compared with the initial image frame to determine the motion state of the target object, which may specifically be determined from the position change of the feature point.
Specifically, first position information of the feature point in the initial image frame and second position information of the feature point in the subsequent image frame may be acquired, and the position change information may then be calculated from them. The first position information and the second position information may be positions of the feature point in the image, or positions in the real scene derived from the positions in the image. Further, the first position information and the second position information may be expressed as coordinates.
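The position change computed from the first and second position information reduces to a displacement vector in the shared coordinate system; a minimal sketch (the coordinates are illustrative):

```python
def position_change(first_xy, second_xy):
    """Displacement (dx, dy) of the feature point between the initial
    frame and a subsequent frame, in the same coordinate system."""
    return (second_xy[0] - first_xy[0], second_xy[1] - first_xy[1])

# Feature point at (100, 50) in the initial frame, (112, 47) later.
dx, dy = position_change((100.0, 50.0), (112.0, 47.0))
```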
Generally, the position of the image acquisition device is used as an origin to establish a corresponding world coordinate system, and the world coordinate system is used for describing the actual position of any object in the environment.
In step 206, a time interval between the initial image frame and the subsequent image frame is determined, and the motion parameter change information of the electronic device in the time interval is calculated.
In the embodiment of the application, in addition to the influence caused by the movement of the target object, the application also considers the influence caused by the movement state of the lens. Therefore, after the position change information of the target object in the current scene is acquired, the motion parameter change information of the electronic device can also be acquired. Specifically, the motion parameter change information of the electronic device may be calculated according to the motion parameter of the electronic device when the initial image frame is shot and the motion parameter of the electronic device when the subsequent image frame is acquired.
In general, a subsequent image frame is acquired after the initial image frame is captured. For example, the initial image frame and the subsequent image frame may be two images corresponding to different times in a video, or two images acquired by the camera a certain time interval apart. Therefore, the time interval between the initial image frame and the subsequent image frame may be determined first, and the motion parameter change information of the electronic device in that time interval may be calculated.
In an embodiment, the motion parameters of the electronic device may be detected by sensors on the device, such as a gyroscope and an acceleration sensor, and may further include an angular velocity sensor, a pressure sensor, and the like. The motion parameter acquired by the acceleration sensor may be a motion acceleration, while the gyroscope is used to detect the steering angle of the electronic device. A single-axis gyroscope can detect the angle change of the electronic device in a single direction, sensing one dimension (for example, left-right); a three-axis gyroscope can measure angle changes about the X, Y, and Z axes, sensing the front-back, left-right, and up-down dimensions. Of course, the above embodiments are only alternative embodiments, and the specific configuration can be chosen according to the actual situation. That is, the step of acquiring the motion parameter change information of the electronic device may include:
acquiring parameter change information of a gyroscope and an acceleration sensor of the electronic equipment;
and calculating the motion parameter change information of the electronic equipment according to the parameter change information of the gyroscope and the acceleration sensor.
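The two sub-steps above can be sketched as follows, assuming per-sample accelerometer readings in m/s² and gyroscope readings in rad/s over the frame interval (first-order integration; a real device would also remove gravity and filter sensor bias, which is omitted here):

```python
def motion_parameter_change(accel_samples, gyro_samples, dt):
    """Integrate accelerometer and gyroscope samples over a frame
    interval to estimate the device's motion parameter change.

    `dt` is the sampling period in seconds; returns the change in
    velocity and the change in orientation angle (one axis each,
    for illustration).
    """
    dv = sum(a * dt for a in accel_samples)      # velocity change
    dtheta = sum(w * dt for w in gyro_samples)   # orientation change
    return dv, dtheta

# Two samples spanning a 20 ms frame interval (illustrative values).
dv, dtheta = motion_parameter_change([0.5, 0.5], [0.1, 0.1], dt=0.01)
```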
And step 207, calculating the relative position change information of the electronic equipment and the target object according to the position change information and the motion parameter change information.
In an embodiment, when correcting the focusing parameter, the position change of the target object mainly reflects the active motion of the target object itself (for example, a moving car), while the motion parameter change information reflects the change of the camera relative to its original position. Superposing the two change parameters reflects the change of the lens position relative to the target object, so that the focusing parameter can be adjusted accurately.
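The superposition described above can be sketched as a component-wise combination of the two change vectors (the subtraction sign convention is an assumption for illustration; the patent does not fix one):

```python
def relative_change(object_delta, camera_delta):
    """Combine the target's displacement with the camera's own
    displacement to estimate motion of the target relative to the
    lens (object motion minus camera motion, by assumption)."""
    return tuple(o - c for o, c in zip(object_delta, camera_delta))

# Target moved (12, -3) while the camera itself moved (2, 1).
rel = relative_change((12.0, -3.0), (2.0, 1.0))
```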
And step 208, correcting the initial focusing parameters according to the relative position change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
After the focusing parameters are corrected, the electronic equipment can perform subsequent focusing processing according to the adjusted focusing parameters, so that the current scene is shot again according to the corrected focusing parameters to acquire target image data. After the target image data is acquired, the target image data can be used as new initial image data, so that the focusing parameters are continuously corrected and shot in the subsequent process.
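The loop described in steps 204-208, in which each corrected shot becomes the new initial image frame, can be sketched with hypothetical stand-ins for the camera, tracking, and correction modules (none of these names come from the patent):

```python
class StubCamera:
    """Hypothetical camera whose 'frame' is just a drifting feature
    coordinate, standing in for real image capture."""
    def __init__(self):
        self.x = 0.0
    def shoot(self, params):
        self.x += 1.0  # the subject drifts between frames
        return self.x

def track(initial, subsequent):
    """Relative position change between two frames (steps 205-207)."""
    return subsequent - initial

def correct(params, delta):
    """Shift the focusing parameter by the observed change (step 208)."""
    return params + delta

def focus_loop(camera, frames, params):
    initial = camera.shoot(params)
    for _ in range(frames):
        subsequent = camera.shoot(params)
        delta = track(initial, subsequent)
        params = correct(params, delta)
        initial = subsequent  # target image becomes the new initial frame
    return params

final = focus_loop(StubCamera(), frames=3, params=0.0)
```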
As can be seen from the above, the image processing method provided in this embodiment of the present application may capture a current scene according to an initial focusing parameter to obtain initial image data, perform region segmentation on the initial image data according to the initial focusing parameter to determine a focusing region, determine a target object in the focusing region, determine a feature point according to the target object, use the initial image data as an initial image frame, obtain a subsequent image frame, calculate position change information according to position information of the feature point in the subsequent image frame and position information of the feature point in the initial image frame, determine a time interval between the initial image frame and the subsequent image frame, calculate motion parameter change information of the electronic device within the time interval, calculate relative position change information between the electronic device and the target object according to the position change information and the motion parameter change information, correct the initial focusing parameter according to the relative position change information, and shoot the current scene again according to the corrected focusing parameter to obtain target image data. According to the embodiment of the application, the focusing parameters can be corrected for each frame of image, so that the focusing accuracy and the focusing efficiency are improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. Wherein the image processing apparatus 30 comprises:
a shooting module 301, configured to shoot a current scene according to the initial focusing parameter to obtain initial image data;
a determining module 302, configured to acquire a focusing area in the initial image data, and determine a target object in the focusing area;
an obtaining module 303, configured to obtain, in real time, position change information of the target object in the current scene and motion parameter change information of the electronic device;
and the correcting module 304 is configured to correct the initial focusing parameter according to the position change information and the motion parameter change information, and capture the current scene again according to the corrected focusing parameter, so as to obtain target image data.
In an embodiment, with continued reference to fig. 5, the determining module 302 may specifically include:
a segmentation submodule 3021, configured to perform region segmentation on the initial image data according to the initial focusing parameter to determine a focusing region;
a determination submodule 3022 configured to determine a target object in the focus area, and determine a feature point according to the target object.
In an embodiment, the obtaining module 303 may specifically include:
a processing submodule 3031, configured to take the initial image data as an initial image frame;
the first calculation submodule 3032 is configured to acquire a subsequent image frame, and calculate position change information according to position information of the feature point in the subsequent image frame and position information of the feature point in the initial image frame;
a second calculating submodule 3033, configured to determine a time interval between the initial image frame and the subsequent image frame, and calculate motion parameter change information of the electronic device in the time interval.
As can be seen from the above, the image processing apparatus 30 according to the embodiment of the present application may capture a current scene according to the initial focusing parameter to obtain initial image data, obtain a focusing area in the initial image data, determine a target object in the focusing area, obtain position change information of the target object in the current scene and motion parameter change information of the electronic device in real time, correct the initial focusing parameter according to the position change information and the motion parameter change information, and capture the current scene again according to the corrected focusing parameter to obtain the target image data. According to the embodiment of the application, the focusing parameters can be corrected for each frame of image, so that the focusing accuracy and the focusing efficiency are improved.
In the embodiment of the present application, the image processing apparatus and the image processing method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
The term "module" as used herein may be considered a software object executing on the computing system. The different components, modules, engines, and services described herein may be considered as implementation objects on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present application.
The embodiment of the present application also provides a storage medium, on which a computer program is stored, which, when running on a computer, causes the computer to execute the above-mentioned image processing method.
The embodiment of the application also provides an electronic device, such as a tablet computer, a mobile phone and the like. The processor in the electronic device loads instructions corresponding to processes of one or more application programs into the memory according to the following steps, and the processor runs the application programs stored in the memory, so that various functions are realized:
shooting the current scene according to the initial focusing parameters to obtain initial image data;
acquiring a focusing area in the initial image data, and determining a target object in the focusing area;
acquiring position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time;
and correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 6, the electronic device 400 includes a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is the control center of the electronic device 400; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 400 and processes data by running or loading the computer program stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions, as follows:
shooting a current scene according to the initial focusing parameters to obtain initial image data;
acquiring a focusing area in the initial image data, and determining a target object in the focusing area;
acquiring position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time;
and correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
Referring also to fig. 7, in some embodiments, the electronic device 400 may further include: a display 403, radio frequency circuitry 404, audio circuitry 405, and a power supply 406. The display 403, the rf circuit 404, the audio circuit 405, and the power source 406 are electrically connected to the processor 401.
The display 403 may be used to display information entered by or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. The display 403 may include a display panel, and in some embodiments the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other electronic devices, and to exchange signals with them. In general, the radio frequency circuit 404 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, where it is converted into a sound signal for output.
The power supply 406 may be used to power various components of the electronic device 400. In some embodiments, power supply 406 may be logically coupled to processor 401 via a power management system, such that functions to manage charging, discharging, and power consumption management are performed via the power management system. The power supply 406 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 7, the electronic device 400 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, as will be understood by those skilled in the art, all or part of the processes for implementing the image processing method of the embodiments of the present application may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device; the execution process may include, for example, the processes of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, characterized in that it comprises the steps of:
shooting the current scene according to the initial focusing parameters to obtain initial image data;
acquiring a focusing area in the initial image data, and determining a target object in the focusing area;
acquiring position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time;
and correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
2. The image processing method according to claim 1, wherein the step of acquiring a focus area in the initial image data and determining a target object in the focus area comprises:
performing region segmentation on the initial image data according to the initial focusing parameters to determine a focusing region;
and determining a target object in the focusing area, and determining a characteristic point according to the target object.
3. The image processing method according to claim 2, wherein the step of acquiring, in real time, the position change information of the target object in the current scene and the motion parameter change information of the electronic device includes:
taking the initial image data as an initial image frame;
acquiring a subsequent image frame, and calculating position change information according to the position information of the feature point in the subsequent image frame and the position information of the feature point in the initial image frame;
a time interval between the initial image frame and the subsequent image frame is determined, and motion parameter change information of the electronic device within the time interval is calculated.
4. The image processing method according to claim 1, wherein the step of correcting the initial focusing parameter according to the position change information and the motion parameter change information comprises:
calculating the relative position change information of the electronic equipment and the target object according to the position change information and the motion parameter change information;
and correcting the initial focusing parameters according to the relative position change information.
5. The image processing method according to claim 1, wherein the step of acquiring the motion parameter change information of the electronic device comprises:
acquiring parameter change information of a gyroscope and an acceleration sensor of the electronic equipment;
and calculating the motion parameter change information of the electronic equipment according to the parameter change information of the gyroscope and the acceleration sensor.
6. An image processing apparatus, characterized in that the apparatus comprises:
the shooting module is used for shooting the current scene according to the initial focusing parameters so as to acquire initial image data;
the determining module is used for acquiring a focusing area in the initial image data and determining a target object in the focusing area;
the acquisition module is used for acquiring the position change information of the target object in the current scene and the motion parameter change information of the electronic equipment in real time;
and the correction module is used for correcting the initial focusing parameters according to the position change information and the motion parameter change information and shooting the current scene again according to the corrected focusing parameters so as to acquire target image data.
7. The image processing apparatus according to claim 6, wherein the determination module comprises:
the segmentation submodule is used for carrying out region segmentation on the initial image data according to the initial focusing parameters so as to determine a focusing region;
and the determining submodule is used for determining a target object in the focusing area and determining the characteristic point according to the target object.
8. The image processing apparatus according to claim 7, wherein the acquisition module includes:
a processing submodule for taking the initial image data as an initial image frame;
the first calculation submodule is used for acquiring a subsequent image frame and calculating position change information according to the position information of the feature point in the subsequent image frame and the position information of the feature point in the initial image frame;
and the second calculation submodule is used for determining a time interval between the initial image frame and the subsequent image frame and calculating the motion parameter change information of the electronic equipment in the time interval.
9. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the image processing method according to any one of claims 1 to 5.
10. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions, wherein the instructions in the memory are loaded by the processor for performing the steps of:
shooting the current scene according to the initial focusing parameters to obtain initial image data;
acquiring a focusing area in the initial image data, and determining a target object in the focusing area;
acquiring position change information of the target object in the current scene and motion parameter change information of the electronic equipment in real time;
and correcting the initial focusing parameters according to the position change information and the motion parameter change information, and shooting the current scene again according to the corrected focusing parameters to acquire target image data.
CN202011241243.6A 2020-11-09 2020-11-09 Image processing method, image processing device, storage medium and electronic equipment Pending CN114466129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011241243.6A CN114466129A (en) 2020-11-09 2020-11-09 Image processing method, image processing device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN114466129A true CN114466129A (en) 2022-05-10

Family

ID=81404882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011241243.6A Pending CN114466129A (en) 2020-11-09 2020-11-09 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114466129A (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000188713A (en) * 1998-12-22 2000-07-04 Ricoh Co Ltd Automatic focus controller and method for determining its focusing
CN102970485A (en) * 2012-12-03 2013-03-13 广东欧珀移动通信有限公司 Automatic focusing method and device
US20160323499A1 (en) * 2014-12-19 2016-11-03 Sony Corporation Method and apparatus for forming images and electronic equipment
CN106170064A (en) * 2016-09-14 2016-11-30 广东欧珀移动通信有限公司 Camera focusing method, system and electronic equipment
US20170025150A1 (en) * 2009-06-15 2017-01-26 Olympus Corporation Photographing device, photographing method, and playback method
US20170244883A1 (en) * 2016-02-22 2017-08-24 Panasonic Intellectual Property Management Co., Ltd. Imaging apparatus
US20170257557A1 (en) * 2016-03-02 2017-09-07 Qualcomm Incorporated Irregular-region based automatic image correction
US20180041754A1 (en) * 2016-08-08 2018-02-08 Fotonation Limited Image acquisition device and method
CN108351654A (en) * 2016-02-26 2018-07-31 深圳市大疆创新科技有限公司 System and method for visual target tracking
CN108496350A (en) * 2017-09-27 2018-09-04 深圳市大疆创新科技有限公司 A kind of focusing process method and apparatus
CN109451240A (en) * 2018-12-04 2019-03-08 百度在线网络技术(北京)有限公司 Focusing method, device, computer equipment and readable storage medium storing program for executing
CN110881105A (en) * 2019-11-12 2020-03-13 维沃移动通信有限公司 Shooting method and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115297262A (en) * 2022-08-09 2022-11-04 中国电信股份有限公司 Focusing method, focusing device, storage medium and electronic equipment
CN117135451A (en) * 2023-02-27 2023-11-28 荣耀终端有限公司 Focusing processing method, electronic device and storage medium
CN117389745A (en) * 2023-12-08 2024-01-12 荣耀终端有限公司 Data processing method, electronic equipment and storage medium
CN117389745B (en) * 2023-12-08 2024-05-03 荣耀终端有限公司 Data processing method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110493538B (en) Image processing method, image processing device, storage medium and electronic equipment
JP6271990B2 (en) Image processing apparatus and image processing method
CN108377342B (en) Double-camera shooting method and device, storage medium and terminal
CN114466129A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110086905B (en) Video recording method and electronic equipment
CN109903324B (en) Depth image acquisition method and device
JP2020511685A (en) Focusing method, terminal, and computer-readable storage medium
CN110506415B (en) Video recording method and electronic equipment
CN111901524B (en) Focusing method and device and electronic equipment
CN107948505B (en) Panoramic shooting method and mobile terminal
CN113711268A (en) Electronic device for applying shot effect to image and control method thereof
KR20140090078A (en) Method for processing an image and an electronic device thereof
CN109714539B (en) Image acquisition method and device based on gesture recognition and electronic equipment
CN112637500B (en) Image processing method and device
CN114339102B (en) Video recording method and equipment
CN113099122A (en) Shooting method, shooting device, shooting equipment and storage medium
CN108513069B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110708463B (en) Focusing method, focusing device, storage medium and electronic equipment
CN112135034A (en) Photographing method and device based on ultrasonic waves, electronic equipment and storage medium
JP2007281555A (en) Imaging apparatus
CN109451240B (en) Focusing method, focusing device, computer equipment and readable storage medium
CN113395450A (en) Tracking shooting method, device and storage medium
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114125268A (en) Focusing method and device
CN110677580B (en) Shooting method, shooting device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination