CN117376699A - Image processing method, device and equipment - Google Patents

Info

Publication number
CN117376699A
Authority
CN
China
Prior art keywords
target
image
frame
determining
mode
Prior art date
Legal status
Pending
Application number
CN202311285349.XA
Other languages
Chinese (zh)
Inventor
于宙
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority claimed from CN202311285349.XA
Publication of CN117376699A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, device and equipment, wherein the method comprises the following steps: acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode; if the shooting mode is a first mode, determining a target area in the image sensor for acquiring at least one frame of image, wherein the target area is smaller than the image size of the image acquired by the image sensor; and processing pixel information of at least one frame of the target area based on the target resolution to obtain and display at least one frame of target image.

Description

Image processing method, device and equipment
Technical Field
The embodiments of the present application relate to the field of image processing, and relate to, but are not limited to, an image processing method, an image processing apparatus, and an image processing device.
Background
In the process of acquiring an image, the camera's view angle is often set too wide because of shooting-parameter settings, the usability of face recognition, and the like. The captured face then occupies only a small portion of the image, which reduces the displayed clarity of the face. When a user needs to enlarge the face, existing schemes output an image at the resolution requested by the Application (APP) and then enlarge it directly, sacrificing resolution and definition. As a result, the resolution and definition of the face in the resulting image cannot meet user requirements.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, apparatus, device, and storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode;
if the shooting mode is a first mode, determining a target area for acquiring at least one frame of image in the image sensor, wherein the target area is smaller than the image size of the image acquired by the image sensor;
and processing pixel information of at least one frame of the target area based on the target resolution to obtain and display at least one frame of target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an image sensor for acquiring at least one frame of image;
the driving assembly is connected with the digital signal processing assembly and used for acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode; transmitting the mode instruction to the digital signal processing component;
the digital signal processing component is connected with the driving component and the image sensor and is used for determining a target area for acquiring at least one frame of image in the image sensor, wherein the target area is smaller than or equal to the image size of the image acquired by the image sensor; processing pixel information of at least one frame of the target area based on the target resolution to obtain at least one frame of target image;
and the display component is used for displaying at least one frame of the target image.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring a mode instruction, and the mode instruction comprises a target resolution and a shooting mode;
the determining module is used for determining a target area for acquiring at least one frame of image in the image sensor if the shooting mode is a first mode, wherein the target area is smaller than the image size of the image acquired by the image sensor;
and the obtaining module is used for processing the pixel information of at least one frame of the target area based on the target resolution, obtaining at least one frame of target image and displaying the target image.
In a fourth aspect, embodiments of the present application provide a storage medium storing executable instructions for implementing the above method when executed by a processor.
In the embodiments of the present application, a mode instruction is first acquired, wherein the mode instruction comprises a target resolution and a shooting mode; then, if the shooting mode is a first mode, a target area for acquiring at least one frame of image is determined in the image sensor, wherein the target area is smaller than the image size of the image acquired by the image sensor; finally, pixel information of at least one frame of the target area is processed based on the target resolution to obtain and display at least one frame of target image. In this way, in the first mode the target area is frame-selected from the pixel information on the image sensor based on the target resolution, and a target image meeting the target resolution is obtained from the pixel information of the target area, so that image loss can be effectively reduced while the target resolution is satisfied.
Drawings
Fig. 1A is a schematic implementation flowchart of an image processing method according to an embodiment of the present application;
Fig. 1B is a schematic diagram of frame-selecting and restoring a target area according to an embodiment of the present application;
Fig. 2 is a schematic implementation flowchart of an image processing method according to an embodiment of the present application;
Fig. 3 is a schematic implementation flowchart of a method for determining a target area according to an embodiment of the present application;
Fig. 4A is a schematic hardware entity diagram of an image processing device according to an embodiment of the present application;
Fig. 4B is a schematic flowchart of image processing instruction transmission according to an embodiment of the present application;
Fig. 5 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the embodiments are described in further detail below with reference to the accompanying drawings. The following examples illustrate the present application and are not intended to limit its scope.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that "first", "second", and "third" may be interchanged in a specific order or sequence, where permitted, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
An embodiment of the present application provides an image processing method, as shown in fig. 1A, including:
step S110, a mode instruction is acquired, wherein the mode instruction comprises a target resolution and a shooting mode;
here, the target resolution is the display resolution of the image. Resolution determines how finely the details of the image are rendered: the higher the resolution, the more pixels the image contains and the clearer it appears. In practice, the target resolution required by a user can be acquired through an application program based on the user's instruction. For example, the user may enter a resolution request on the operation interface, or the target resolution may be determined from a gesture operation on the screen.
The photographing modes include at least a large-view-angle mode and a small-view-angle mode. The large-view-angle mode may be used to obtain a panoramic image, while the small-view-angle mode may be used to focus on a local region such as a person's portrait. In practice, a stand-alone application can be provided for the user to set the required mode, or this function can be integrated into the application that acquires the target image, so that the mode is set automatically according to the user's shooting requirements or actively by the user.
In the implementation process, the acquired mode instruction comprises the target resolution and the shooting mode.
Step S120, if the shooting mode is a first mode, determining a target area for acquiring at least one frame of image in the image sensor, wherein the target area is smaller than the image size of the image acquired by the image sensor;
here, the first mode may be the small-view-angle mode, i.e., a portrait mode.
An Image Sensor converts the optical image on its photosensitive surface into an electrical signal proportional to that optical image through the photoelectric conversion of a photoelectric device. For example, Charge-Coupled Device (CCD) sensors and Complementary Metal Oxide Semiconductor (CMOS) sensors are key components of digital cameras. A pixel is the smallest photosensitive unit of the image sensor, i.e., the smallest unit of the image. The number of pixels is determined by the number of photosensitive elements on the CCD/CMOS, with each pixel corresponding to one photosensitive element; the more pixels, the more photosensitive elements. A pixel count is expressed by two numbers, e.g., 720×480, where 720 is the number of pixels along the image length and 480 the number along the image width; their product is the camera's pixel count. Image-sensor pixel counts are commonly given in megapixels (MP), e.g., 2M, 5M, 13M.
In an implementation process, a target area can be determined in a pixel area of the image sensor, where the target area is smaller than an image size of an image acquired by the image sensor and covers a target object to be acquired.
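As an illustrative sketch (not the patent's actual implementation), selecting a sensor-level target area that covers a detected target object with some margin might look like the following; the function name, the `margin` parameter, and the `(x, y, w, h)` box format are all assumptions:

```python
def determine_target_area(sensor_w, sensor_h, target_box, margin=0.2):
    """Choose a crop window on the sensor that covers the target object's
    bounding box plus a margin, while staying smaller than the full
    sensor readout area (hypothetical helper for illustration)."""
    x, y, w, h = target_box
    cx, cy = x + w / 2, y + h / 2
    # Expand the box by the margin on every side, clamped to the sensor.
    crop_w = min(int(w * (1 + 2 * margin)), sensor_w)
    crop_h = min(int(h * (1 + 2 * margin)), sensor_h)
    # Keep the window centred on the target but inside the sensor bounds.
    left = max(0, min(int(cx - crop_w / 2), sensor_w - crop_w))
    top = max(0, min(int(cy - crop_h / 2), sensor_h - crop_h))
    return left, top, crop_w, crop_h
```

For example, on a 2592×1944 (~5 MP) sensor with a face box at (1000, 800) of size 400×300, the returned window stays centred on the face and well inside the sensor.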
And step S130, processing pixel information of at least one frame of the target area based on the target resolution to obtain and display at least one frame of target image.
In the implementation process, the acquired pixel information of the target area can be processed based on the target resolution, and the obtained target image meets the requirement of the target resolution.
Fig. 1B is a schematic diagram of frame-selecting and restoring a target area according to an embodiment of the present application. As shown in Fig. 1B, the diagram includes the original image-sensor pixel area 11, the frame-selected target area 12, and the target image 13, where,
in some embodiments, when the target resolution is determined to be 5M, a region containing the target is frame-selected on the 5M-pixel image sensor, with the proportion of the selected region likewise determined by the 5M target resolution. Restoring the frame-selected target area 12 at the 5M target resolution yields a target image 13 in which the target is correspondingly enlarged. Since the target resolution equals the sensor's full pixel count, the frame-selected area contains fewer native pixels than the target image requires, so the resulting target image 13 suffers some image loss, but the target is enlarged while the target resolution is satisfied.
In some embodiments, when the target resolution is determined to be 2.1M, a region containing the target is frame-selected on the 5M-pixel image sensor, with the proportion of the selected region determined by the 2.1M target resolution. Restoring the frame-selected target area 12 at the 2.1M target resolution yields a target image 13 in which the target is correspondingly enlarged without image loss.
In some embodiments, when the target resolution is determined to be 0.92M, a region containing the target may be frame-selected on the 5M-pixel image sensor, with the proportion of the selected region likewise determined by the 0.92M target resolution. Restoring the frame-selected target area 12 at the 0.92M target resolution yields a target image 13 in which the target is correspondingly enlarged without image loss.
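The three cases above reduce to one condition, sketched below. The 2592×1944 sensor geometry for "~5 MP" and the helper name are illustrative assumptions, not values from the patent:

```python
SENSOR_W, SENSOR_H = 2592, 1944  # assumed geometry for a ~5 MP sensor

def native_crop_is_lossless(target_w, target_h):
    """A sensor-level crop enlarges the target without image loss only
    when the target frame fits strictly inside the sensor, so the crop
    can hold exactly one native pixel per output pixel. When the target
    resolution matches the full sensor (the 5M case), any crop smaller
    than the sensor must be upscaled, causing some loss."""
    return target_w < SENSOR_W and target_h < SENSOR_H
```

So the 2.1M (1920×1080) and 0.92M (1280×720) cases are lossless, while the 5M case trades some loss for the enlargement.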
In the embodiments of the present application, a mode instruction is first acquired, wherein the mode instruction comprises a target resolution and a shooting mode; then, if the shooting mode is a first mode, a target area for acquiring at least one frame of image is determined in the image sensor, wherein the target area is smaller than the image size of the image acquired by the image sensor; finally, pixel information of at least one frame of the target area is processed based on the target resolution to obtain and display at least one frame of target image. In this way, in the first mode the target area is frame-selected from the pixel information on the image sensor based on the target resolution, and a target image meeting the target resolution is obtained from the pixel information of the target area, so that image loss can be effectively reduced while the target resolution is satisfied.
An embodiment of the present application provides an image processing method, as shown in fig. 2, including the following steps:
step S210, acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode;
step S220, determining the shooting mode as a second mode;
here, the second mode may be a large viewing angle mode, i.e., a panorama mode.
Step S230, determining at least one frame of the target area as an original area of an image acquired by the image sensor;
in the second mode, the target area may be the original area of the image acquired by the image sensor, i.e. all pixel information on the image sensor is acquired.
And step 240, processing pixel information of at least one frame of the target area based on the target resolution to obtain and display at least one frame of target image.
In the implementation process, sampling or pixel binning can be used to obtain, based on the target resolution, a target image that meets the target-resolution requirement.
In the embodiments of the present application, the shooting mode is first determined to be the second mode, and at least one frame of the target area is then determined as the original area of the image acquired by the image sensor. In this way, a target image meeting the target-resolution requirement can be obtained in the second mode.
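A minimal sketch of the second-mode sampling path, assuming a frame is represented as a list of pixel rows and using nearest-neighbour sampling (the patent does not specify the sampling algorithm):

```python
def subsample(frame, target_w, target_h):
    """Nearest-neighbour subsampling of a full-sensor frame down to the
    target resolution; a stand-in for the 'sampling' mentioned above."""
    src_h, src_w = len(frame), len(frame[0])
    return [
        [frame[r * src_h // target_h][c * src_w // target_w]
         for c in range(target_w)]
        for r in range(target_h)
    ]
```

On a 4×4 test frame, `subsample(frame, 2, 2)` keeps every other pixel in each dimension.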
In some embodiments, the step S120 of determining the target area for acquiring the at least one frame of image in the image sensor if the shooting mode is the first mode may be implemented by the following steps:
step 121, determining the shooting mode as a first mode;
step 122, determining at least one frame of the target area based on the target resolution.
In practice, as shown in FIG. 1B, different target areas may be determined in the image sensor based on different target resolutions.
For example, when the target resolution requirement is 1080P (2.1M), the image sensor resolution is greater than this. An optimized view-angle frame can therefore be preset for each target resolution, so that a target area matching the 2.1M target resolution can be obtained by a direct "crop" (frame selection) at the bottom layer of the image sensor (e.g., a 5M sensor), while still satisfying the core requirement of correspondingly enlarging the target within the target area.
In the embodiment of the application, the shooting mode is first determined to be a first mode; at least one frame of the target region is then determined based on the target resolution. In this way, the pixels within the resulting target area can be used to process the resulting target image that meets the target resolution requirements.
In some embodiments, the above step 122 "determining at least one frame of the target area based on the target resolution" may be achieved by:
step 1221, determining size information corresponding to a target object in the target image, where the target object is an object determined based on the first mode;
here, the target object may first be determined in the first mode, and then the size information of the pixel region corresponding to the target object on the image sensor may be determined.
Step 1222, determining at least one frame of the target area based on the size information and the target resolution.
In the implementation process, the target area can be determined based on the size information corresponding to the target object and the target resolution, so that the determined target area not only meets the target resolution but also completely restores the target object when the target image is generated.
In the embodiments of the present application, the size information corresponding to the target object in the target image is first determined, wherein the target object is determined based on the first mode; at least one frame of the target area is then determined based on the size information and the target resolution. In this way, when generating the target image, the obtained target area can completely restore the target object while satisfying the target resolution.
In some embodiments, as shown in fig. 3, the above step 1222 "determining the target area of at least one frame based on the size information and the target resolution" may be implemented by:
step S310, determining at least one frame first area based on the target resolution;
step S320, determining that all information of the target object can be acquired in at least one frame of the first area based on the size information;
step S330, determining the first region as the target region;
for example, in the case of a target resolution of 1080P, the determined first region is able to acquire all information of the target object.
Step S340, determining that all information of the target object cannot be acquired in the first area based on the size information;
for example, in the case where the target resolution is 720P, the determined first area cannot collect all information corresponding to the target.
Step S350, determining at least one frame of second area based on the size information, wherein the second area is larger than the first area, and the second area can collect all information of the target object;
in some embodiments, the first region may be enlarged to enable the determination that the second region is capable of collecting all information of the target object based on the size information of the target object.
For example, at a resolution of 720P the frame may be too small to fully cover the portrait, so a larger frame is required to cover the target object. The frame can be determined from the boundary of the target object, or directly enlarged to 1080P, i.e., the frame selection can be performed again using the 1080P target resolution to obtain a second area that collects all information of the target object. If a 1080P frame still cannot cover the whole target object, the frame may be enlarged again. The resulting frame is thus larger, i.e., more native pixels are obtained, and the entire portrait can be covered.
In the implementation process, since the number of acquired native pixels is then larger than needed, the 720P target image can be obtained by sampling.
Step S360, determining the second area as the target area.
In the embodiments of the present application, it is determined, based on the size information, whether all information of the target object can be acquired in at least one frame of the first area; if so, the first area is determined as the target area. If all information of the target object cannot be acquired in the first area, at least one frame of a second area is determined based on the size information, wherein the second area is larger than the first area and can collect all information of the target object; the second area is then determined as the target area. In this way, the pixel information in the determined target area can restore all information of the target object.
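The decision flow of steps S310 to S360 can be sketched as follows; the preset-frame table, the function signature, and passing the object's size as width and height are all illustrative assumptions:

```python
def select_target_area(target_res, obj_w, obj_h, preset_frames):
    """Sketch of steps S310-S360: take the preset first area for the
    target resolution; if it cannot cover the target object's size,
    step up to the smallest larger preset frame that can (the second
    area)."""
    first_w, first_h = preset_frames[target_res]          # S310
    if obj_w <= first_w and obj_h <= first_h:             # S320
        return first_w, first_h                           # S330
    # S340-S350: the first area cannot cover the object; try larger
    # preset frames in increasing order of area.
    for w, h in sorted(preset_frames.values(), key=lambda f: f[0] * f[1]):
        if w * h > first_w * first_h and obj_w <= w and obj_h <= h:
            return w, h                                   # S360
    raise ValueError("no preset frame covers the target object")
```

With a 720P preset that is too small for a 1500×900 portrait, the function steps up to the 1080P frame, matching the 720P example in the text.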
In some embodiments, the step S130 "process the pixel information of the target area of at least one frame based on the target resolution, and obtain and display at least one frame of the target image" may be implemented by:
step 131, determining that the resolution corresponding to the image generated based on the pixels of the target area is greater than the target resolution;
in practice, it may be predetermined that the resolution of the image generated based on the pixels of the target region is greater than the target resolution. In this case, the acquired pixel information is large.
And 132, processing pixels of at least one frame of the target area by utilizing pixel merging, wherein the obtained at least one frame of the target image meets the target resolution.
Since the acquired target area may contain more pixels than are needed to generate the target image, the pixels of at least one frame of the target area may be processed by pixel binning to obtain a target image satisfying the target resolution.
In the embodiment of the application, first, determining that a resolution corresponding to an image generated based on pixels of the target area is greater than the target resolution; and then, processing pixels of at least one frame of the target area by utilizing pixel merging, wherein the obtained at least one frame of the target image meets the target resolution. In this way, when the number of pixels in the acquired target region is large, a target image satisfying the target resolution can be obtained by the pixel merging method.
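A common concrete form of pixel merging is 2×2 binning, which averages each 2×2 block of native pixels into one output pixel and halves the resolution in each dimension. This is an assumed implementation for illustration; the patent does not fix the binning kernel:

```python
def bin2x2(frame):
    """Average each 2x2 block of native pixels into one output pixel,
    halving resolution in each dimension (frame is a list of rows)."""
    h, w = len(frame), len(frame[0])
    return [
        [(frame[2 * r][2 * c] + frame[2 * r][2 * c + 1] +
          frame[2 * r + 1][2 * c] + frame[2 * r + 1][2 * c + 1]) // 4
         for c in range(w // 2)]
        for r in range(h // 2)
    ]
```

Unlike subsampling, binning keeps the light gathered by all four native pixels, which also improves the signal-to-noise ratio.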
In some embodiments, the above image processing method further comprises the steps of:
step S140, acquiring an image magnification instruction, acquiring an updated target resolution based on the image magnification instruction, and redefining the target area based on the updated target resolution.
In the implementation process, after acquiring the target image, the user may further need to enlarge the target object in the target image. In this case, the updated target resolution may be obtained in response to an image-magnification instruction triggered by the user, and the target area corresponding to the updated target resolution may be re-determined within the target area previously determined from the original target resolution.
In the embodiment of the application, an image amplifying instruction is acquired, an updated target resolution is acquired based on the image amplifying instruction, and the target area is redetermined based on the updated target resolution. In this way, a re-magnification of the target object in the target image can be achieved.
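One way to realize re-determining the target area on a magnification instruction is to shrink the current crop window around its centre by the zoom factor; the function and its parameters are hypothetical, not taken from the patent:

```python
def updated_crop(sensor_w, sensor_h, crop, zoom_factor):
    """On an image-magnification instruction, shrink the current crop
    window (x, y, w, h) around its centre by zoom_factor, i.e.
    re-determine the target area, keeping it inside the sensor."""
    x, y, w, h = crop
    new_w, new_h = int(w / zoom_factor), int(h / zoom_factor)
    cx, cy = x + w // 2, y + h // 2
    # Re-centre the smaller window and clamp it to the sensor bounds.
    nx = max(0, min(cx - new_w // 2, sensor_w - new_w))
    ny = max(0, min(cy - new_h // 2, sensor_h - new_h))
    return nx, ny, new_w, new_h
```

For example, a 2× magnification from the full 2592×1944 readout yields a centred 1296×972 window.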
An embodiment of the present application provides an image processing apparatus as shown in fig. 4A, including:
an image sensor 41 for acquiring at least one frame of image;
a driving component 42 connected with the digital signal processing component and used for acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode; transmitting the mode instruction to the digital signal processing component;
a digital signal processing component 43, connected to the driving component and the image sensor, for determining a target area for acquiring at least one frame of image in the image sensor, where the target area is smaller than or equal to the image size of the image acquired by the image sensor; processing pixel information of at least one frame of the target area based on the target resolution to obtain at least one frame of target image;
a display component 44 for displaying at least one frame of the target image.
In an embodiment of the present application, an image processing apparatus includes: an image sensor for acquiring at least one frame of image; a driving component, connected to the digital signal processing component, for acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode, and for transmitting the mode instruction to the digital signal processing component; the digital signal processing component, connected to the driving component and the image sensor, for determining a target area for acquiring at least one frame of image in the image sensor, wherein the target area is smaller than or equal to the image size of the image acquired by the image sensor, and for processing pixel information of at least one frame of the target area based on the target resolution to obtain at least one frame of target image; and a display component for displaying at least one frame of the target image. In this way, in the first mode the apparatus frame-selects the target area from the pixel information on the image sensor based on the target resolution and obtains a target image satisfying the target resolution from the pixel information of the target area, effectively reducing image loss while satisfying the target resolution.
Fig. 4B is a schematic flow chart of image processing instruction transmission according to an embodiment of the present application, as shown in fig. 4B, the instruction transmission includes the following steps:
step S410, acquiring target resolution by using a camera application;
here, the camera application may be an application for acquiring an original image and presenting a target image.
Step S420, acquiring a shooting mode by using a camera setting application;
here, the camera setting application may be an application for setting a photographing mode. The photographing mode may include an original mode, i.e., a large viewing angle mode, and a portrait mode, i.e., a small viewing angle mode.
In the implementation process, the camera setting application may be integrated into the camera application described in step S410, or may be a stand-alone application independent of the camera application described in step S410. The camera setting application can provide the user with the active setting of the shooting mode, and can also automatically set the shooting mode according to the shooting requirement of the user.
Step S430, the driving component composes the target resolution acquired by the camera application and the shooting mode acquired by the camera setting application into a mode instruction and sends it to the digital signal processing component; the digital signal processing component processes the original video stream based on the mode instruction to obtain a target video stream, and sends the target video stream to the camera application through the driving component.
Here, the photographing mode in the mode instruction includes an original mode and a portrait mode.
If the mode command is the original mode, the digital signal processing component adopts a large visual angle to collect images, and a target video stream obtained by processing the collected video stream is transmitted to the driving component.
If the mode command is a portrait mode, the digital signal processing component adopts a small visual angle to collect images, and a target video stream obtained by processing the collected video stream is transmitted to the driving component.
For example, for the mainstream video resolution of 1080P (2.1M), current sensor resolutions are typically greater than this. In the implementation process, an optimized view-angle frame can be preset for each target resolution, so that a target resolution such as 2.1M can be read out directly from the bottom layer of the image sensor (e.g., a 5M sensor) through "crop" (frame selection), while also meeting the core requirement of face enlargement.
The drive component passes the target video stream to the camera application for presentation.
In the implementation process, execution of the image processing instruction relies on the functional support of the digital signal processing component (DSP chip). Based on upper-level instructions (from the application program and the driving component), the DSP chip can implement an optimal strategy to output, as natively as possible under the physical conditions, a target image that meets the target resolution.
For example, the target resolution 1080P Full HD (FHD) is the core and most widely used resolution. Taking 1080P as the core, the crop strategy can be determined in consideration of the maximum resolution or the small field of view. A smaller resolution such as HD 720P can then be obtained by processing, i.e., scaling down, the pixel information of the 1080P cropped frame, yielding a target image that meets the 720P requirement.
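The 1080P-to-720P scale-down mentioned above can be sketched as follows, purely as an assumed nearest-neighbour reduction (the disclosure does not specify which scaling algorithm is used):

```python
def scale_down(frame, target_w, target_h):
    """Nearest-neighbour downscale of a frame (a list of pixel rows) to the
    target size, e.g. a 1080P cropped frame reduced to a 720P output."""
    src_h, src_w = len(frame), len(frame[0])
    return [
        [frame[y * src_h // target_h][x * src_w // target_w]
         for x in range(target_w)]
        for y in range(target_h)
    ]

# tiny 4x4 demo frame reduced to 2x2
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
print(scale_down(frame, 2, 2))  # [[0, 2], [8, 10]]
```

In practice the DSP would use a higher-quality filter, but the mapping from cropped 1080P pixels to a 720P grid is the same.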
Here, different systems may formulate different policies, the policy for each resolution may be assigned individually, and the policy logic may be provided in the driving component of the camera.
In the embodiment of the application, native-pixel "crop" is applied, according to the actual physical conditions at different resolutions, in the small-field-of-view mode or the large-field-of-view mode. Therefore, under different shooting requirements of users, image loss can be effectively reduced while the resolution requirements of users are met.
Based on the foregoing embodiments, the embodiments of the present application provide an image processing apparatus; the apparatus includes modules, each module includes sub-modules, and each sub-module includes units, which may be implemented by a processor in an electronic device, or, of course, by a specific logic circuit. In implementation, the processor may be a central processing unit (Central Processing Unit, CPU), a microprocessor (Microprocessor Unit, MPU), a digital signal processor (Digital Signal Processor, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA), etc.
Fig. 5 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present application, as shown in fig. 5, the apparatus 500 includes:
an obtaining module 510, configured to obtain a mode instruction, where the mode instruction includes a target resolution and a shooting mode;
a determining module 520, configured to determine a target area for acquiring at least one frame of image in the image sensor if the shooting mode is the first mode, where the target area is smaller than an image size of the image acquired by the image sensor;
an obtaining module 530 is configured to process pixel information of the target area of at least one frame based on the target resolution, obtain at least one frame of target image, and display the at least one frame of target image.
In some embodiments, the determining module 520 includes a first determining sub-module and a second determining sub-module, where the first determining sub-module is configured to determine that the shooting mode is a second mode; the second determining submodule is used for determining that at least one frame of the target area is an original area of an image acquired by the image sensor.
In some embodiments, the determining module 520 includes a third determining sub-module and a fourth determining sub-module, where the third determining sub-module is configured to determine that the shooting mode is the first mode; the fourth determining sub-module is configured to determine at least one frame of the target area based on the target resolution.
In some embodiments, the fourth determining submodule includes a first determining unit and a second determining unit, where the first determining unit is configured to determine size information corresponding to a target object in the target image, where the target object is an object determined based on the first mode; the second determining unit is configured to determine the target area for at least one frame based on the size information and the target resolution.
In some embodiments, the second determining unit includes a first determining subunit, a second determining subunit, and a third determining subunit, where the first determining subunit is configured to determine at least one frame first region based on the target resolution; the second determining subunit is configured to determine, based on the size information, that all information of the target object can be acquired in at least one frame of the first area; the third determining subunit is configured to determine the first area as the target area.
In some embodiments, the second determining unit further includes a fourth determining subunit, a fifth determining subunit, and a sixth determining subunit, where the fourth determining subunit is configured to determine, based on the size information, that all information of the target object cannot be acquired within the first area; the fifth determining subunit is configured to determine at least one frame of a second area based on the size information, where the second area is larger than the first area, and the second area is capable of collecting all information of the target object; the sixth determination subunit is configured to determine the second area as the target area.
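The first-region/second-region logic of these determining subunits can be sketched as follows; the rectangle convention (x, y, w, h), the 16:9 default aspect ratio, and the aspect-preserving growth are illustrative assumptions, not part of the claims:

```python
def select_target_region(first_region, face_box, aspect=16 / 9):
    """first_region and face_box are (x, y, w, h) rectangles. If the target
    object's box fits inside the first region, use the first region;
    otherwise grow to a second region (with the assumed aspect ratio)
    large enough to cover all of the target object."""
    fx, fy, fw, fh = first_region
    bx, by, bw, bh = face_box
    if fx <= bx and fy <= by and bx + bw <= fx + fw and by + bh <= fy + fh:
        return first_region  # all information of the target object fits
    # second region: smallest box with the target aspect ratio covering
    # both the first region and the face box, centred on their union
    x0, y0 = min(fx, bx), min(fy, by)
    x1 = max(fx + fw, bx + bw)
    y1 = max(fy + fh, by + bh)
    uw, uh = x1 - x0, y1 - y0
    if uw / uh < aspect:
        uw = round(uh * aspect)
    else:
        uh = round(uw / aspect)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    return (round(cx - uw / 2), round(cy - uh / 2), uw, uh)
```

A real implementation would also clamp the second region to the sensor bounds; that step is omitted here for brevity.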
In some embodiments, the obtaining module 530 includes a fifth determining sub-module and an obtaining sub-module, where the fifth determining sub-module is configured to determine that a resolution corresponding to an image generated based on pixels of the target area is greater than the target resolution; and the obtaining sub-module is configured to process at least one frame of pixels of the target area by pixel combination, the obtained at least one frame of target image meeting the target resolution.
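Pixel combination ("binning") as used by the obtaining sub-module can be sketched as a simple block average; the 2x2 factor and integer averaging are assumptions for illustration:

```python
def bin_pixels(frame, factor=2):
    """factor x factor pixel binning: average each block of neighbouring
    pixels into one output pixel, reducing resolution by `factor` per axis."""
    h, w = len(frame), len(frame[0])
    return [
        [sum(frame[y * factor + dy][x * factor + dx]
             for dy in range(factor) for dx in range(factor)) // (factor * factor)
         for x in range(w // factor)]
        for y in range(h // factor)
    ]

# a 4x4 frame of constant 100 bins down to a 2x2 frame of 100
print(bin_pixels([[100] * 4 for _ in range(4)]))  # [[100, 100], [100, 100]]
```

Binning trades resolution for sensitivity, which is why it is applied only when the target-area image exceeds the target resolution.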
In some embodiments, the apparatus further comprises a redetermining module for acquiring an image magnification instruction, acquiring an updated target resolution based on the image magnification instruction, and redetermining the target region based on the updated target resolution.
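The re-determination on an image-magnification instruction can be sketched as shrinking the read-out window around the sensor centre; the centre-zoom behaviour and the example sizes are assumptions (the disclosure only states that the target area is re-determined from the updated resolution):

```python
def zoom_crop(sensor_w, sensor_h, base_w, base_h, zoom):
    """On an image-magnification instruction, shrink the read-out window by
    `zoom` around the sensor centre; the smaller window is then read out
    natively (or upscaled) to the display resolution."""
    w, h = round(base_w / zoom), round(base_h / zoom)
    x = (sensor_w - w) // 2
    y = (sensor_h - h) // 2
    return (x, y, w, h)

# 2x zoom on a 2592x1944 sensor with a 1920x1080 base window
print(zoom_crop(2592, 1944, 1920, 1080, 2.0))  # (816, 702, 960, 540)
```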
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the method is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied, essentially or in the part contributing to the related art, in the form of a software product stored in a storage medium, including several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, or other media capable of storing program code. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, the present embodiment provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the image processing method provided in the above embodiment.
Correspondingly, an electronic device is provided in the embodiment of the present application. Fig. 6 is a schematic diagram of a hardware entity of the electronic device provided in the embodiment of the present application; as shown in fig. 6, the hardware entity of the device 600 includes a memory 601 and a processor 602, the memory 601 storing a computer program executable on the processor 602, and the processor 602 implementing the steps of the image processing method provided in the above embodiments when the program is executed.
The memory 601 is configured to store instructions and applications executable by the processor 602, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or processed by the processor 602 and the modules in the electronic device 600, which may be implemented by a FLASH memory (FLASH) or a random access memory (Random Access Memory, RAM).
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by program instructions and related hardware; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps including the above method embodiments. The aforementioned storage medium includes: a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, the integrated units described above may be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied essentially or in a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without collision to obtain a new method embodiment.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, the method comprising:
acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode;
if the shooting mode is a first mode, determining a target area for acquiring at least one frame of image in the image sensor, wherein the target area is smaller than the image size of the image acquired by the image sensor;
and processing pixel information of at least one frame of the target area based on the target resolution to obtain and display at least one frame of target image.
2. The method of claim 1, the method further comprising:
determining the shooting mode as a second mode;
and determining at least one frame of the target area as an original area of an image acquired by the image sensor.
3. The method of claim 1, wherein if the photographing mode is the first mode, determining the target area for acquiring at least one frame of image in the image sensor comprises:
determining the shooting mode as a first mode;
at least one frame of the target region is determined based on the target resolution.
4. The method of claim 3, the determining at least one frame of the target region based on the target resolution comprising:
determining size information corresponding to a target object in the target image, wherein the target object is an object determined based on the first mode;
at least one frame of the target region is determined based on the size information and the target resolution.
5. The method of claim 4, the determining at least one frame of the target region based on the size information and the target resolution comprising:
determining at least one frame first region based on the target resolution;
determining that all information of the target object can be acquired in at least one frame of the first area based on the size information;
the first region is determined as the target region.
6. The method of claim 5, the method further comprising:
determining that all information of the target object cannot be acquired in the first area based on the size information;
determining at least one frame of second area based on the size information, wherein the second area is larger than the first area, and the second area can acquire all information of the target object;
and determining the second area as the target area.
7. The method according to any one of claims 1 to 6, wherein said processing pixel information of at least one frame of said target area based on said target resolution, to obtain and display at least one frame of target image, comprises:
determining that a resolution corresponding to an image generated based on pixels of the target region is greater than the target resolution;
and processing pixels of at least one frame of the target area by utilizing pixel merging, wherein the obtained at least one frame of the target image meets the target resolution.
8. The method of any one of claims 1 to 6, further comprising:
and acquiring an image amplification instruction, acquiring updated target resolution based on the image amplification instruction, and re-determining the target area based on the updated target resolution.
9. An image processing apparatus, the apparatus comprising:
an image sensor for acquiring at least one frame of image;
the driving assembly is connected with the digital signal processing assembly and used for acquiring a mode instruction, wherein the mode instruction comprises a target resolution and a shooting mode; transmitting the mode instruction to the digital signal processing component;
the digital signal processing component is connected with the driving component and the image sensor and is used for determining a target area for acquiring at least one frame of image in the image sensor, wherein the target area is smaller than or equal to the image size of the image acquired by the image sensor; processing pixel information of at least one frame of the target area based on the target resolution to obtain at least one frame of target image;
and the display component is used for displaying at least one frame of the target image.
10. An image processing apparatus, the apparatus comprising:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring a mode instruction, and the mode instruction comprises a target resolution and a shooting mode;
the determining module is used for determining a target area for acquiring at least one frame of image in the image sensor if the shooting mode is a first mode, wherein the target area is smaller than the image size of the image acquired by the image sensor;
and the obtaining module is used for processing the pixel information of at least one frame of the target area based on the target resolution, obtaining at least one frame of target image and displaying the target image.
CN202311285349.XA 2023-09-28 2023-09-28 Image processing method, device and equipment Pending CN117376699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311285349.XA CN117376699A (en) 2023-09-28 2023-09-28 Image processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN117376699A true CN117376699A (en) 2024-01-09

Family

ID=89392108



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination