CN113055603A - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN113055603A
Authority
CN
China
Prior art keywords
target
objects
focus
target image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110351548.0A
Other languages
Chinese (zh)
Inventor
李凡智
刘旭国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202110351548.0A priority Critical patent/CN113055603A/en
Publication of CN113055603A publication Critical patent/CN113055603A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method and an electronic device. The image processing method comprises: determining a viewing area corresponding to the electronic device, the viewing area containing a plurality of objects; dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by a ranging device, where each target distance is the distance between an object and the electronic device; and, after a shooting instruction is acquired, generating a corresponding target image based on each focus area. Each target image is generated by focusing on its focus area, and each target image includes all objects in the viewing area. Because the focus areas are divided in advance from the target distances determined by the ranging device and one target image is generated for each focus area, image acquisition efficiency can be improved and waste of shooting resources avoided.

Description

Image processing method and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
Background
Currently, when an image capture apparatus acquires an image, a focusing device performs automatic focusing to form a focus, and the user then presses the shooting key of the apparatus to obtain an image corresponding to that focus.
However, the focus produced by auto-focusing in this process is often not the focus the user intended, so the captured image is not the image the user wanted. The user then has to shoot again to obtain the desired image, which lowers image acquisition efficiency and wastes shooting resources.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method and an electronic device, which can improve image acquisition efficiency and avoid waste of shooting resources.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining a viewing area corresponding to the electronic equipment; a plurality of objects are contained within the viewing area;
dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by a distance measuring device, wherein the target distance is the distance between each object and the electronic equipment;
after a shooting instruction is acquired, generating a corresponding target image based on each focus area; the target images are generated by focusing the focus area, and each target image comprises all objects in the view area.
In a possible embodiment, the dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by the distance measuring device includes:
respectively measuring a target distance between each object and the electronic equipment by using the distance measuring device;
and determining a focus area based on adjacent objects whose target distances fall within the same preset range.
In one possible implementation, the image processing method further includes:
acquiring a user instruction;
and calling a target image corresponding to the user instruction based on the focus area information contained in the user instruction.
In one possible implementation, in a case where the focus area information indicates a part of the objects in the focus area, the image processing method further includes:
acquiring adjacent focus areas of the part of the objects;
screening target images corresponding to focus areas adjacent to the part of objects;
and fitting the target image obtained by screening to obtain a target image corresponding to the part of objects.
In a possible implementation manner, the fitting the target image obtained by screening to obtain a target image corresponding to the partial object includes:
calculating a relative angle between adjacent focus areas of the partial objects;
and fitting the target image obtained by screening based on the relative angle to obtain a target image corresponding to the part of the objects.
In a second aspect, an embodiment of the present application further provides an electronic device, where the electronic device includes:
a determination module configured to: determining a viewing area corresponding to the electronic equipment; a plurality of objects are contained within the viewing area;
a partitioning module configured to: dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by a distance measuring device, wherein the target distance is the distance between each object and the electronic equipment;
a generation module configured to: after a shooting instruction is acquired, generating a corresponding target image based on each focus area; the target images are generated by focusing the focus area, and each target image comprises all objects in the view area.
In a possible implementation, the dividing module is specifically configured to:
respectively measuring a target distance between each object and the electronic equipment by using the distance measuring device;
and determining a focus area based on adjacent objects whose target distances fall within the same preset range.
In one possible implementation, the electronic device further includes a retrieval module configured to:
acquiring a user instruction;
and calling a target image corresponding to the user instruction based on the focus area information contained in the user instruction.
In a possible implementation, in a case that the focus area information indicates a part of the objects in the focus area, the electronic device further includes a fitting module that includes:
an acquisition unit configured to: acquiring adjacent focus areas of the part of the objects;
a screening unit configured to: screening target images corresponding to focus areas adjacent to the part of objects;
a fitting unit configured to: and fitting the target image obtained by screening to obtain a target image corresponding to the part of objects.
In a possible implementation, the fitting unit is specifically configured to:
calculating a relative angle between adjacent focus areas of the partial objects;
and fitting the target image obtained by screening based on the relative angle to obtain a target image corresponding to the part of the objects.
In the image processing method of the embodiments of the application, the plurality of focus areas are divided in advance based on the plurality of target distances determined by the ranging device, and one target image is generated for each focus area. This solves the problem in the prior art that the single acquired image may not be the image the user wants, requiring a reshoot and lowering image acquisition efficiency; image acquisition efficiency can thus be improved and waste of shooting resources avoided.
Drawings
In order to more clearly illustrate the technical solutions of the present application and the prior art, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of an image processing method provided by the present application;
fig. 2 is a flowchart illustrating a method for dividing a viewing area into a plurality of focus areas based on a plurality of target distances determined by a distance measuring device in an image processing method provided by the present application;
FIG. 3 shows a flow chart of another image processing method provided by the present application;
FIG. 4 shows a flow chart of another image processing method provided by the present application;
fig. 5 shows a flowchart of fitting a target image obtained by screening to obtain a target image corresponding to a part of objects in an image processing method provided by the present application;
fig. 6 shows a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person skilled in the art will certainly be able to realize many other equivalent forms of the application having the characteristics set forth in the claims, all of which therefore fall within the scope of protection defined thereby.
The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.
The image processing method provided by the embodiments of the application can improve image acquisition efficiency and avoid waste of shooting resources. To aid understanding, a detailed description of the image processing method provided in the present application is given first.
In practical applications, the execution subject of the image processing method in the embodiments of the present application may be a server, a processor, or the like; for ease of illustration, a processor is taken as the example hereinafter. Fig. 1 shows a flowchart of an image processing method provided in an embodiment of the present application; the specific steps include:
s101, determining a view area corresponding to the electronic equipment; the viewing area contains a plurality of objects.
In a specific implementation, the electronic device is a device capable of capturing images, such as a camera, a mobile phone, and so on.
The viewing area corresponding to the electronic device can be adjusted through the viewfinder, that is, the captured range can be adjusted. For example, when the device is far from the subject, the viewing area can be narrowed so that the subject appears larger in the captured image; when the subject cannot be fully captured at close range, the viewing area can be widened so that the subject is fully contained in the captured image.
The viewing area contains a plurality of objects. Here, an object may be determined based on a preset shooting unit, for example treating everything within one square meter as one object; an object may also be determined by a pre-trained recognition model, which can identify different objects such as people, buildings, and plants.
S102, dividing a viewing area into a plurality of focus areas based on a plurality of target distances determined by the distance measuring device, wherein the target distance is the distance between each object and the electronic equipment;
here, the distance measuring device is a laser radar, and the distance measuring device is mounted on the electronic apparatus. Of course, the distance measuring device may also be an ultrasonic distance meter or the like as long as the distance can be measured.
In specific implementation, a distance measuring device on the electronic device is used for measuring a distance between each object and the electronic device, so as to obtain a target distance corresponding to each object. After the plurality of target distances are obtained, the viewing area is divided into a plurality of focus areas based on the plurality of target distances.
Specifically, fig. 2 shows the method steps of dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by the ranging apparatus, and specifically includes S201 and S202.
S201, measuring the target distance between each object and the electronic equipment by using a distance measuring device.
S202, a focus area is determined based on adjacent objects whose target distances fall within the same preset range.
In determining the plurality of target distances, the distance between each object and the electronic device is measured with the ranging device, yielding one target distance per object.
The plurality of target distances are then partitioned according to a plurality of preset ranges; specifically, all target distances falling within the same preset range are treated as one group. For each group, it is then determined whether the objects corresponding to the target distances in the group are adjacent to one another, and the area corresponding to the adjacent objects is determined as a focus area.
For example, suppose the preset ranges for the distance between an object and the electronic device are 0 to 10 meters, 10 to 20 meters, 20 to 30 meters, and greater than 30 meters. After determining that 10 target distances fall within the 10 to 20 meter range, the coordinate information of the object corresponding to each of those target distances is determined, whether those 10 objects are adjacent is judged based on the coordinate information, and the area corresponding to the adjacent objects is taken as a focus area.
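As a concrete illustration of S201 and S202, the bucketing-and-adjacency logic above might be sketched as follows. This is a minimal sketch, not the patented implementation: the `Obj` type, the `RANGES` constant, and the `are_adjacent` threshold are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    x: float           # viewfinder coordinate information
    y: float
    distance_m: float  # target distance measured by the ranging device

# Preset ranges taken from the example above: 0-10 m, 10-20 m, 20-30 m, >30 m.
RANGES = [(0.0, 10.0), (10.0, 20.0), (20.0, 30.0), (30.0, float("inf"))]

def bucket_by_range(objs):
    """Group objects whose target distances fall into the same preset range."""
    groups = {r: [] for r in RANGES}
    for o in objs:
        for lo, hi in RANGES:
            if lo <= o.distance_m < hi:
                groups[(lo, hi)].append(o)
                break
    return groups

def are_adjacent(a, b, max_gap=1.0):
    """Toy adjacency test on viewfinder coordinates (illustrative threshold)."""
    return abs(a.x - b.x) <= max_gap and abs(a.y - b.y) <= max_gap

def focus_areas(objs):
    """One focus area per cluster of adjacent objects within one range group."""
    areas = []
    for members in bucket_by_range(objs).values():
        clusters = []
        for o in members:
            for c in clusters:
                if any(are_adjacent(o, m) for m in c):
                    c.append(o)
                    break
            else:
                clusters.append([o])
        areas.extend(clusters)
    return areas
```

The application leaves the adjacency criterion open, so any spatial test on the objects' coordinate information could be substituted for `are_adjacent`.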
S103, after the shooting instruction is acquired, generating a corresponding target image based on each focus area; the target images are generated by focusing the focus area, and each target image comprises all objects in the view area.
In a specific implementation, the operation of the user pressing a shooting key of the electronic device or clicking a virtual key on the touch screen of the electronic device is acquired, and a shooting instruction is generated in response to that operation.
After the shooting instruction is acquired, a target image corresponding to each focus area is generated. Specifically, each focus area is focused on to obtain a target image that includes all objects in the viewing area, with the focus area rendered more sharply than the other areas. In this way, one target image is obtained per focus area, that is, a plurality of target images, and each focus area is then stored in association with its corresponding target image.
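The capture step S103 can be pictured as a loop that focuses once per focus area and stores each focus area in association with its target image. This is a hedged sketch: `camera.focus_and_capture` is a hypothetical driver call, not an API named in the application.

```python
def capture_all(camera, focus_areas):
    """On a shutter event, focus once per focus area and store each focus
    area in association with the target image generated from it."""
    shots = {}
    for idx, area in enumerate(focus_areas):
        image = camera.focus_and_capture(area)  # hypothetical driver call
        shots[idx] = {"area": area, "image": image}
    return shots
```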
According to the embodiment of the application, the plurality of focus areas are divided in advance based on the plurality of target distances determined by the ranging device, and one target image is generated for each focus area. This solves the problem in the prior art that the single acquired image may not be the image the user wants, requiring a reshoot and lowering image acquisition efficiency; image acquisition efficiency can thus be improved and waste of shooting resources avoided.
Further, after obtaining a plurality of target images, the electronic device may display one target image according to a preset display rule or randomly, and when the user determines that the currently displayed target image is not the desired target image, the user may call the desired target image by using the method steps shown in fig. 3, specifically including S301 and S302.
S301, acquiring a user instruction.
S302, based on the focus area information contained in the user instruction, the target image corresponding to the user instruction is called.
In a specific implementation, a calling operation interface for calling up target images may be provided in the electronic device and displayed on its screen in response to a preset operation, for example clicking a virtual key of the calling operation interface via an input/output device (such as a mouse, keyboard, or touch screen); this is not specifically limited in the embodiments of the application.
Here, the calling operation interface includes the focus area information of each focus area, such as coordinate information and the names of the objects it contains. After the electronic device displays the calling operation interface, the user may determine the desired target image according to his or her needs and control the electronic device to generate a user instruction based on the focus area corresponding to the desired target image, for example by clicking the information of the corresponding focus area, or by clicking a selected area in the currently displayed target image.
After the user instruction is acquired, the focus area information contained in it is extracted, the target image corresponding to the user instruction is called up based on that focus area information, and the target image is displayed.
When a user instruction is generated by clicking an area in the currently displayed target image, the clicked selection area may be smaller than a focus area. In that case, the focus area into which the selection area falls is determined based on the coordinate information of the clicked selection area, and the user instruction is generated based on that focus area.
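The call-up path of S301 and S302, including the mapping of a clicked selection area to the focus area it falls into, might look like the following sketch. The rectangular `bounds` representation of a focus area and the `shots` dictionary layout are assumptions made for illustration.

```python
def area_containing(point, shots):
    """Return the index of the focus area whose bounds contain the clicked
    point, or None; bounds are assumed rectangular (x0, y0, x1, y1)."""
    px, py = point
    for idx, entry in shots.items():
        x0, y0, x1, y1 = entry["area"]["bounds"]
        if x0 <= px <= x1 and y0 <= py <= y1:
            return idx
    return None

def retrieve(point, shots):
    """Call up the stored target image associated with the clicked area."""
    idx = area_containing(point, shots)
    return shots[idx]["image"] if idx is not None else None
```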
Here, the user may also designate a sub-region, for example by double-clicking or by pressing for a preset duration, where the area of the sub-region is smaller than that of a focus area. The focus area information contained in the user instruction generated from this operation then indicates only part of the objects in the focus area, so the embodiment of the present application further provides the method steps in Fig. 4 to obtain a target image corresponding to those partial objects, specifically S401 to S403.
S401, acquiring a focus area adjacent to a part of objects.
S402, screening target images corresponding to the adjacent focus areas of the partial objects.
And S403, fitting the screened target images to obtain target images corresponding to part of the objects.
After the partial objects in the focus area indicated by the focus area information are determined, the focus area to which each such object belongs is looked up. If the partial objects all belong to the same focus area, at least one focus area closest to it is also found, and the focus area containing the partial objects together with that at least one closest focus area are taken as the adjacent focus areas.
From the plurality of target images, the target images corresponding to the adjacent focus areas of the partial objects are screened out, and the screened target images are fitted to obtain the target image corresponding to the partial objects.
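The acquisition and screening steps S401 and S402 could be sketched as follows, assuming each focus area is summarized by a center point. The nearest-neighbor criterion (Manhattan distance between center points) is an illustrative choice; the application does not fix how the closest focus area is found.

```python
def nearest_neighbors(home_idx, centers, k=1):
    """Indices of the k focus areas whose center points are closest to the
    home area's center (Manhattan distance, an illustrative choice)."""
    hx, hy = centers[home_idx]
    others = sorted(
        (abs(cx - hx) + abs(cy - hy), idx)
        for idx, (cx, cy) in centers.items()
        if idx != home_idx
    )
    return [idx for _, idx in others[:k]]

def screen_images(home_idx, centers, shots, k=1):
    """Screen out the target images stored for the home focus area and its
    nearest neighbor area(s), ready to be fitted together."""
    picked = [home_idx] + nearest_neighbors(home_idx, centers, k)
    return [shots[i]["image"] for i in picked]
```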
Further, S501 and S502 in fig. 5 show a specific step of fitting the target images obtained by screening to obtain target images corresponding to a part of the objects.
S501, calculating relative angles between adjacent focus areas of partial objects.
And S502, fitting the screened target images based on the relative angles to obtain target images corresponding to partial objects.
In a specific implementation, the relative angle between the adjacent focus areas of the partial objects may be determined based on the center point of each focus area, and the screened target images may then be fitted based on that relative angle to obtain the target image corresponding to the partial objects. Of course, the relative angle between the adjacent focus areas may also be determined based on other reference points.
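One possible reading of S501 takes the relative angle as the bearing between the center points of two adjacent focus areas, as in the small helper below. The center-point convention follows the paragraph above, while the use of `atan2` and degrees is an assumption for the sketch.

```python
import math

def relative_angle(center_a, center_b):
    """Bearing, in degrees, from the center point of one focus area to the
    center point of an adjacent focus area."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    return math.degrees(math.atan2(dy, dx))
```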
A fitting model may also be established in advance, taking as input the target images corresponding to the focus areas adjacent to the partial objects and producing as output the target image corresponding to the partial objects, which can improve fitting efficiency. Of course, the embodiment of the present application is not specifically limited in this respect.
Based on the same inventive concept, the second aspect of the present application further provides an electronic device corresponding to the image processing method, and since the principle of solving the problem of the electronic device in the present application is similar to that of the image processing method in the present application, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not repeated.
Fig. 6 shows a schematic diagram of an electronic device provided in an embodiment of the present application, which specifically includes:
a determination module 601 configured to: determining a viewing area corresponding to the electronic equipment; a plurality of objects are contained within the viewing area;
a partitioning module 602 configured to: dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by a distance measuring device, wherein the target distance is the distance between each object and the electronic equipment;
a generating module 603 configured to: after a shooting instruction is acquired, generating a corresponding target image based on each focus area; the target images are generated by focusing the focus area, and each target image comprises all objects in the view area.
In another embodiment, the dividing module 602 is specifically configured to:
respectively measuring a target distance between each object and the electronic equipment by using the distance measuring device;
and determining a focus area based on adjacent objects whose target distances fall within the same preset range.
In yet another embodiment, the electronic device further comprises a retrieval module 604 configured to:
acquiring a user instruction;
and calling a target image corresponding to the user instruction based on the focus area information contained in the user instruction.
In yet another embodiment, in case the focus area information indicates a part of the objects in the focus area, the electronic device further comprises a fitting module 605 comprising:
an acquisition unit configured to: acquiring adjacent focus areas of the part of the objects;
a screening unit configured to: screening target images corresponding to focus areas adjacent to the part of objects;
a fitting unit configured to: and fitting the target image obtained by screening to obtain a target image corresponding to the part of objects.
In a further embodiment, the fitting unit is specifically configured to:
calculating a relative angle between adjacent focus areas of the partial objects;
and fitting the target image obtained by screening based on the relative angle to obtain a target image corresponding to the part of the objects.
According to the embodiment of the application, the plurality of focus areas are divided in advance based on the plurality of target distances determined by the ranging device, and one target image is generated for each focus area. This solves the problem in the prior art that the single acquired image may not be the image the user wants, requiring a reshoot and lowering image acquisition efficiency; image acquisition efficiency can thus be improved and waste of shooting resources avoided.
The storage medium is a computer-readable medium storing a computer program which, when executed by a processor, implements the method provided in any embodiment of the present application, including the following steps S11 to S13:
s11, determining a corresponding viewing area of the electronic equipment; a plurality of objects are contained within the viewing area;
s12, dividing the view area into a plurality of focus areas based on a plurality of target distances determined by the distance measuring device, wherein the target distance is the distance between each object and the electronic equipment;
s13, after the shooting instruction is acquired, generating a corresponding target image based on each focus area; the target images are generated by focusing the focus area, and each target image comprises all objects in the view area.
When the computer program is executed by the processor to divide the viewing area into a plurality of focus areas based on the plurality of target distances determined by the ranging device, the processor specifically executes the following steps: measuring the target distance between each object and the electronic device with the ranging device; and determining a focus area based on adjacent objects whose target distances fall within the same preset range.
When the computer program is executed by the processor to perform the image processing method, the processor further executes the following steps: acquiring a user instruction; and calling a target image corresponding to the user instruction based on the focus area information contained in the user instruction.
In a case where the focus area information indicates a part of the objects in the focus area, the computer program, when being executed by the processor, further executes the steps of: acquiring adjacent focus areas of the part of the objects; screening target images corresponding to focus areas adjacent to the part of objects; and fitting the target image obtained by screening to obtain a target image corresponding to the part of objects.
When the computer program is executed by the processor to fit the target image obtained by screening to obtain the target image corresponding to the partial object, the following steps are specifically executed by the processor: calculating a relative angle between adjacent focus areas of the partial objects; and fitting the target image obtained by screening based on the relative angle to obtain a target image corresponding to the part of the objects.
According to the embodiment of the application, the plurality of focus areas are divided in advance based on the plurality of target distances determined by the ranging device, and one target image is generated for each focus area. This solves the problem in the prior art that the single acquired image may not be the image the user wants, requiring a reshoot and lowering image acquisition efficiency; image acquisition efficiency can thus be improved and waste of shooting resources avoided.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Optionally, in this embodiment, the processor executes the method steps described in the above embodiments according to the program code stored in the storage medium. For specific examples, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented as program code executable by a computing device, stored in a storage device and executed by the computing device; in some cases the steps shown or described may be performed in an order different from that described here, or they may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
Moreover, although exemplary embodiments have been described herein, the scope of the application includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations, or alterations based on the present application. The elements of the claims are to be interpreted broadly based on the language employed in the claims, and are not limited to the examples described in the present specification or during prosecution of the application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with the true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other, and other embodiments will be apparent to those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the application. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim; rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments may be combined with each other in various combinations or permutations. The scope of the application should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The embodiments of the present application have been described in detail above, but the present application is not limited to these specific embodiments. Those skilled in the art can make various modifications and variations based on the concept of the present application, and such modifications and variations shall fall within the scope of the present application.

Claims (10)

1. An image processing method comprising:
determining a viewing area corresponding to the electronic equipment, wherein a plurality of objects are contained within the viewing area;
dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by a distance measuring device, wherein each target distance is the distance between a respective object and the electronic equipment;
after a shooting instruction is acquired, generating a corresponding target image based on each focus area, wherein each target image is generated by focusing on the corresponding focus area, and each target image comprises all objects in the viewing area.
2. The image processing method according to claim 1, wherein the dividing the viewing area into a plurality of focus areas based on a plurality of target distances determined by the distance measuring device comprises:
respectively measuring a target distance between each object and the electronic equipment by using the distance measuring device;
and determining a focus area based on adjacent objects whose target distances fall within the same preset range.
3. The image processing method according to claim 1, further comprising:
acquiring a user instruction;
and calling a target image corresponding to the user instruction based on focus area information contained in the user instruction.
4. The image processing method according to claim 3, wherein, in a case where the focus area information indicates partial objects in a focus area, the image processing method further comprises:
acquiring focus areas adjacent to the partial objects;
screening target images corresponding to the focus areas adjacent to the partial objects;
and fitting the screened target images to obtain a target image corresponding to the partial objects.
5. The image processing method according to claim 4, wherein the fitting the screened target images to obtain the target image corresponding to the partial objects comprises:
calculating a relative angle between the focus areas adjacent to the partial objects;
and fitting the screened target images based on the relative angle to obtain the target image corresponding to the partial objects.
6. An electronic device, comprising:
a determination module configured to determine a viewing area corresponding to the electronic device, wherein a plurality of objects are contained within the viewing area;
a partitioning module configured to divide the viewing area into a plurality of focus areas based on a plurality of target distances determined by a distance measuring device, wherein each target distance is the distance between a respective object and the electronic device;
a generation module configured to generate, after a shooting instruction is acquired, a corresponding target image based on each focus area, wherein each target image is generated by focusing on the corresponding focus area, and each target image comprises all objects in the viewing area.
7. The electronic device of claim 6, wherein the partitioning module is specifically configured to:
measure, with the distance measuring device, a target distance between each object and the electronic device;
and determine a focus area based on adjacent objects whose target distances fall within the same preset range.
8. The electronic device of claim 6, further comprising a recall module configured to:
acquire a user instruction;
and call a target image corresponding to the user instruction based on focus area information contained in the user instruction.
9. The electronic device of claim 8, wherein, in a case where the focus area information indicates partial objects in a focus area, the electronic device further comprises a fitting module comprising:
an acquisition unit configured to acquire focus areas adjacent to the partial objects;
a screening unit configured to screen target images corresponding to the focus areas adjacent to the partial objects;
a fitting unit configured to fit the screened target images to obtain a target image corresponding to the partial objects.
10. The electronic device of claim 9, wherein the fitting unit is specifically configured to:
calculate a relative angle between the focus areas adjacent to the partial objects;
and fit the screened target images based on the relative angle to obtain the target image corresponding to the partial objects.
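The retrieval-and-fitting flow of claims 3 to 5 can be sketched roughly as follows. This is illustrative only: focus-area geometry is reduced to 2-D center points, adjacency to a distance threshold, and "fitting" to collecting each screened image with the relative angle that would drive the real stitch; every name here is hypothetical, not the patented implementation.

```python
import math

# Hypothetical sketch of claims 3-5: given partial objects in one focus
# area, gather adjacent areas, screen their target images, and pair each
# with the relative angle used for fitting.

def relative_angle(center_a, center_b):
    """Angle (radians) of the line joining two focus-area centers."""
    dx, dy = center_b[0] - center_a[0], center_b[1] - center_a[1]
    return math.atan2(dy, dx)

def fit_partial_object_image(partial_area, areas, area_images, radius=2.0):
    """areas: {area_id: (x, y) center}; area_images: {area_id: image payload}.
    Returns {neighbor_id: (image, relative_angle)} for the screened images."""
    cx, cy = areas[partial_area]
    # 1. acquire focus areas adjacent to the partial objects (within radius)
    neighbors = [a for a, (x, y) in areas.items()
                 if a != partial_area and math.hypot(x - cx, y - cy) <= radius]
    # 2. screen the target images corresponding to those neighbor areas
    screened = {a: area_images[a] for a in neighbors if a in area_images}
    # 3. "fit" based on the relative angle between the adjacent areas
    return {a: (img, relative_angle(areas[partial_area], areas[a]))
            for a, img in screened.items()}

areas = {"A": (0, 0), "B": (1, 0), "C": (5, 5)}
images = {"A": "img_A", "B": "img_B", "C": "img_C"}
result = fit_partial_object_image("A", areas, images)
print(sorted(result))  # ['B']  (C lies outside the adjacency radius)
```

A real implementation would replace step 3 with an actual image-warping and blending stage; the point of the sketch is only the acquire / screen / angle-based-fit ordering of the claims.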
CN202110351548.0A 2021-03-31 2021-03-31 Image processing method and electronic equipment Pending CN113055603A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110351548.0A CN113055603A (en) 2021-03-31 2021-03-31 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110351548.0A CN113055603A (en) 2021-03-31 2021-03-31 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN113055603A (en) 2021-06-29

Family

ID=76516823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110351548.0A Pending CN113055603A (en) 2021-03-31 2021-03-31 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113055603A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092916A (en) * 2021-11-26 2022-02-25 阿波罗智联(北京)科技有限公司 Image processing method, image processing device, electronic apparatus, autonomous vehicle, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243828A (en) * 2014-09-24 2014-12-24 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for shooting pictures
CN108040206A (en) * 2017-12-18 2018-05-15 信利光电股份有限公司 A kind of method, apparatus and equipment focused again using depth camera equipment
CN111866378A (en) * 2020-06-30 2020-10-30 维沃移动通信有限公司 Image processing method, apparatus, device and medium
CN112135034A (en) * 2019-06-24 2020-12-25 Oppo广东移动通信有限公司 Photographing method and device based on ultrasonic waves, electronic equipment and storage medium
WO2021013009A1 (en) * 2019-07-19 2021-01-28 维沃移动通信有限公司 Photographing method and terminal device
CN112529951A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Method and device for acquiring extended depth of field image and electronic equipment



Similar Documents

Publication Publication Date Title
US8497920B2 (en) Method, apparatus, and computer program product for presenting burst images
CN108961157B (en) Picture processing method, picture processing device and terminal equipment
CN108668086B (en) Automatic focusing method and device, storage medium and terminal
CN108632536B (en) Camera control method and device, terminal and storage medium
CN111859020B (en) Recommendation method, recommendation device, electronic equipment and computer readable storage medium
CN106687991A (en) System and method for setting focus of digital image based on social relationship
CN108762740B (en) Page data generation method and device and electronic equipment
CN109447186A (en) Clustering method and Related product
CN108200335A (en) Photographic method, terminal and computer readable storage medium based on dual camera
CN105959593B (en) A kind of exposure method and photographing device of photographing device
CN106791809B (en) A kind of light measuring method and mobile terminal
CN110505397B (en) Camera selection method, device and computer storage medium
CN113055603A (en) Image processing method and electronic equipment
CN109191544A (en) A kind of paster present methods of exhibiting, device, electronic equipment and storage medium
CN108769538B (en) Automatic focusing method and device, storage medium and terminal
US9451155B2 (en) Depth-segmenting peak tracking autofocus
CN113866782A (en) Image processing method and device and electronic equipment
CN107124547B (en) Double-camera shooting method and device
CN110705480B (en) Target object stop point positioning method and related device
CN112995765A (en) Network resource display method and device
US10986394B2 (en) Camera system
CN109547678B (en) Processing method, device, equipment and readable storage medium
CN112887606B (en) Shooting method and device and electronic equipment
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
US20140108405A1 (en) User-specified image grouping systems and methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210629