CN111212226A - Focusing shooting method and device - Google Patents

Focusing shooting method and device

Info

Publication number
CN111212226A
Authority
CN
China
Prior art keywords
user
area
target
focusing
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010028389.6A
Other languages
Chinese (zh)
Inventor
贾玉虎 (Jia Yuhu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010028389.6A priority Critical patent/CN111212226A/en
Publication of CN111212226A publication Critical patent/CN111212226A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present application provides a focusing shooting method and apparatus. The method includes: extracting user features from a preview image during the photographing preview stage, and matching the user features against preset user features; judging whether a successfully matched target user feature exists; if the target user feature exists, determining a target focusing area according to the user area corresponding to the target user feature; and focusing and capturing an image according to the target focusing area. In this way, when the user shoots, an image is captured automatically with the subject the user cares about as the focusing subject, which improves shooting efficiency and solves the prior-art problem of low shooting efficiency caused by the user having to manually touch the screen to focus multiple times.

Description

Focusing shooting method and device
Technical Field
The present application relates to the field of photographing technologies, and in particular, to a focusing photographing method and apparatus.
Background
At present, focusing on an electronic device during photographing mainly relies on the user manually touching the screen. For example, in a photographing scene, the user touches the area of the preview image where the person of interest is located to determine the focusing subject.
However, this focusing method in the related art requires manual triggering by the user. As shown in fig. 1, when the current preview image contains multiple people, the user has to touch the preview image repeatedly to actively determine the focusing subject, which results in low shooting efficiency.
Disclosure of Invention
The present application aims to solve, at least to some extent, the technical problem in the prior art that the user needs to manually touch the screen to focus multiple times, which results in low shooting efficiency.
To this end, a first objective of the present application is to provide a focusing shooting method that automatically captures an image with the subject the user cares about as the focusing subject, thereby improving shooting efficiency.
A second objective of the present application is to provide a focusing shooting device.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to propose a non-transitory computer-readable storage medium.
To achieve the above objectives, an embodiment of a first aspect of the present application provides a focusing shooting method, including the following steps: extracting user features from a preview image during the photographing preview stage, and matching the user features against preset user features; judging whether a successfully matched target user feature exists; if the target user feature exists, determining a target focusing area according to the user area corresponding to the target user feature; and focusing and capturing an image according to the target focusing area.
An embodiment of a second aspect of the present application provides a focusing shooting apparatus, including: a matching module, configured to extract user features from the preview image during the photographing preview stage and match them against preset user features; a judging module, configured to judge whether a successfully matched target user feature exists; a determining module, configured to determine a target focusing area according to the user area corresponding to the target user feature when the target user feature exists; and a shooting module, configured to focus and capture an image according to the target focusing area.
An embodiment of a third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the focus shooting method according to the embodiment of the first aspect.
A fourth aspect of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the focus shooting method according to the first aspect of the present application.
The technical solution provided by the present application has at least the following beneficial effects:
user features are extracted from the preview image during the photographing preview stage and matched against preset user features; whether a successfully matched target user feature exists is judged; if it exists, a target focusing area is determined according to the user area corresponding to the target user feature; and finally an image is focused and captured according to the target focusing area. In this way, when the user shoots, an image can be captured automatically with the subject the user cares about as the focusing subject, improving shooting efficiency.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic illustration of a preview image according to the prior art;
FIG. 2 is a schematic illustration of a preview image taken in focus according to one embodiment of the present application;
fig. 3 is a schematic flowchart of a focusing shooting method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another focusing shooting method according to an embodiment of the present application;
FIG. 5-1 is a schematic view of a focusing scene according to an embodiment of the present application;
FIG. 5-2 is a schematic view of another focusing scene according to an embodiment of the present application; and
fig. 6 is a schematic structural diagram of a focusing shooting apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The focus shooting method and apparatus of the embodiments of the present application are described below with reference to the drawings.
To solve the technical problem mentioned in the background, namely low shooting efficiency caused by manual touch focusing, the present application provides a method that automatically identifies the focusing subject the user wants to focus on and captures an image focused on that subject. That is, as shown in fig. 2, when taking a picture, the "friend" the user wants to focus on can be identified immediately, so that the picture can be taken quickly; this auto-focusing mode is especially effective when taking travel photos at a scenic spot.
Specifically, fig. 3 is a schematic flowchart of a focusing shooting method according to an embodiment of the present application. As shown in fig. 3, the method includes:
step 101, extracting user characteristics in a preview image in a photographing preview stage, and matching the user characteristics with preset user characteristics.
It can be understood that, in an actual photographing scene, a photographing preview image is displayed in the viewfinder; this is the preview image referred to in the embodiments of the present application.
In the embodiments of the present application, the user features in the preview image are extracted and matched against the preset user features, so that the focusing subject can be determined according to the matching result.
As one possible implementation, the user features may be extracted in advance from photos, taken at multiple angles, of the person the user wants to focus on (such as the user himself or herself, a friend, or a family member), and a deep learning model may be trained on these features; the deep learning model may be a person re-identification (ReID) network. A user region is then extracted from the preview image using techniques such as subject recognition and input into the ReID network, and whether the user feature matches the preset user feature is determined according to the output of the network.
As another possible implementation, the corresponding user features may be extracted in advance, as preset user features, from photos of the person to be focused on (such as the user himself or herself, a friend, or a family member) taken at multiple angles; a user region is then extracted from the preview image using techniques such as subject recognition, the user features of that region are extracted, the distance between the extracted features and the preset user features is calculated, and whether they match is determined according to the distance value.
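As an illustration of this distance-based matching, the following is a minimal sketch in Python. The embedding source, the use of cosine distance, and the threshold value are all assumptions for illustration; the application does not specify a particular model, metric, or threshold.

```python
import numpy as np
from typing import Dict, Optional, Tuple

MATCH_THRESHOLD = 0.35  # assumed cosine-distance threshold; not specified in the application

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two feature vectors (0 means identical direction)."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_user_region(region_feature: np.ndarray,
                      preset_features: Dict[str, np.ndarray]
                      ) -> Tuple[Optional[str], Optional[float]]:
    """Compare the feature of one user region against all preset user features.

    Returns the best-matching preset identity and its distance, or (None, None)
    if no preset feature is closer than the threshold (i.e. no successful match).
    """
    best_name, best_dist = None, float("inf")
    for name, preset in preset_features.items():
        dist = cosine_distance(region_feature, preset)
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist <= MATCH_THRESHOLD:
        return best_name, best_dist
    return None, None
```

The same comparison applies whether the features come from a ReID network (the first implementation above) or from a separately extracted feature vector (the second implementation).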
Of course, in an embodiment of the present application, the preset user feature may also be a user feature determined from the current shooting scene. For example, when travelling with a tour group, the corresponding preset user feature is the feature of people wearing the tour group's clothing; in a beauty shooting scene, the corresponding preset user feature is a female user feature.
In an embodiment of the present application, the user features are face features: a face sub-region in the preview image is identified using face recognition, the face features in the face sub-region are extracted, and these face features are matched against preset face features to determine whether the preview image contains a focusing subject the user is interested in.
In actual execution, because the area of the face sub-region may be too small or for other reasons, detecting the focusing subject the user is interested in based on face features alone may give low detection accuracy; therefore, a human body sub-region may also be detected and its features matched as a supplement.
In this embodiment, after the face features are matched against the preset face features, if a successfully matched face feature is found, detection of the human body sub-region is not performed, which improves the efficiency of determining the focusing subject.
Of course, as shown in fig. 4, in an embodiment of the present application, human body detection and face detection may be performed on the preview image simultaneously, and the face sub-region and the human body sub-region are cropped out; the face sub-region is input into a pre-constructed deep learning model to obtain face features, and the human body sub-region is input into the pre-constructed deep learning model to obtain human body features. The face features are compared first, and only when the face features fail to match the preset face features are the human body features of the human body region compared.
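The face-first, body-fallback flow described above and shown in fig. 4 could be organised as in the following sketch. The detectors and feature matchers are passed in as callables because the application does not name any specific detection model; all parameter names here are hypothetical.

```python
from typing import Callable, Dict, Optional, Sequence, Tuple

Region = Tuple[int, int, int, int]  # (x, y, w, h) bounding box

def find_target_region(
    face_regions: Sequence[Region],
    body_regions: Sequence[Region],
    face_matcher: Callable[[Region], Optional[str]],
    body_matcher: Callable[[Region], Optional[str]],
) -> Optional[Dict]:
    """Face features are compared first; body features are compared only if
    no face feature matches a preset face feature."""
    # 1. Compare face features against the preset face features.
    for region in face_regions:
        identity = face_matcher(region)
        if identity is not None:
            return {"region": region, "identity": identity, "matched_by": "face"}

    # 2. Face matching failed: fall back to the human body features.
    for region in body_regions:
        identity = body_matcher(region)
        if identity is not None:
            return {"region": region, "identity": identity, "matched_by": "body"}

    return None  # no successfully matched target user feature
```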
Step 102, judging whether a successfully matched target user feature exists.
Specifically, when the degree of matching between a user feature and a preset user feature is greater than a certain threshold, it may be determined that a successfully matched target user feature exists.
In an embodiment of the present application, if no target user feature exists, the position of each face area in the current preview image may be determined, and the face area closest to the center of the preview image is used as the focusing area.
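A minimal sketch of this fallback behaviour, assuming each face area is given as an (x, y, w, h) bounding box; the choice of Euclidean distance from the box centre to the image centre is an assumption:

```python
import math
from typing import Optional, Sequence, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

def fallback_center_face(face_boxes: Sequence[Box],
                         image_width: int,
                         image_height: int) -> Optional[Box]:
    """When no preset user feature matches, use the face area closest to the
    centre of the preview image as the focusing area."""
    if not face_boxes:
        return None
    cx, cy = image_width / 2.0, image_height / 2.0

    def distance_to_center(box: Box) -> float:
        x, y, w, h = box
        return math.hypot(x + w / 2.0 - cx, y + h / 2.0 - cy)

    return min(face_boxes, key=distance_to_center)
```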
Step 103, if the target user feature exists, determining a target focusing area according to the user area corresponding to the target user feature.
Specifically, if the target user feature exists, the target focusing area is determined according to the user area corresponding to the target user feature, ensuring that the user area containing the target user feature is focused on as the focusing subject and improving the photographing experience.
It should be noted that the manner of determining the target focusing area according to the user area corresponding to the target user feature differs in different application scenarios; examples are given below:
example one:
In this example, the user sets in advance user levels corresponding to different preset user features; for example, the user level of a family member's features is higher than that of a friend's features, so that when the preview image includes both a family member and a friend, the user area where the family member is located becomes the focusing area.
Specifically, it is determined whether there are multiple user areas corresponding to target user features. If not, the single user area corresponding to the target user feature is used as the focusing area. If there are multiple, a preset database, which stores the correspondence between user features and user levels, is queried according to the target user features to obtain the user level corresponding to each user area, and the user area with the highest user level is determined as the target focusing area, as sketched below.
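A minimal sketch of this level-based selection, assuming the preset database is simply an in-memory mapping from identity to an integer user level (higher means higher priority); the application does not specify how the database is stored:

```python
from typing import Optional, Sequence, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

# Assumed preset "database": identity -> user level (higher = higher priority).
PRESET_USER_LEVELS = {"family": 2, "friend": 1}

def pick_by_user_level(matched_areas: Sequence[Tuple[Box, str]]) -> Optional[Box]:
    """Each element is (user_area, identity). If only one area matched, return it;
    otherwise return the area whose identity has the highest user level."""
    if not matched_areas:
        return None
    if len(matched_areas) == 1:
        return matched_areas[0][0]
    box, _ = max(matched_areas, key=lambda item: PRESET_USER_LEVELS.get(item[1], 0))
    return box
```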
Example two:
In this example, it is determined whether there are multiple user regions corresponding to target user features. If there are multiple, it is assumed that the person the photographer usually wants to focus on stands closer to the camera, so the focusing region can be determined from the size of each user region: the area of each user region is obtained, for example by counting the pixels it contains, and the user region with the largest area is determined as the target focusing region, as sketched below.
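A minimal sketch of this area-based selection, assuming each user region is available as a boolean mask over the preview image so that its area is simply a pixel count; bounding boxes would work the same way with width times height:

```python
from typing import Optional, Sequence
import numpy as np

def pick_largest_region(region_masks: Sequence[np.ndarray]) -> Optional[int]:
    """Return the index of the user region with the largest area (most pixels),
    which is taken as the target focusing area; None if the list is empty."""
    if not region_masks:
        return None
    areas = [int(mask.sum()) for mask in region_masks]  # pixel count per region
    return int(np.argmax(areas))
```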
Example three:
In this example, it is determined whether there are multiple user areas corresponding to the target user feature. If there are multiple, the distance between the user areas is calculated: for example, the coordinates of the center point of each user area are determined, and the distance between the areas is calculated from the difference between these center-point coordinates. If the distance is smaller than a preset threshold, the matched users are likely to be standing together, so the central area between the user areas may be used as the focusing area (see the sketch after this example).
That is, as shown in fig. 5-1, when the user areas corresponding to the determined target user features are the areas where user A and user B are located, and the calculated distance between user A and user B is short, the central area between user A and user B may be used as the focusing area; this central area may be, for example, the region lying between the center points of the areas where user A and user B are located.
Of course, in this example, when the distance between the user areas corresponding to the target user features is smaller than the preset threshold, all of the user areas may instead be taken together as the focusing area; that is, as shown in fig. 5-2, the areas where user A and user B are located are both taken as the focusing area.
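The distance check and the two variants above (central area, fig. 5-1, versus all areas together, fig. 5-2) could be sketched as follows; the Euclidean distance between box centres and the shape of the returned central area are assumptions:

```python
import math
from typing import List, Sequence, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

def box_center(box: Box) -> Tuple[float, float]:
    x, y, w, h = box
    return x + w / 2.0, y + h / 2.0

def merged_focus_area(boxes: Sequence[Box],
                      distance_threshold: float,
                      use_union: bool = False) -> List[Box]:
    """If the matched user areas are close together, return one combined
    focusing area; otherwise return the areas unchanged."""
    if len(boxes) < 2:
        return list(boxes)
    centers = [box_center(b) for b in boxes]
    max_dist = max(
        math.hypot(c1[0] - c2[0], c1[1] - c2[1])
        for i, c1 in enumerate(centers)
        for c2 in centers[i + 1:]
    )
    if max_dist >= distance_threshold:
        return list(boxes)  # users are far apart: keep separate areas
    if use_union:
        # Variant of fig. 5-2: take all matched user areas together.
        x0 = min(b[0] for b in boxes); y0 = min(b[1] for b in boxes)
        x1 = max(b[0] + b[2] for b in boxes); y1 = max(b[1] + b[3] for b in boxes)
        return [(x0, y0, x1 - x0, y1 - y0)]
    # Variant of fig. 5-1: focus on the central area between the user areas.
    cx = sum(c[0] for c in centers) / len(centers)
    cy = sum(c[1] for c in centers) / len(centers)
    w = int(sum(b[2] for b in boxes) / len(boxes))
    h = int(sum(b[3] for b in boxes) / len(boxes))
    return [(int(cx - w / 2), int(cy - h / 2), w, h)]
```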
Step 104, focusing and capturing an image according to the target focusing area.
Specifically, after the target focusing area is determined, focusing and capturing the image according to the target focusing area ensures that the user area corresponding to the target user feature is imaged clearly, meeting the user's focusing and shooting needs. In an embodiment of the present application, depth-of-field information of the target focusing area may be calculated, for example according to the principle of triangulation or by image recognition; a shooting focal length is then determined according to the depth-of-field information, and the image is captured at that focal length, ensuring that the user area containing the target user feature falls sharply on the image sensor.
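As an illustration only, the following sketch converts an estimated subject distance (the depth-of-field information of the target focusing area) into the lens-to-sensor distance a focus mechanism would aim for, using the thin-lens equation; the application does not specify how the shooting focal length is derived, so the formula and the 4 mm lens focal length are assumptions:

```python
def lens_position_for_subject(subject_distance_m: float,
                              focal_length_m: float = 0.004) -> float:
    """Thin-lens sketch: given the estimated depth (subject distance u) of the
    target focusing area and the lens focal length f (4 mm assumed here),
    return the lens-to-sensor distance v that brings the subject into focus:

        1/f = 1/u + 1/v  =>  v = f * u / (u - f)
    """
    u, f = subject_distance_m, focal_length_m
    if u <= f:
        raise ValueError("subject must be farther away than the focal length")
    return f * u / (u - f)

# Example: a subject estimated at 2 m needs the sensor about 4.008 mm behind the lens.
print(lens_position_for_subject(2.0))  # ~0.004008 (metres)
```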
Therefore, the focusing shooting method provided by the embodiments of the present application offers a more intelligent shooting experience: shooting and focusing are controlled entirely by the preset people, so the clear portrait the user really wants is obtained to a much greater extent, and in particular the person to be focused on can be captured directly in a multi-person scene or a scene where passers-by interfere.
In summary, in the focusing shooting method of the embodiments of the present application, user features are extracted from the preview image during the photographing preview stage and matched against preset user features; whether a successfully matched target user feature exists is judged; if it exists, a target focusing area is determined according to the user area corresponding to the target user feature; and finally an image is focused and captured according to the target focusing area. In this way, when the user shoots, an image can be captured automatically with the subject the user cares about as the focusing subject, improving shooting efficiency.
In order to implement the above embodiments, the present application further provides a focusing shooting device.
Fig. 6 is a schematic structural diagram of a focusing shooting apparatus according to an embodiment of the present application.
As shown in fig. 6, the focusing shooting apparatus includes: a matching module 100, a judging module 200, a determining module 300, and a shooting module 400.
The matching module 100 is configured to extract user features from the preview image during the photographing preview stage and match them against preset user features;
the judging module 200 is configured to judge whether a successfully matched target user feature exists;
the determining module 300 is configured to determine a target focusing area according to the user area corresponding to the target user feature when the target user feature exists;
and the shooting module 400 is configured to focus and capture an image according to the target focusing area.
Further, in a possible implementation manner of the embodiment of the present application, the matching module 100 is specifically configured to:
identify a face sub-region in the preview image;
extract the face features in the face sub-region;
and match the face features against preset face features.
In this embodiment, the matching module 100 is specifically configured to: when no successfully matched face feature is found, identify a human body sub-region in the preview image;
extract human body features from the human body sub-region;
and match the human body features against preset human body features.
Further, in a possible implementation manner of the embodiment of the present application, the determining module 300 is specifically configured to:
judge whether there are multiple user areas corresponding to the target user feature;
if there are multiple user areas, query a preset database according to the target user feature to obtain the user level corresponding to each user area;
and determine the user area with the highest user level as the target focusing area.
It should be noted that the foregoing explanation of the embodiment of the focusing shooting method is also applicable to the focusing shooting apparatus of the embodiment, and is not repeated here.
In order to implement the foregoing embodiments, the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the electronic device implements the focus shooting method as described in the foregoing embodiments.
In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the focus shooting method as described in the aforementioned method embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A focusing shooting method is characterized by comprising the following steps:
extracting user features from a preview image during a photographing preview stage, and matching the user features against preset user features;
judging whether a successfully matched target user feature exists;
if the target user feature exists, determining a target focusing area according to a user area corresponding to the target user feature;
and focusing and capturing an image according to the target focusing area.
2. The method of claim 1, wherein the extracting user features from the preview image and matching the user features against preset user features comprises:
identifying a face sub-region in the preview image;
extracting the face features in the face sub-region;
and matching the face features against preset face features.
3. The method of claim 2, further comprising, after the matching the face features against preset face features:
if no successfully matched face feature is found, identifying a human body sub-region in the preview image;
extracting human body features from the human body sub-region;
and matching the human body features against preset human body features.
4. The method of claim 1, wherein the determining a target focusing area according to the user area corresponding to the target user feature comprises:
judging whether there are multiple user areas corresponding to the target user feature;
if there are multiple user areas, querying a preset database according to the target user feature to obtain a user level corresponding to each user area;
and determining the user area with the highest user level as the target focusing area.
5. The method of claim 1, wherein the determining a target focusing area according to the user area corresponding to the target user feature comprises:
judging whether there are multiple user areas corresponding to the target user feature;
if there are multiple user areas, obtaining the area of each user area;
and determining the user area with the largest area as the target focusing area.
6. The method of claim 1, wherein the focusing and capturing an image according to the target focusing area comprises:
calculating depth-of-field information of the target focusing area;
and determining a shooting focal length according to the depth-of-field information, and capturing an image at the shooting focal length.
7. A focusing shooting apparatus, comprising:
a matching module, configured to extract user features from a preview image during a photographing preview stage and match the user features against preset user features;
a judging module, configured to judge whether a successfully matched target user feature exists;
a determining module, configured to determine a target focusing area according to a user area corresponding to the target user feature when the target user feature exists;
and a shooting module, configured to focus and capture an image according to the target focusing area.
8. The apparatus of claim 7, wherein the matching module is specifically configured to:
identify a face sub-region in the preview image;
extract the face features in the face sub-region;
and match the face features against preset face features.
9. The apparatus of claim 8, wherein the matching module is specifically configured to:
when no successfully matched face feature is found, identify a human body sub-region in the preview image;
extract human body features from the human body sub-region;
and match the human body features against preset human body features.
10. The apparatus of claim 7, wherein the determining module is specifically configured to:
judge whether there are multiple user areas corresponding to the target user feature;
if there are multiple user areas, query a preset database according to the target user feature to obtain a user level corresponding to each user area;
and determine the user area with the highest user level as the target focusing area.
11. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the focus shooting method according to any one of claims 1 to 6.
12. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the focus shooting method according to any one of claims 1 to 6.
CN202010028389.6A 2020-01-10 2020-01-10 Focusing shooting method and device Pending CN111212226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010028389.6A CN111212226A (en) 2020-01-10 2020-01-10 Focusing shooting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010028389.6A CN111212226A (en) 2020-01-10 2020-01-10 Focusing shooting method and device

Publications (1)

Publication Number Publication Date
CN111212226A true CN111212226A (en) 2020-05-29

Family

ID=70788865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010028389.6A Pending CN111212226A (en) 2020-01-10 2020-01-10 Focusing shooting method and device

Country Status (1)

Country Link
CN (1) CN111212226A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101071252A (en) * 2006-05-10 2007-11-14 佳能株式会社 Focus adjustment method, focus adjustment apparatus, and control method thereof
CN104850828A (en) * 2015-04-29 2015-08-19 小米科技有限责任公司 Person identification method and person identification device
CN105915782A (en) * 2016-03-29 2016-08-31 维沃移动通信有限公司 Picture obtaining method based on face identification, and mobile terminal
CN105704389A (en) * 2016-04-12 2016-06-22 上海斐讯数据通信技术有限公司 Intelligent photo taking method and device
CN107509030A (en) * 2017-08-14 2017-12-22 维沃移动通信有限公司 A kind of focusing method and mobile terminal
CN110503022A (en) * 2019-08-19 2019-11-26 北京积加科技有限公司 A kind of personal identification method, apparatus and system
CN110418064A (en) * 2019-09-03 2019-11-05 北京字节跳动网络技术有限公司 Focusing method, device, electronic equipment and storage medium
CN110581954A (en) * 2019-09-30 2019-12-17 深圳酷派技术有限公司 shooting focusing method and device, storage medium and terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114244999A (en) * 2020-09-09 2022-03-25 北京小米移动软件有限公司 Automatic focusing method and device, camera equipment and storage medium
CN114244999B (en) * 2020-09-09 2023-11-24 北京小米移动软件有限公司 Automatic focusing method, device, image pickup apparatus and storage medium
CN112399073A (en) * 2020-09-29 2021-02-23 深圳市修远文化创意有限公司 Photographing control method and device
CN114374815A (en) * 2020-10-15 2022-04-19 北京字节跳动网络技术有限公司 Image acquisition method, device, terminal and storage medium

Similar Documents

Publication Publication Date Title
CN108496350B (en) Focusing processing method and device
CN107241559B (en) Portrait photographing method and device and camera equipment
CN109034013B (en) Face image recognition method, device and storage medium
CN110378945B (en) Depth map processing method and device and electronic equipment
KR101971866B1 (en) Method and apparatus for detecting object in moving image and storage medium storing program thereof
WO2019179441A1 (en) Focus tracking method and device of smart apparatus, smart apparatus, and storage medium
JP4350725B2 (en) Image processing method, image processing apparatus, and program for causing computer to execute image processing method
CN107948517B (en) Preview picture blurring processing method, device and equipment
EP2037320B1 (en) Imaging apparatus, imaging apparatus control method, and computer program
CN107945105B (en) Background blurring processing method, device and equipment
CN111212226A (en) Focusing shooting method and device
CN107341762B (en) Photographing processing method and device and terminal equipment
CN110400338B (en) Depth map processing method and device and electronic equipment
US9615019B2 (en) Image capturing apparatus and control method for image capturing apparatus with particle filter for main object detection and selecting focus detection area based on priority
CN110378946B (en) Depth map processing method and device and electronic equipment
WO2013165565A1 (en) Method of detecting a main subject in an image
JP2004320286A (en) Digital camera
CN102843509A (en) Image processing device and image processing method
CN109451240B (en) Focusing method, focusing device, computer equipment and readable storage medium
CN107439005B (en) Method, device and equipment for determining focusing window
JP2021176243A (en) Image processing apparatus, control method for the same, and imaging apparatus
JP2004320285A (en) Digital camera
JP2010008620A (en) Imaging apparatus
CN108780568A (en) A kind of image processing method, device and aircraft
JP2011193063A (en) Electronic camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200529