WO2020111776A1 - Electronic device for focus tracking photography and associated method - Google Patents


Info

Publication number
WO2020111776A1
WO2020111776A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
target character
feature set
weight
matched
Prior art date
Application number
PCT/KR2019/016477
Other languages
English (en)
Inventor
Bin Li
Lu LV
Shipeng Yu
Sugang TIAN
Siqun YANG
Original Assignee
Samsung Electronics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2020111776A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/633: Electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635: Region indicators; field of view indicators
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/673: Focus control based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • Embodiments of the present disclosure generally relate to a photographing device and method, and more particularly, to an electronic device capable of photographing and a focus tracking photographing method of the electronic device.
  • When photographing a person by using a mobile terminal in the prior art, if there are a plurality of faces in the preview image captured by the camera, the terminal device usually either focuses on all faces in the preview image automatically based on face recognition, or selects a target to focus on by the user clicking on the screen with a finger.
  • Exemplary embodiments address the above disadvantages and other disadvantages not described above. Moreover, the exemplary embodiments are not required to overcome the disadvantages described above, and the exemplary embodiments may not overcome any of the problems described above.
  • a focus tracking photographing method for an electronic device may include: obtaining a preview image including a plurality of faces; performing facial feature extraction for each of the plurality of faces; matching extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and performing auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
  • an electronic device comprising: a feature extracting module configured to perform facial feature extraction for each of a plurality of faces included in a preview image; a feature matching module configured to match extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and a focusing module configured to perform auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
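The extract-match-focus pipeline described in the two claims above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the feature vectors, the cosine-similarity measure, and the 0.9 threshold are all assumptions introduced here.

```python
import math
from dataclasses import dataclass

def cosine_similarity(a, b):
    """Stand-in for whatever facial-feature similarity measure the device uses."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Face:
    center: tuple     # (x, y) face-center position in the preview image
    features: tuple   # extracted facial feature vector

def find_matched_targets(faces, target_feature_sets, threshold=0.9):
    """Match each detected face against every target character feature set.

    target_feature_sets maps a target id to a representative feature vector.
    A face is a matched target character when its best similarity to any
    target feature set reaches the threshold.
    """
    matched = []
    for face in faces:
        best_id, best_sim = None, threshold
        for target_id, ref in target_feature_sets.items():
            sim = cosine_similarity(face.features, ref)
            if sim >= best_sim:
                best_id, best_sim = target_id, sim
        if best_id is not None:
            matched.append((best_id, face))
    return matched
```

The matched list (target id plus face position) is exactly the "information associated with the matched target character" the focusing module would then consume.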
  • the electronic device may quickly identify, in a crowd, a person the user of the electronic device may be interested in and perform auto-focusing, thereby sparing the user the difficulty of adjusting the focus manually, with the additional advantages of reducing shooting time and improving shooting quality.
  • the electronic device may effectively reduce the impact of strangers.
  • FIG. 1 is a diagram illustrating an example configuration of an electronic device according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart illustrating a method of establishing and updating a target character feature set by a target character feature set managing module according to an embodiment of the present disclosure.
  • FIGS. 3A, 3B, 3C, 3D and 3E are diagrams illustrating operations of a focusing module in a focus tracking mode, according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart illustrating a focus tracking photographing method of an electronic device in the focus tracking mode according to an embodiment of the present disclosure.
  • FIG. 1 is a block diagram illustrating an example configuration of the electronic device according to an embodiment of the present disclosure.
  • the electronic device described herein may be any electronic device having a photographing function.
  • Although a portable device will be employed as a representative example of the electronic device in some embodiments of the disclosure, it should be understood that some components of the electronic device may be omitted or replaced.
  • the electronic device 100 may include a camera module 110 and a controller 120.
  • the controller 120 may include a feature extracting module 122 and a feature matching module 124.
  • the feature extracting module 122 may determine whether a face appears in a preview image captured by the image capturing module of the camera module 110, and perform auto-focusing or manual-focusing on the detected face according to a preset setting.
  • Since a human face has certain structural distribution features (such as structural and positional features of the facial organs, e.g., eyes, nose, mouth and eyebrows), rules generated from these features may be used to determine whether a face is included in the image.
  • For example, the AdaBoost (Adaptive Boosting) learning algorithm may be used for this determination.
  • In the embodiment of the present disclosure, the method for determining whether a face is included in an image by the feature extracting module 122 may be implemented through various known methods. Further, the operation of performing auto-focusing or manual-focusing on the face in the preview image may also be implemented through various known methods.
  • the feature extracting module 122 may determine whether a plurality of faces appear in the preview image captured by the image capturing module 112 of the camera module 110, and if so, perform facial feature extraction for each face. In an embodiment, after determining that a plurality of faces are included in the preview image, the feature extracting module 122 may extract facial features for all of the faces included in the preview image based on deep learning techniques. As described above, the human face has certain structural distribution features; for example, feature data that may be used for face recognition may be obtained based on the shape description of the facial organs and the distance features between them. The geometrical description of the facial organs and the structural relationships between them may also be used as feature data for facial recognition. In the embodiment of the present disclosure, the method for extracting facial features by the feature extracting module 122 may be implemented through various known methods such as the AdaBoost learning algorithm.
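The AdaBoost decision rule mentioned above combines weighted votes of weak classifiers. A minimal illustration follows; the two decision stumps and their alpha weights are invented for the example and are not the patent's trained detector.

```python
def adaboost_predict(x, weak_learners):
    """AdaBoost-style decision: sign of the alpha-weighted sum of weak votes.

    Each weak learner returns +1 ("face") or -1 ("not a face"); alpha is the
    weight that boosting would have learned for that learner.
    """
    score = sum(alpha * clf(x) for clf, alpha in weak_learners)
    return 1 if score > 0 else -1

# Two hypothetical decision stumps over a feature vector x
stump_eyes = lambda x: 1 if x[0] > 0.5 else -1   # e.g. eye-region contrast
stump_nose = lambda x: 1 if x[1] > 0.3 else -1   # e.g. nose-bridge response
learners = [(stump_eyes, 0.7), (stump_nose, 0.4)]
```

With these weights, a disagreement between the stumps is resolved in favour of the higher-alpha learner, which is the essential idea behind boosted face detection cascades.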
  • the feature matching module 124 may match facial features of each face extracted by the feature extracting module 122 with one or more target character feature sets respectively, to determine whether a plurality of faces included in the image include a face which has a target character feature set corresponding thereto, so that it is possible to determine whether a matched target character appears in the image.
  • the target character feature set is established for each character separately, and each target character feature set includes facial features of a character corresponding thereto. A character having the matched target character feature set may be recognized as the target character.
  • controller 120 may further include control circuits and other elements for controlling the overall operation of the electronic device 100 or the operations of elements within the electronic device 100.
  • the controller 120 may control a predetermined operation or function in the electronic device 100 to be performed according to a predetermined setting or in response to any combination of user inputs.
  • the controller 120 may further include a target character feature set managing module (not shown) for managing the target character feature sets. A method of establishing and updating a target character feature set will be described in detail below with reference to FIG. 2.
  • the camera module 110 may include an image capturing module 112 and a focusing module 114.
  • When the camera module 110 is driven, the image capturing module 112 may convert an image obtained by an image sensor module (not shown) in the image capturing module 112 into a preview image and display it on a display module (not shown) of the electronic device 100. When a request for capturing an image is generated through the shutter button, the image capturing module 112 may store the captured image in the electronic device, either directly or after performing a series of image processing on the captured image.
  • the focusing module 114 may include a driving module capable of driving a lens system included in the image sensor module.
  • the focusing module 114 may focus on the point at which the user input is detected based on the user's input to the display module. Further, the focusing module 114 may determine whether the focal length is appropriate by judging the sharpness of the images taken under the different focal lengths, and further adjust the focal length by controlling the lens system.
  • the focusing module 114 may control the lens system to perform single focusing or central focusing on the face determined as the subject.
  • the focus mode may be set in the preview mode, and the focus mode may be switched at any time according to the user's input.
  • the focus mode is not limited to the focus mode described above, and may include other focus modes known in the art.
  • the operations of the focusing module 114 in the focus tracking mode will be described in detail with reference to FIG. 3.
  • the focusing module 114 may be configured to automatically focus on matched target characters who appear in the image according to related information of the matched target characters who appear in the image.
  • the focusing module 114 may also include a separate processing module (not shown) for determining to use at least one of a single-target focusing mode and a multi-targets focusing mode in the focus tracking mode.
  • the single-target focusing mode may indicate focusing on a face of a single character or focusing on faces of a plurality of characters respectively
  • the multi-targets focusing mode refers to focusing on the center point of a polygon formed by lines connecting the center points of the faces of a plurality of characters.
  • the single-target focusing mode, the multi-targets focusing mode, or both may be selected according to positions of a plurality of target characters.
  • the electronic device 100 may also include a communication module (not shown) for communicating with an external device or connecting to a network.
  • the communication module may also be used to transmit data to outside or receive data from the outside.
  • the electronic device 100 may further include a memory (not shown) for storing local images and target character feature sets established by the target character feature set managing module.
  • the electronic device 100 may also include an internal bus for communicating between elements in electronic device 100.
  • FIG. 2 is a flowchart illustrating a method for establishing and updating a target character feature set by the target character feature set managing module, according to an embodiment of the present disclosure.
  • the target character feature set managing module may be configured to establish a target character feature set for each target character separately based on local images in the electronic device 100.
  • the local images may include images in an album of the electronic device 100, images downloaded from the Internet or a cloud server via the communication module, and images received from another electronic device via the communication module.
  • a character image may refer to an image in which a character is included.
  • the target character feature set managing module may determine, through the feature extracting module 122, which of the local images are character images. In general, the target character feature set managing module operates in the background while the electronic device 100 is in a standby state or the camera module 110 is in an inactive state.
  • the operations of establishing a target character feature set by the target character feature set managing module may include the following steps. First, in step 201, the target character feature set managing module selects the character images from the local images as the target image set.
  • the target character feature set managing module may identify and extract facial features of each face included in all the character images in the target image set by using the feature extracting module 122.
  • the target character feature set managing module may communicate with the feature matching module 124 such that the feature matching module 124 compares the extracted facial features of the faces with each other, and may establish a target character feature set for each character separately based on the degree of similarity of the extracted facial features. Facial features whose degree of similarity is greater than a predetermined threshold will be determined to be facial features of the same character.
  • Each target character feature set contains facial features of the target character corresponding thereto.
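The grouping step above (features above a similarity threshold belong to the same character) can be sketched as a greedy single pass. The similarity measure and the 0.95 threshold are illustrative assumptions; the patent leaves the concrete measure open.

```python
import math

def similarity(a, b):
    """Stand-in similarity measure between two facial feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def build_feature_sets(feature_vectors, threshold=0.95):
    """Group extracted faces into per-character feature sets.

    A face joins the first existing set whose representative (its first
    member) it resembles above the threshold; otherwise it starts a new
    set, i.e. a new target character feature set.
    """
    sets = []
    for vec in feature_vectors:
        for s in sets:
            if similarity(vec, s[0]) >= threshold:
                s.append(vec)
                break
        else:
            sets.append([vec])
    return sets
```

Each returned list then corresponds to one target character feature set containing all facial features attributed to that character.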
  • the target character feature set managing module may allocate a weight to each target character feature set, where the weight is determined by counting the number of occurrences of the same character in all images of the target image set. Specifically, the target character feature set managing module may count the number of occurrences of the same character in all the character images of the target image set; for example, if a character appears only once, a target character feature set will not be established for the facial features of this character, or the target character feature set corresponding to this character may be allocated the lowest weight. In addition, the target character feature set managing module may further allocate weights to different target character feature sets according to the appearing frequencies of different target characters in the target image set.
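The weighting rule just described can be sketched as follows. The choice to drop single-occurrence characters (rather than give them the lowest weight, the alternative the text allows) and the use of normalised frequencies as weights are both assumptions of this sketch.

```python
def allocate_weights(feature_sets, min_occurrences=2):
    """Weight each target character feature set by appearance frequency.

    feature_sets: one list per character, holding that character's
    occurrences in the target image set. Characters seen fewer than
    min_occurrences times are dropped; the rest receive their share of
    the total occurrence count as a weight.
    """
    kept = [s for s in feature_sets if len(s) >= min_occurrences]
    total = sum(len(s) for s in kept)
    return [len(s) / total for s in kept] if total else []
```

A character who appears often in the user's album thus ends up with a proportionally higher weight, matching the document's reading of weight as the user's degree of interest.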
  • a weight of the target character feature set corresponding to the person is also relatively higher.
  • the weight of the target character feature set actually reflects the extent to which the user of the electronic device is interested in the character for the target character feature set.
  • a character for a target character feature set with a higher weight may be a character who the user of the electronic device has a higher interest in.
  • the operations of establishing a target character feature set by the target character feature set managing module may be implemented using deep learning algorithms.
  • the established target character feature set may be stored in the memory of the electronic device.
  • the target character feature set managing module may further update the target character feature sets according to a command input by the user. In another embodiment, the target character feature set managing module may further update the target character feature sets according to a time period preset by the user or according to a time period defaulted by the system. The updating of the target character feature sets may include adding a new target character feature set, updating the weights of the target character feature sets, and deleting a target character feature set the weight of which is lower than a threshold.
  • the target character feature set managing module may perform the selection of the character images, the extraction of facial features of the characters, and the establishment of the target character feature sets only for images newly added to the local images after the last update, re-allocate weights to the previously stored target character feature sets in the memory and the newly established target character feature sets, and delete any target character feature set whose weight is lower than the threshold.
  • the periodic updating of the target character feature sets may be classified into short periodic update and long periodic update. For example, in a case where the short period is one week and the long period is one month, the target character feature set managing module may repeat the above-described updating operations for the images newly added to the local images once a week, and may update against all the images in the local images once a month.
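The short/long update schedule can be expressed as a small decision helper. The function name, the string results, and the period defaults (seven and thirty days, taken from the document's one-week/one-month example) are illustrative assumptions.

```python
def select_update_scope(days_since_short, days_since_long,
                        short_period=7, long_period=30):
    """Pick which images the managing module should re-scan.

    Long periodic update (all local images) takes precedence over short
    periodic update (only images newly added since the last update).
    """
    if days_since_long >= long_period:
        return "all_local_images"
    if days_since_short >= short_period:
        return "newly_added_images"
    return "no_update"
```

Giving the long period precedence means a full re-scan also refreshes everything a pending short update would have covered.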
  • the target character feature sets may accurately reflect the extent of user's interest in different characters.
  • the focusing module 114 may be configured to perform auto-focusing on matched target characters who appear in the image according to related information of the matched target characters who appear in the image.
  • the related information of the matched target characters who appear in the image includes at least one of the number of the matched target characters, the distance between centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters.
  • the focusing module 114 may select at least one of a single-target focusing mode and a multi-targets focusing mode according to the number of matched target characters, the distance between the centers of the faces of matched target characters, and the weights of the target character feature sets corresponding to the matched target characters to perform auto-focusing on the matched target characters.
  • If the feature matching module 124 determines that there is only one matched target character (for example, the object 302), the focusing module 114 selects the single-target focusing mode to focus on the one matched target character (i.e., the object 302).
  • the single-target focusing mode includes performing a separate focusing on the matched target character, that is, focusing on the center of the face of the matched target character.
  • the focusing module 114 may determine the differences between the plurality of weights of the target character feature sets corresponding to the two or more matched target characters; if the differences between the highest weight and the other weights are all greater than a predetermined threshold, the single-target focusing mode is selected to focus on the target character for the target character feature set with the highest weight. Still as shown in FIG. 3A, in another embodiment, if the objects 301 to 304 have target character feature sets corresponding thereto respectively, the objects 301 to 304 are all determined as the target characters.
  • the focusing module 114 may determine weights of the target character feature sets corresponding to the objects 301 to 304, if the target character feature set corresponding to the object 302 has the highest weight, and each of the differences between the weight of the target character feature set corresponding to the object 302 and the weights of the target character feature sets corresponding to the object 301, the object 303, and the object 304 is greater than a preset weight threshold, the focusing module 114 may determine the object 302 as the focused object, and focus on the object 302 under the single-target focusing mode.
  • the objects 301 to 304 are all determined as the target characters, the user is much more interested in the object 302 than the other three objects, therefore, the focusing module may focus on the object 302 only.
  • the focusing module may determine an object for target character feature set with the highest weight and an object for a target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, as the objects to be focused, by determining the weights of the target character feature sets corresponding to the objects 301 to 304, and determining the differences between the highest weight and other weights among the weights of the target character feature sets corresponding to the objects 301 to 304.
  • the object 302 and the object 303 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, then the focusing module 114 may determine the object 302 and the object 303 as the objects to be focused.
  • the focusing module 114 may select a focus mode for object 302 and the object 303.
  • the object 302 and the object 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, then the focusing module 114 may determine the object 302 and the object 304 as the objects to be focused.
  • the focusing module 114 may select a focus mode for the objects 302 and the object 304.
  • the focusing module 114 may determine the objects 302 to 304 as the objects to be focused. That is, although the objects 301 to 304 are all determined as the target characters, the extent of the user's interest in the objects 302 to 304 is similar and much higher than the user's interest in the object 301; therefore, the focusing module 114 may select a focus mode for the objects 302 to 304.
  • the object 301 and the objects 303 to 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, then the focusing module 114 may determine the object 301 and the objects 303 to 304 as the objects to be focused.
  • the focusing module 114 may select a focus mode for the object 301 and the objects 303 to 304.
  • the focusing module 114 determines a distance between the centers of the faces of the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight. That is, in the example shown in FIG. 3B, the distance between the centers of the faces of the object 302 and the object 303 is determined. In the example shown in FIG.
  • the distance between the centers of the faces of the object 302 and the object 304 is determined.
  • the distances between the centers of the faces of the objects 302 to 304 are determined respectively.
  • the distances between the centers of the faces of object 301 and the objects 303 to 304 are determined respectively.
  • the focusing module 114 may perform central focusing only on two or more target characters whose mutual distance is less than a predetermined threshold. If the distance between the centers of the faces of two target characters is less than the predetermined threshold, the focusing module 114 may determine that the two target characters are in a state of being next to each other. For example, if the distance between the centers of the faces of the object 302 and the object 303 in FIG. 3B is less than the predetermined threshold, it is determined that the object 302 and the object 303 are in a state of being next to each other. Likewise, if the distances between the center of the face of the object 303 and the centers of the faces of the object 302 and the object 304 in FIG. 3D are all less than the predetermined threshold, it is determined that the objects 302 to 304 are in a state of being next to each other.
  • In FIG. 3E, only the distance between the centers of the faces of the object 303 and the object 304 is less than the predetermined threshold, so it is determined that the object 303 and the object 304 are in a state of being next to each other.
  • In other words, the focusing module 114 may perform central focusing only on the target characters in the state of being next to each other.
  • the central focusing may mean focusing on the center of a straight line or the center of a polygon formed by the centers of the faces of the plurality of target characters.
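The central focusing point described here can be computed as the centroid of the face centers, which is the midpoint of the connecting line for two faces and a reasonable reading of "center of the polygon" for three or more (taking the vertex centroid is an assumption of this sketch).

```python
import math

def face_distance(a, b):
    """Euclidean distance between two (x, y) face centers in the preview image."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def central_focus_point(face_centers):
    """Central focusing target: the centroid of the matched targets' face
    centers, i.e. the midpoint of the line for two faces, the polygon
    center for three or more."""
    n = len(face_centers)
    return (sum(c[0] for c in face_centers) / n,
            sum(c[1] for c in face_centers) / n)
```

`face_distance` is the same quantity the module compares against the "next to each other" threshold in the preceding paragraphs.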
  • the object 302 and the object 303 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and the distance between the centers of the faces of the object 302 and the object 303 is less than the predetermined threshold, therefore, the focusing module 114 selects the multi-targets focusing mode to perform central focusing on the object 302 and the object 303.
  • the object 302 and the object 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and the distance between the center of the face of the object 302 and the center of the face of the object 304 is greater than the predetermined threshold, therefore, the focusing module 114 selects the single-target focusing mode to perform separate focusing on the object 302 and the object 304 respectively.
  • the objects 302 to 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and the distances between the centers of the faces of the object 303 and the object 302 and object 304 are all less than the predetermined threshold, therefore, the focusing module 114 selects the multi-targets focusing mode to perform central focusing on the objects 302 to 304.
  • the objects 301 and the objects 303 to 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and only the distance between the centers of the faces of the object 303 and the object 304 is less than the predetermined threshold, therefore, the focusing module 114 selects the single-target focusing mode to perform separate focusing on the object 301, and selects the multi-targets focusing mode to perform central focusing on the objects 303 to 304.
  • FIGS. 3A to 3E are merely exemplary embodiments, and the present disclosure is not limited to the embodiments illustrated in FIGS. 3A to 3E; instead, various modifications of the embodiments described above may also be included.
  • FIG. 4 is a flowchart illustrating a focus tracking photographing method of the electronic device in the focus tracking mode according to an embodiment of the present disclosure.
  • In step 401, it is determined whether a plurality of faces appear in an image captured in the preview mode; if a plurality of faces appear, the method proceeds to step 402, otherwise (no face or only a single face appears) the method proceeds to step 404.
  • In step 402, facial feature extraction is performed for each face.
  • the facial feature extraction may be implemented based on machine learning techniques, for example, AdaBoost learning algorithm.
  • In step 403, the extracted facial features of each face are matched with one or more target character feature sets respectively to determine whether a matched target character appears in the image. If no matched target character appears in the image, the method proceeds to step 404. If a matched target character appears, the method proceeds to step 405.
  • In step 404, that is, in the case where no face or only one face appears in the image captured in the preview mode, or where faces appear but no matched target character is found, the focus mode may be automatically switched to the normal focus mode.
  • In step 405, auto-focusing is performed on the matched target characters who appear in the image, according to the related information of those matched target characters.
  • The related information of the matched target characters includes at least one of: the number of matched target characters, the distances between the face centers of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters.
  • Performing auto-focusing on the matched target characters according to their related information may include selecting at least one of a single-target focusing mode and a multi-target focusing mode based on that information.
  • In a case where the number of matched target characters is one, the single-target focusing mode is selected. In a case where the number of matched target characters is greater than or equal to two, the differences between the weights of the target character feature sets corresponding to the matched target characters are determined. If the differences between the highest weight and each of the other weights are all greater than a preset weight threshold, the single-target focusing mode is selected to focus on the target character whose feature set has the highest weight. If the difference between the highest weight and at least one of the other weights is less than the preset weight threshold, the distance between the face centers of the target character with the highest weight and the target character(s) corresponding to the at least one weight is determined, and the multi-target focusing mode is selected to focus on two or more target characters whose face-center distance is less than a preset distance threshold.
  • An electronic device may preferably use the album of the electronic device, or other image resources downloaded to the electronic device, as a macroscopic database, and analyze the characters who frequently appear in the macroscopic database. Characters who frequently appear in the macroscopic database are typically the owner of the electronic device (i.e., the user of the electronic device) and the owner's family members, close friends, or other persons the owner is interested in.
  • In this way, the electronic device may quickly identify, in a crowd, a person the user may be interested in and perform auto-focusing on that person, thereby sparing the user the difficulty of adjusting the focus manually, reducing shooting time, and improving shooting quality.
  • It may also effectively reduce the impact of strangers on focusing.
  • Various embodiments of the present disclosure can be implemented by program commands that can be executed by various computers and stored in a computer-readable recording medium.
  • Recording media readable by a computer may include program commands, data files, data structures, and combinations thereof.
  • the program commands stored in the recording medium may be program commands specifically designed for the present disclosure or program commands commonly used in the field of computer software.
  • A non-transitory computer-readable recording medium is any data storage device that can store data which can subsequently be read by a computer system.
  • Examples of the non-transitory computer readable recording medium include a read only memory (ROM), a random access memory (RAM), a compact disk ROM (CD-ROM), a magnetic tape, a floppy disk, and an optical data storage device.
  • The non-transitory computer-readable recording medium can further be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed manner.
  • programmers skilled in the art to which the present disclosure pertains may readily interpret functional programs, code, and code segments for implementing the present disclosure.
  • processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the present disclosure as described above. If this is the case, the instructions may be stored in one or more non-transitory processor readable media, which falls within the scope of the present disclosure. Examples of the processor readable media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. The processor readable media can also be distributed over network coupled computer systems for storing and executing instructions in a distributed mode. Furthermore, programmers skilled in the art to which the present disclosure pertains may readily interpret functional computer programs, code, and code segments for implementing the present disclosure.
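The focusing-mode selection of steps 401 to 405 can be illustrated with a short sketch. This is a simplified, hypothetical rendering, not the patented implementation: the names (`TargetMatch`, `choose_focus`) and the threshold values are assumptions, and the embodiment of FIG. 3E additionally allows both modes to be applied simultaneously to different targets.

```python
# Illustrative sketch of the step-405 decision logic; all names and
# threshold values are assumed, not taken from the patent.
from dataclasses import dataclass
from typing import List, Tuple

WEIGHT_THRESHOLD = 0.2       # preset weight-difference threshold (assumed value)
DISTANCE_THRESHOLD = 120.0   # preset face-center distance threshold (assumed value)

@dataclass
class TargetMatch:
    name: str
    weight: float                    # weight of the matched target character feature set
    face_center: Tuple[float, float]

def face_distance(a: TargetMatch, b: TargetMatch) -> float:
    # Euclidean distance between the centers of two faces
    (x1, y1), (x2, y2) = a.face_center, b.face_center
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def choose_focus(matches: List[TargetMatch]) -> Tuple[str, List[TargetMatch]]:
    """Return the focusing mode and the target(s) to focus on."""
    if not matches:
        return ("normal", [])        # step 404: no matched target, normal focus mode
    if len(matches) == 1:
        return ("single", matches)   # one matched target: single-target mode
    best = max(matches, key=lambda m: m.weight)
    # Targets whose weight is within the preset threshold of the highest weight
    close_weights = [m for m in matches
                     if m is not best and best.weight - m.weight < WEIGHT_THRESHOLD]
    if not close_weights:
        return ("single", [best])    # all other weights fall far below the best
    # Multi-target mode only for targets that are also spatially close to the best
    group = [best] + [m for m in close_weights
                      if face_distance(best, m) < DISTANCE_THRESHOLD]
    if len(group) > 1:
        return ("multi", group)      # central focusing on the group
    return ("single", [best])
```

Under these assumptions, two matched targets with similar weights and nearby face centers yield central focusing, while a single dominant weight yields separate single-target focusing.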

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to an electronic device and a focus-tracking photographing method therefor. A focus-tracking photographing method for an electronic device may comprise the steps of: obtaining a preview image containing a plurality of faces; performing facial feature extraction for each face of the plurality of faces; matching the extracted facial features of each face of the plurality of faces with at least one target character feature set, respectively, to identify a matched target character in the preview image; and performing auto-focusing on the matched target character(s) in the preview image according to a result of the matching.
PCT/KR2019/016477 2018-11-27 2019-11-27 Dispositif électronique pour la photographie à suivi de mise au point et procédé associé WO2020111776A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811469213.3A CN109474785A (zh) 2018-11-27 2018-11-27 电子装置和电子装置的焦点追踪拍照方法
CN201811469213.3 2018-11-27

Publications (1)

Publication Number Publication Date
WO2020111776A1 true WO2020111776A1 (fr) 2020-06-04

Family

ID=65674916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/016477 WO2020111776A1 (fr) 2018-11-27 2019-11-27 Dispositif électronique pour la photographie à suivi de mise au point et procédé associé

Country Status (2)

Country Link
CN (1) CN109474785A (fr)
WO (1) WO2020111776A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107241618B (zh) * 2017-08-07 2020-07-28 苏州市广播电视总台 收录方法和收录装置
CN110266941A (zh) * 2019-05-31 2019-09-20 维沃移动通信(杭州)有限公司 一种全景拍摄方法及终端设备
CN110290324B (zh) * 2019-06-28 2021-02-02 Oppo广东移动通信有限公司 设备成像方法、装置、存储介质及电子设备
CN110830712A (zh) * 2019-09-16 2020-02-21 幻想动力(上海)文化传播有限公司 一种自主摄影系统和方法
CN110581954A (zh) * 2019-09-30 2019-12-17 深圳酷派技术有限公司 一种拍摄对焦方法、装置、存储介质及终端

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002333652A (ja) * 2001-05-10 2002-11-22 Oki Electric Ind Co Ltd 撮影装置及び再生装置
JP2009017038A (ja) * 2007-07-02 2009-01-22 Fujifilm Corp デジタルカメラ
EP1737216B1 (fr) * 2005-06-22 2009-05-20 Omron Corporation Dispositif de détermination de l'objet, dispositif et moniteur d'imagerie
JP2010028720A (ja) * 2008-07-24 2010-02-04 Sanyo Electric Co Ltd 撮像装置
EP2187624A1 (fr) * 2008-11-18 2010-05-19 Fujinon Corporation Système autofocus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010117663A (ja) * 2008-11-14 2010-05-27 Fujinon Corp オートフォーカスシステム
JP5990951B2 (ja) * 2012-03-15 2016-09-14 オムロン株式会社 撮影装置、撮影装置の制御方法、撮影装置制御プログラム、および該プログラムを記録したコンピュータ読み取り可能な記録媒体
CN106713734B (zh) * 2015-11-17 2020-02-21 华为技术有限公司 自动对焦方法及装置
CN105915782A (zh) * 2016-03-29 2016-08-31 维沃移动通信有限公司 一种基于人脸识别的照片获取方法和移动终端
CN107395986A (zh) * 2017-08-28 2017-11-24 联想(北京)有限公司 图像获取方法、装置及电子设备

Also Published As

Publication number Publication date
CN109474785A (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
WO2020111776A1 (fr) Dispositif électronique pour la photographie à suivi de mise au point et procédé associé
KR101423916B1 (ko) 복수의 얼굴 인식 방법 및 장치
KR101534808B1 (ko) 얼굴 인식 기술을 이용한 전자 앨범 관리 방법 및 시스템
JP2008547126A (ja) 携帯装置のための予め構成された設定
CN105872363A (zh) 人脸对焦清晰度的调整方法及调整装置
CN103607538A (zh) 拍摄方法及拍摄装置
KR102297217B1 (ko) 영상들 간에 객체와 객체 위치의 동일성을 식별하기 위한 방법 및 장치
CN102272673A (zh) 用于为本人自动拍摄照片的方法、装置和计算机程序产品
WO2011078596A2 (fr) Procédé, système et support d'enregistrement lisible par ordinateur pour réalisation adaptative d'une adaptation d'image selon certaines conditions
WO2024087797A1 (fr) Procédé, appareil et dispositif de collecte de données de direction de ligne de visée, et support d'enregistrement
CN111970437B (zh) 文本拍摄方法、可穿戴设备和存储介质
CN110677580B (zh) 拍摄方法、装置、存储介质及终端
CN111881740A (zh) 人脸识别方法、装置、电子设备及介质
CN108780568A (zh) 一种图像处理方法、装置及飞行器
CN114387548A (zh) 视频及活体检测方法、系统、设备、存储介质及程序产品
CN108259767B (zh) 图像处理方法、装置、存储介质及电子设备
JP4867295B2 (ja) 画像管理装置、および画像管理プログラム
KR102664027B1 (ko) 인공지능에 기반하여 영상을 분석하는 카메라 및 그것의 동작 방법
CN112712564A (zh) 相机的拍摄方法和装置、存储介质、电子装置
CN112188108A (zh) 拍摄方法、终端和计算机可读存储介质
JP2011090410A (ja) 画像処理装置、画像処理システムおよび画像処理装置の制御方法
CN108076280A (zh) 一种基于图像识别的影像分享方法及装置
CN108495038B (zh) 图像处理方法、装置、存储介质及电子设备
CN114143429B (zh) 图像拍摄方法、装置、电子设备和计算机可读存储介质
CN112966575B (zh) 一种应用于智慧社区的目标人脸识别方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19890282

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19890282

Country of ref document: EP

Kind code of ref document: A1