CN113766140A - Image shooting method and image shooting device - Google Patents

Image shooting method and image shooting device

Info

Publication number
CN113766140A
CN113766140A
Authority
CN
China
Prior art keywords
image, focus, photographing, sub, edge
Prior art date
2021-09-30
Legal status
Granted
Application number
CN202111158351.1A
Other languages
Chinese (zh)
Other versions
CN113766140B (en)
Inventor
季佳松
夏勇峰
张鹏飞
Current Assignee
Beijing Beehive Century Technology Co ltd
Original Assignee
Beijing Beehive Century Technology Co ltd
Priority date
2021-09-30
Filing date
2021-09-30
Publication date
2021-12-07
Application filed by Beijing Beehive Century Technology Co ltd filed Critical Beijing Beehive Century Technology Co ltd
Priority to CN202111158351.1A
Publication of CN113766140A
Application granted
Publication of CN113766140B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/61: Control of cameras or camera modules based on recognised objects

Abstract

The patent relates to an image shooting method comprising the steps of acquiring a first photographing viewfinder image; acquiring the position of a first photographing focus in the first photographing viewfinder image; identifying the object to which the position belongs in the viewfinder image; and performing edge detection on the object and tracing the detected edge. The patent also relates to an image shooting device comprising a first framing module for acquiring a first photographing viewfinder image; a first focusing module for acquiring the position of a first photographing focus in the viewfinder image; a first identification module for identifying the object to which the position belongs in the viewfinder image; and an edge detection module for performing edge detection on the object and tracing the detected edge. The image shooting method and image shooting device can effectively improve the focusing effect during shooting.

Description

Image shooting method and image shooting device
Technical Field
The patent belongs to the field of image processing and particularly relates to an image shooting method and an image shooting device.
Background
Image capture is a common function of smart devices. In the prior art, there are technical solutions that recognize and automatically focus on objects such as human faces in the captured viewfinder image.
However, such focusing is often specific to a particular object, such as a human face; when shooting other scenes, it is difficult for a user to confirm whether the selected focus belongs to the desired object.
In the prior art, ordinary shooting is realized on a terminal with a display screen, such as a touch-screen mobile phone. When a touchable screen is unavailable, as in the shooting scenario of smart glasses, focusing and selecting the shooting object are neither intuitive nor convenient.
Disclosure of Invention
The present patent addresses the above needs of the prior art and provides a shooting method and a shooting apparatus capable of improving the focusing effect during shooting.
In order to solve the above technical problem, the technical solution in this patent includes:
an image photographing method is provided, comprising: acquiring a first photographing viewfinder image; acquiring the position of a first photographing focus in the first photographing viewfinder image; identifying the object to which the position belongs in the viewfinder image; and performing edge detection on the object and tracing the detected edges.
Preferably, identifying the object to which the position belongs in the viewfinder image comprises identifying all objects in the photographed viewfinder image, and determining the object in which the position of the first photographing focus is located as the object to which the photographing focus belongs in the viewfinder image.
Preferably, the edge detection of the object includes detecting an outer edge region of the object and detecting an edge region of an inner module of the object.
Preferably, tracing the detected edge comprises drawing the detected edge region using vector graphics.
Preferably, the pixel attributes of the edge region vary during a first time period.
Preferably, a sub-object in the object is determined according to the detected edge region, a selection command of the sub-object is acquired, and the position of the second focus is determined according to the selection command.
Preferably, the determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Preferably, determining sub-objects in the object based on the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
Preferably, the method further comprises: and acquiring a second image, identifying an object in the second image, matching the object in the second image with the object of the first focus in the first image to obtain a second object, and determining the position in the second object as a third focus position.
The patent also provides an image shooting device comprising a first framing module for acquiring a first photographing viewfinder image; a first focusing module for acquiring the position of a first photographing focus in the viewfinder image; a first identification module for identifying the object to which the position belongs in the viewfinder image; and an edge detection module for performing edge detection on the object and tracing the detected edge.
Preferably, identifying the object to which the position belongs in the viewfinder image comprises identifying all objects in the photographed viewfinder image, and determining the object in which the position of the first photographing focus is located as the object to which the photographing focus belongs in the viewfinder image.
Preferably, the edge detection of the object includes detecting an outer edge region of the object and detecting an edge region of an inner module of the object.
Preferably, tracing the detected edge comprises drawing the detected edge region using vector graphics.
Preferably, the pixel properties of the edge region have a variation during the first period.
Preferably, a sub-object of the object is determined based on the detected edge region, and the position of the second focus is determined based on the selected sub-object.
Preferably, the determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Preferably, determining sub-objects in the object based on the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
Preferably, the apparatus further comprises: the second view finding module is used for obtaining a second image, the second identification module is used for identifying an object in the second image, the second focusing module is used for matching the object in the second image with the object of the first focus in the first image to obtain a second object, and the position in the second object is determined as a third focus position.
Preferably, the image photographing device includes smart glasses.
Compared with the prior art, the method can effectively improve the focusing effect during shooting.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of this specification, and that other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a flow chart illustrating the steps of an image capture method according to the present patent;
FIG. 2 is a diagram illustrating the determination of a first focus for taking a picture in one embodiment;
FIG. 3 is a diagram illustrating edge detection of an object at which the first focus belongs according to an embodiment;
fig. 4 is a schematic diagram of an image capture device according to the present patent.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For the purpose of facilitating understanding of the embodiments of the present application, the following description will be made in terms of specific embodiments with reference to the accompanying drawings, which are not intended to limit the embodiments of the present application.
Example 1
The present embodiment provides an image capturing method, referring to fig. 1.
The image photographing method includes the steps of:
s1 acquires a first photographed view image.
The first photographing view-finding image is an image displayed by the photographing device display module when a user uses the photographing device to photograph.
Illustratively, the shooting device is smart photographing glasses. When a user shoots with the smart glasses, the user opens the photographing software and points the camera of the smart glasses at the object to be shot; the image displayed on the screen of the smart glasses is the first photographing viewfinder image.
In a typical photographing process, the image captured by the camera's viewfinder is presented as video, that is, the viewfinder picture is updated dynamically in real time. The first photographing viewfinder image may therefore be a single frame of this video, or all frames of the video within a certain period of time.
These images can be obtained in real time by reading the viewfinder signal and buffering them in memory for processing.
When shooting with the smart glasses, the viewfinder image may not be displayed, or may not be completely displayed, on the near-eye display device; nevertheless, the viewfinder image still exists as data within the system, and the above acquisition method remains applicable.
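By way of illustration only, this acquisition step might be sketched as follows in Python with OpenCV (an assumed toolchain; the camera index and buffer length are illustrative choices, not specified by the patent):

```python
import collections
import cv2

FRAME_BUFFER_LEN = 30  # illustrative window length, not specified by the patent
frames = collections.deque(maxlen=FRAME_BUFFER_LEN)

cap = cv2.VideoCapture(0)  # 0 = default camera; on smart glasses, the device camera
for _ in range(FRAME_BUFFER_LEN):
    ok, frame = cap.read()  # read the viewfinder signal in real time
    if not ok:
        break
    frames.append(frame)    # buffer the frames in memory for later processing
cap.release()

if frames:
    # Single-frame variant: the latest buffered frame serves as the first
    # photographing viewfinder image; the whole deque would be the
    # all-frames-within-a-period variant described above.
    first_viewfinder_image = frames[-1]
```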
S2 acquires the position of the first photographing focus in the viewfinder image.
In the general operation of taking a picture, the user usually selects a focus during framing to satisfy the focusing requirement. This selection is often accomplished by tapping the screen through a touch operation, or by directing the smart device with a finger click gesture.
In these operations, the user is usually required to click on the image of a specific object, and according to the user's selection the system determines that a specific area in the image is the focused area, which is referred to as the focus in this application.
Obviously, the first photographing focus determined by the selection command is located in the first viewfinder image.
Preferably, the frame of the viewfinder image at the moment the selection command is acquired is taken as the first viewfinder image. This ensures that the selected focus corresponds exactly to the image, avoids processing multiple frames, and improves processing speed.
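As a hedged sketch of this step, the selection command (a tap at pixel coordinates) can be turned into the focus area described above; the region half-size is an assumed parameter, as the patent does not fix the extent of the focus area:

```python
def focus_region_from_tap(x, y, img_w, img_h, half_size=24):
    """Map a tap point to a small rectangular focus area, clamped to the image.

    half_size is an illustrative radius in pixels, not specified by the
    patent. Returns (x0, y0, x1, y1).
    """
    x0 = max(0, x - half_size)
    y0 = max(0, y - half_size)
    x1 = min(img_w, x + half_size)
    y1 = min(img_h, y + half_size)
    return (x0, y0, x1, y1)
```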
S3 identifies the object to which the position belongs in the viewfinder image.
The recognition of objects in the viewfinder image in this step prepares for subsequent processing. In general, a viewfinder image contains a plurality of objects; in this step, all objects in the viewfinder image may be recognized, or only the objects in the area around the focus position. The main aim is to determine the object at the focus position and provide a basis for its subsequent processing.
In this embodiment, recognition of all objects in the viewfinder image is described as an example.
All objects in the photographed viewfinder image are recognized, and the object in which the position of the first photographing focus is located is determined as the object to which the photographing focus belongs in the viewfinder image.
The identification of the object in the photographed viewfinder image can be realized through texture identification and color identification.
The first photographing focus is an area, and each object in the first viewfinder image occupies its own area. The user manually selects the first photographing focus. When the first photographing focus area is located inside the area of an object in the first viewfinder image, that object is the object where the first photographing focus is located. When the first photographing focus area intersects the area of an object in the first viewfinder image, the object whose intersection accounts for more than 50% of the first photographing focus area is the object where the first photographing focus is located. When the first photographing focus area entirely contains the area of an object in the first viewfinder image, and the surroundings of that object are unrelated to it, that object is the object where the first photographing focus is located. If the first photographing focus contains a plurality of objects, the user is advised to adjust the size of the object through the zoom function.
Illustratively, the first photographing viewfinder image acquired by the user using the photographing apparatus includes a plurality of buildings, roads, and trees. All objects in the first photographing viewfinder image are identified through texture recognition and color recognition. Referring to fig. 2, the user selects a focus and clicks building A, that is, the first photographing focus is located on building A, and it is recognized that the first photographing focus is located on a building. Specifically, the first photographing focus is an area; when building A is clicked, that area may lie inside the area where building A is located; it may intersect the area of building A, with the intersection occupying more than half of the focus area; or it may entirely contain the area of building A.
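The overlap rules above might be sketched as follows (a minimal sketch assuming each recognized object is available as a boolean mask from the texture and color recognition step; the function and variable names are illustrative):

```python
import numpy as np

def object_at_focus(focus_rect, object_masks):
    """Apply the three cases above: the focus area lies inside an object's
    area, intersects it with the intersection exceeding 50% of the focus
    area, or entirely contains it. Returns the matching object index,
    or None (e.g. several objects inside the focus: zoom and reselect)."""
    x0, y0, x1, y1 = focus_rect
    focus_area = (x1 - x0) * (y1 - y0)
    for idx, mask in enumerate(object_masks):   # one boolean mask per object
        inter = int(mask[y0:y1, x0:x1].sum())   # object pixels inside the focus area
        obj_area = int(mask.sum())
        if inter == focus_area:                 # focus area inside the object
            return idx
        if inter > 0.5 * focus_area:            # intersection > half the focus area
            return idx
        if obj_area > 0 and inter == obj_area:  # object entirely inside the focus area
            return idx
    return None
```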
S4 performs edge detection on the object and draws the detected edge.
Edge detection of the object includes detecting an outer edge region of the object and detecting an edge region of an inner module of the object. Rendering the detected edge includes rendering the detected edge region using vector graphics.
Illustratively, referring to fig. 3, when the position of the first photographing focus is recognized as belonging to building A, edge detection is performed on building A. Since the image of building A acquired by the photographing apparatus is a plane image, the outer edge area of building A is detected: the outermost edge of building A is detected, and its outer edge is drawn by vector graphic fitting. The inner edge area in the plane image of building A is also detected; specifically, the edge areas of edged objects in the plane image, such as gates and windows, are detected, and the inner edges of building A are drawn by vector graphic fitting.
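An illustrative OpenCV sketch of this step, detecting the outer edge of the selected object and its inner edges (such as doors and windows) and fitting them as vector polylines; the Canny thresholds and the length filter are assumptions, not values specified by the patent:

```python
import cv2

def trace_object_edges(image_bgr, object_mask):
    """Detect and vector-fit the outer edge and the inner edges of one object.

    image_bgr: the viewfinder frame; object_mask: uint8 mask, 255 inside the
    object identified at the first focus. Returns (outer_poly, inner_polys).
    """
    # Outer edge region: the outermost contour of the object's mask,
    # fitted as a closed vector polygon.
    contours, _ = cv2.findContours(object_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, []
    outer = max(contours, key=cv2.contourArea)
    outer_poly = cv2.approxPolyDP(outer, 2.0, True)  # vector graphic fitting

    # Inner edge regions (e.g. doors, windows): Canny edges restricted to
    # the object's interior, each fitted as a vector polyline.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # illustrative thresholds
    edges = cv2.bitwise_and(edges, edges, mask=object_mask)
    inner, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    inner_polys = [cv2.approxPolyDP(c, 2.0, False)
                   for c in inner if cv2.arcLength(c, False) > 40]
    return outer_poly, inner_polys
```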
It should be noted that the pixel attributes of the edge region vary during a first time period.
Illustratively, when the user aims the shooting device at the shooting object without pressing the shutter, the edge flashes during the first time period, reminding the user that the object with the flashing edge is the selected object and will be further processed.
For example, after the object to which the first photographing focus belongs is determined to be building A, the edge line of building A flickers to prompt the user that the selected object is building A.
A sub-object of the object is determined according to the detected edge region, and the position of a second focus is determined according to the selected sub-object.
It should be noted that the change of the pixel attribute is not limited to flashing of the edge line; it may take many forms, such as flashing of the whole image or a change of the image's color. Any method capable of prompting the user with the specific object determined by the first photographing focus falls within the protection scope of this patent.
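One possible realisation of this first-period variation, sketched as simple alternation of the traced edge's colour over time (the period and colours are assumptions, and blinking is only one of the forms noted above):

```python
import time
import cv2

def draw_blinking_edge(frame, polygon, t0, period=0.5):
    """Overlay the fitted edge polygon, alternating its colour every
    `period` seconds so the user can see which object the first
    photographing focus has selected."""
    phase = int((time.time() - t0) / period) % 2
    color = (0, 255, 255) if phase == 0 else (0, 0, 255)  # yellow / red in BGR
    cv2.polylines(frame, [polygon], isClosed=True, color=color, thickness=2)
    return frame
```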
And determining a sub-object in the object according to the detected edge area, acquiring a selection command of the sub-object, and determining the position of the second focus according to the selection command.
The edge area of the object to which the first photographing focus belongs is detected; this edge area further contains a plurality of sub-objects. The user continues to select a focus, and the position of the second focus is determined according to the selection command.
Illustratively, the object of the first photographing focus is building A, and the outer contour edge of building A contains a plurality of sub-objects, such as doors and windows, from which the user selects the object to be precisely focused. The user clicks an area inside the image of building A to determine the position of the second focus.
All sub-objects at that position in the object are identified.
The object image contains a plurality of sub-objects; the second focus is located on one sub-object of the object, and that sub-object is identified. Specifically, identification of the sub-object is achieved through texture recognition and color recognition.
The second focus is an area, and each sub-object of the object occupies its own area. The user manually selects the second focus. When the second focus area is located inside the area of a sub-object of the object, that sub-object is the sub-object where the second focus is located. When the second focus area intersects the area of a sub-object, the sub-object whose intersection accounts for more than 50% of the second focus area is the sub-object where the second focus is located. When the second focus area entirely contains the area of a sub-object, and the surroundings of that sub-object are unrelated to it, that sub-object is the sub-object where the second focus is located.
Illustratively, all sub-objects in the object are identified through texture recognition and color recognition. The object to which the first photographing focus acquired by the user through the photographing device belongs is building A, and building A contains a plurality of sub-objects, such as doors and windows. The user selects a focus again and clicks a door, that is, the second focus is located on the door, and the position of the second focus is recognized as being located on a door. Specifically, the second focus is an area; when the door is clicked, the second focus area may lie inside the area where the door is located; it may intersect the door's area, with the intersection occupying more than half of the focus area; or it may entirely contain the door's area. In any of these cases it can be determined that the sub-object to which the second focus belongs is the door.
Determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Illustratively, the user determines the position of the second focus by clicking an area inside the image of building A; the second focus is located inside the door, so the door is the selected sub-object, and the door is displayed in an enlarged manner.
Determining sub-objects in the object from the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
If the sub-object is occluded, so that the area surrounded by its edge graphics is only a semi-closed area, the focus must be reselected.
Illustratively, the user determines that the second focus belongs to a door by clicking an area in the image of building A, and the door is taken as the sub-object. If the edge graphic of the door encloses a closed area, the door is the sub-object; if the edge graphic of the door cannot enclose a closed area (specifically, if the door is partially occluded, or the door is not completely within the first photographing viewfinder image and part of it lies outside the image), the second focus needs to be reselected.
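A sketch of one way to test this closed-region condition: a fitted edge polygon is rejected when it runs along the image border, which indicates the candidate sub-object is cut off by the frame (occlusion by other objects would need an additional check; the margin value is an assumption):

```python
def is_closed_sub_object(polygon, img_w, img_h, margin=2):
    """Return True if the fitted edge polygon encloses a region lying
    entirely inside the viewfinder image, i.e. a closed rather than
    semi-closed area. polygon is an (N, 1, 2) vertex array as produced
    by contour fitting; a vertex within `margin` pixels of the border
    suggests the sub-object extends past the frame."""
    for (x, y) in polygon.reshape(-1, 2):
        if x <= margin or y <= margin or x >= img_w - margin or y >= img_h - margin:
            return False
    return True
```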
The image shooting method further comprises the steps of acquiring a second image, identifying an object in the second image, matching the object in the second image with an object of the first focus in the first image to obtain a second object, and determining the position of the second object as a third focus position.
When the user aims the shooting device at a shooting object without pressing the shutter, a dynamic image is displayed on the screen of the shooting device. After the first focus is determined, the shooting device is displaced to some degree, forming the second image.
A second image is acquired.
Illustratively, the shooting device is smart photographing glasses. When the user shoots with the smart glasses, the user opens the photographing software and points the camera at the object to be shot; the image displayed on the screen of the smart glasses is the first image. After the first image is obtained, the first photographing focus is determined through a selection command, and the object of the first photographing focus is obtained; for convenience of the following description, this object is referred to as the first object. When the user adjusts the shooting angle without pressing the shutter, the image obtained by the shooting device after the adjustment is the second image.
All objects on the second image are identified.
All objects in the second image are matched with the first object to which the first focus belongs in the first image. If the first object exists in the adjusted second image, the first object, with its changed position and angle in the second image, forms the second object, and a position in the second object is determined as the third focus.
Illustratively, the first photographing viewfinder image acquired by the user using the photographing apparatus includes a plurality of buildings, roads, and trees. The user selects a focus and clicks building A, that is, the first photographing focus is located on building A; through texture recognition and color recognition of building A, the position of the first photographing focus is recognized as being located on a building. The user then adjusts the shooting angle; all objects in the second image are recognized and matched with the first object. If building A is found in the second image, building A is taken as the second object, and a position in building A is determined as the third focus. In this embodiment, the position of the third focus is the center point of the second object. This spares the user from repeated selection and can effectively improve the user experience.
It should be noted that the position of the third focus may be any part of the second object; in this embodiment, the third focus is set at the center point of the second object.
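To illustrate this re-matching step, here is a hedged sketch using ORB feature matching (an assumed choice of algorithm; the patent specifies matching but not how it is performed). The third focus is placed at the centre of the matched region, per this embodiment:

```python
import cv2
import numpy as np

def third_focus_from_match(first_img, first_obj_mask, second_img, min_matches=10):
    """Find the first object again in the second image and return a third
    focus at the centre of the matched keypoints, or None if the object
    cannot be matched (min_matches is an assumed threshold)."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(first_img, first_obj_mask)  # first object only
    kp2, des2 = orb.detectAndCompute(second_img, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:  # the first object is no longer in view
        return None
    pts = np.float32([kp2[m.trainIdx].pt for m in matches])
    cx, cy = pts.mean(axis=0)       # centre of the second object: the third focus
    return int(cx), int(cy)
```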
Example 2
The present embodiment provides an image capturing apparatus, referring to fig. 4.
The image photographing apparatus includes a first view finding module, a first focusing module, a first recognition module, and an edge detection module.
The first framing module can acquire a first photographing framing image.
The first photographing view finding image is an image displayed by the photographing equipment display module when a user uses the photographing equipment to photograph.
Illustratively, the shooting device is smart photographing glasses. When a user shoots with the smart glasses, the user opens the photographing software and points the camera of the smart glasses at the object to be shot; the image displayed on the screen of the smart glasses is the first photographing viewfinder image.
In a typical photographing process, the image captured by the camera's viewfinder is presented as video, that is, the viewfinder picture is updated dynamically in real time. The first photographing viewfinder image may therefore be a single frame of this video, or all frames of the video within a certain period of time.
These images can be obtained in real time by reading the viewfinder signal and buffering them in memory for processing.
When shooting with the smart glasses, the viewfinder image may not be displayed, or may not be completely displayed, on the near-eye display device; nevertheless, the viewfinder image still exists as data within the system, and the above acquisition method remains applicable.
The first focusing module can acquire the position of a first photographing focal point in the framing image.
In the general operation of taking a picture, the user usually selects a focus during framing to satisfy the focusing requirement. This selection is often accomplished by tapping the screen through a touch operation, or by directing the smart device with a finger click gesture.
In these operations, the user is usually required to click on the image of a specific object, and according to the user's selection the system determines that a specific area in the image is the focused area, which is referred to as the focus in this application.
Obviously, the first photographing focus determined by the selection command is located in the first viewfinder image.
Preferably, the frame of the viewfinder image at the moment the selection command is acquired is taken as the first viewfinder image. This ensures that the selected focus corresponds exactly to the image, avoids processing multiple frames, and improves processing speed.
The first recognition module can recognize the object to which the position belongs in the viewfinder image.
In general, a viewfinder image contains a plurality of objects; all objects in the viewfinder image may be recognized, or only the objects in the area around the focus position. The main aim is to determine the object at the focus position and provide a basis for its subsequent processing.
In this embodiment, recognition of all objects in the viewfinder image is described as an example.
The first identification module receives the information from the first framing module, identifies all objects in the photographed viewfinder image, and identifies the object to which the first photographing focus belongs in the first viewfinder image. The object in which the position of the first photographing focus is located is determined as the object to which the photographing focus belongs in the viewfinder image, namely the first object.
The identification of the object in the photographed viewfinder image can be realized through texture identification and color identification.
The first photographing focus is an area, and each object in the first viewfinder image occupies its own area. The user manually selects the first photographing focus. When the first photographing focus area is located inside the area of an object in the first viewfinder image, that object is the object where the first photographing focus is located. When the first photographing focus area intersects the area of an object in the first viewfinder image, the object whose intersection accounts for more than 50% of the first photographing focus area is the object where the first photographing focus is located. When the first photographing focus area entirely contains the area of an object in the first viewfinder image, and the surroundings of that object are unrelated to it, that object is the object where the first photographing focus is located. If the first photographing focus contains a plurality of objects, the user is advised to adjust the size of the object through the zoom function.
Illustratively, the first photographing viewfinder image acquired by the user using the photographing apparatus includes a plurality of buildings, roads, and trees. All objects in the first photographing viewfinder image are identified through texture recognition and color recognition. The user selects a focus and clicks building A, that is, the first photographing focus is located on building A, and the position of the first photographing focus is recognized as being located on a building. Specifically, the first photographing focus is an area; when building A is clicked, that area may lie inside the area where building A is located; it may intersect the area of building A, with the intersection occupying more than half of the focus area; or it may entirely contain the area of building A.
An edge detection module to perform edge detection on the object and to delineate the detected edge.
The edge detection module receives the object determined by the first identification module and performs edge detection on it. Edge detection of the object includes detecting the outer edge region of the object and detecting the edge regions of the object's inner modules. Tracing the detected edge includes drawing the detected edge region using vector graphics.
Illustratively, when the position of the first photographing focus is recognized as belonging to building A, edge detection is performed on building A. Since the image of building A acquired by the photographing apparatus is a plane image, the outer edge area of building A is detected: the outermost edge of building A is detected, and its outer edge is drawn by vector graphic fitting. The inner edge area in the plane image of building A is also detected; specifically, the edge areas of edged objects in the plane image, such as gates and windows, are detected, and the inner edges of building A are drawn by vector graphic fitting.
It should be noted that the pixel attributes of the edge region vary during a first time period.
Illustratively, when the user aims the shooting device at the shooting object without pressing the shutter, the edge flashes during the first time period, reminding the user that the object with the flashing edge is the selected object and will be further processed.
For example, after the object to which the first photographing focus belongs is determined to be building A, the edge line of building A flickers to prompt the user that the selected object is building A.
A sub-object of the object is determined according to the detected edge region, and the position of a second focus is determined according to the selected sub-object.
It should be noted that the change of the pixel attribute is not limited to flashing of the edge line; it may take many forms, such as flashing of the whole image or a change of the image's color. Any method capable of prompting the user with the specific object determined by the first photographing focus falls within the protection scope of this patent.
And determining a sub-object in the object according to the detected edge area, acquiring a selection command of the sub-object, and determining the position of the second focus according to the selection command.
The edge area of the object to which the first photographing focus belongs is detected; this edge area further contains a plurality of sub-objects. The user continues to select a focus, and the position of the second focus is determined according to the selection command.
Illustratively, the object of the first photographing focus is building A, and the outer contour edge of building A contains a plurality of sub-objects, such as doors and windows, from which the user selects the object to be precisely focused. The user clicks an area inside the image of building A to determine the position of the second focus.
The apparatus further comprises a sub-object identification module for identifying all sub-objects of the object at the position.
The sub-object identification module receives the information from the edge detection module and obtains the areas enclosed by the detected edges. The object image contains a plurality of sub-objects; the second focus is located on one sub-object of the object, and that sub-object is identified. Specifically, identification of the sub-object is achieved through texture recognition and color recognition.
The second focus is an area, and each sub-object of the object occupies its own area. The user manually selects the second focus. When the second focus area is located inside the area of a sub-object of the object, that sub-object is the sub-object where the second focus is located. When the second focus area intersects the area of a sub-object, the sub-object whose intersection accounts for more than 50% of the second focus area is the sub-object where the second focus is located. When the second focus area entirely contains the area of a sub-object, and the surroundings of that sub-object are unrelated to it, that sub-object is the sub-object where the second focus is located.
Illustratively, all sub-objects in the object are identified through texture recognition and color recognition. The object to which the first photographing focus acquired by the user through the photographing device belongs is building A, and building A contains a plurality of sub-objects, such as doors and windows. The user selects a focus again and clicks a door, that is, the second focus is located on the door, and the position of the second focus is recognized as being located on a door. Specifically, the second focus is an area; when the door is clicked, the second focus area may lie inside the area where the door is located; it may intersect the door's area, with the intersection occupying more than half of the focus area; or it may entirely contain the door's area. In any of these cases it can be determined that the sub-object to which the second focus belongs is the door.
Determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Illustratively, the user determines the position of the second focus by clicking an area inside the image of building A; the second focus is located inside the door, so the door is the selected sub-object, and the door is displayed in an enlarged manner.
Determining sub-objects in the object from the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
If the sub-object is occluded, so that the area surrounded by its edge graphics is only a semi-closed area, the focus must be reselected.
Illustratively, the user determines that the second focus belongs to a door by clicking an area in the image of building A, and the door is taken as the sub-object. If the edge graphic of the door encloses a closed area, the door is the sub-object; if the edge graphic of the door cannot enclose a closed area (specifically, if the door is partially occluded, or the door is not completely within the first photographing viewfinder image and part of it lies outside the image), the second focus needs to be reselected.
The image shooting device further comprises a second view finding module, a second recognition module and a second focusing module.
A second viewing module to obtain a second image.
When the user aims the shooting device at a shooting object without pressing the shutter, a dynamic image is displayed on the screen of the shooting device. After the first focus is determined, the shooting device is displaced to some degree, forming the second image.
A second image is acquired.
Illustratively, the shooting device is smart photographing glasses. When the user shoots with the smart glasses, the user opens the photographing software and points the camera at the object to be shot; the image displayed on the screen of the smart glasses is the first image. After the first image is obtained, the first photographing focus is determined through a selection command, and the object of the first photographing focus is obtained; for convenience of the following description, this object is referred to as the first object. When the user adjusts the shooting angle without pressing the shutter, the image obtained by the shooting device after the adjustment is the second image.
A second recognition module to recognize an object in a second image.
The second recognition module receives the information transmitted by the second framing module, and all objects in the second image are identified.
The second focusing module can match the object in the second image with the object of the first focus in the first image to obtain a second object, and the position in the second object is determined as a third focus position.
The objects output by the second recognition module are matched with the first object output by the first recognition module; if the first object exists in the adjusted second image, the first object, with its changed position and angle in the second image, forms the second object. The matching result is input to the second focusing module to determine the third focus.
Illustratively, the first photographing viewfinder image acquired by the user using the photographing apparatus includes a plurality of buildings, roads, and trees. The user selects a focus and clicks building A, that is, the first photographing focus is located on building A; through texture recognition and color recognition of building A, the position of the first photographing focus is recognized as being located on a building. The user then adjusts the shooting angle; all objects in the second image are recognized and matched with the first object. If building A is found in the second image, building A is taken as the second object, and a position in building A is determined as the third focus. In this embodiment, the position of the third focus is the center point of the second object. This spares the user from repeated selection and can effectively improve the user experience.
It should be noted that the position of the third focus may be any part of the second object; in this embodiment, the third focus is set at the center point of the second object.
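To illustrate how the modules of this apparatus might be wired together, here is a hypothetical composition in Python (all class and method names are illustrative assumptions, not defined by the patent):

```python
class ImageCaptureDevice:
    """Hypothetical wiring of the apparatus's modules into one pipeline."""

    def __init__(self, framing, focusing, recognition, edge_detection):
        self.framing = framing                # first framing module
        self.focusing = focusing              # first focusing module
        self.recognition = recognition        # first identification module
        self.edge_detection = edge_detection  # edge detection module

    def select_object(self, selection_command):
        frame = self.framing.acquire()                          # first viewfinder image
        focus = self.focusing.locate(frame, selection_command)  # first focus position
        obj = self.recognition.identify(frame, focus)           # object the focus belongs to
        edges = self.edge_detection.trace(frame, obj)           # detect and draw edges
        return frame, obj, edges
```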
The objects, technical solutions, and advantages of the present application are described in further detail in the above embodiments. It should be understood that the above are merely exemplary embodiments of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall be included in its protection scope.

Claims (19)

1. An image capturing method, characterized in that the method comprises:
acquiring a first photographing view-finding image;
acquiring the position of a first photographing focus in the first photographing view-finding image;
identifying an object to which the location belongs in the viewfinder image;
edge detection is performed on the object and the detected edges are delineated.
2. An image capturing method as set forth in claim 1, characterized in that: identifying the object to which the position belongs in the viewfinder image comprises identifying all objects in the photographed viewfinder image, and determining the object in which the position of the first photographing focus is located as the object to which the photographing focus belongs in the viewfinder image.
3. An image capture method as claimed in claim 1, wherein edge detection of the object comprises detecting outer edge regions of the object and edge regions of inner modules of the object.
4. An image capture method as claimed in claim 1, wherein delineating the detected edge comprises drawing the detected edge region using vector graphics.
5. An image capture method as claimed in claim 4, characterized in that the pixel properties of the edge region have a variation during the first period.
6. An image capturing method as claimed in claim 1, characterized in that, in dependence on the detected edge region, a sub-object of the object is determined and a selection command for the sub-object is acquired, the position of the second focus being determined in dependence on the selection command.
7. An image capture method as claimed in claim 6, wherein determining the position of the second focus in dependence on the selected sub-object comprises acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
8. An image capture method according to claim 6, wherein determining sub-objects in the object based on the detected edge regions comprises determining closed regions surrounded by the redrawn edge patterns as sub-objects.
9. An image capture method as claimed in claim 1, the method further comprising:
acquiring a second image,
identifying an object in the second image,
and matching the object in the second image with the object of the first focus in the first image to obtain a second object, and determining a position in the second object as a third focus position.
10. An image capturing apparatus, characterized in that the apparatus comprises:
the first view finding module is used for acquiring a first photographing view finding image;
the first focusing module is used for acquiring the position of a first photographing focal point in the framing image;
a first identification module that identifies an object to which the position belongs in the through-view image;
and the edge detection module is used for carrying out edge detection on the object and drawing the detected edge.
11. An image capture device as defined in claim 10, wherein: identifying the object to which the position belongs in the viewfinder image comprises identifying all objects in the photographed viewfinder image, and determining the object in which the position of the first photographing focus is located as the object to which the photographing focus belongs in the viewfinder image.
12. An image capture device as claimed in claim 10 wherein edge detection of the object comprises detection of an outer edge region of the object and detection of an edge region of an inner block of the object.
13. An image capture device as claimed in claim 10, wherein delineating the detected edge comprises drawing the detected edge region using vector graphics.
14. An image capturing apparatus as claimed in claim 13,
the pixel attributes of the edge region have a variation during the first time period.
15. An image capture device as claimed in claim 10, characterized in that sub-objects of the object are determined on the basis of the detected edge region and the position of the second focus is determined on the basis of the selected sub-object.
16. An image capture device as claimed in claim 15, wherein determining the position of the second focus in dependence on the selected sub-object comprises acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
17. An image capture device as claimed in claim 15, wherein determining sub-objects in the object in dependence on the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
18. An image capture device as claimed in claim 10, wherein the device further comprises:
a second view finding module for obtaining a second image,
a second recognition module to recognize an object in a second image,
and the second focusing module is used for matching the object in the second image with the object of the first focus in the first image to obtain a second object, and determining the position of the second object as a third focus position.
19. An image capture device as claimed in claim 10, wherein the image capture device comprises smart glasses.
CN202111158351.1A 2021-09-30 2021-09-30 Image shooting method and image shooting device Active CN113766140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111158351.1A CN113766140B (en) 2021-09-30 2021-09-30 Image shooting method and image shooting device


Publications (2)

Publication Number Publication Date
CN113766140A (en) 2021-12-07
CN113766140B CN113766140B (en) 2022-07-26

Family

ID=78798530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111158351.1A Active CN113766140B (en) 2021-09-30 2021-09-30 Image shooting method and image shooting device

Country Status (1)

Country Link
CN (1) CN113766140B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080131019A1 (en) * 2006-12-01 2008-06-05 Yi-Ren Ng Interactive Refocusing of Electronic Images
US20100060780A1 (en) * 2008-09-09 2010-03-11 Canon Kabushiki Kaisha Image pickup apparatus, control method, and storage medium
CN104104787A (en) * 2013-04-12 2014-10-15 上海果壳电子有限公司 Shooting method, shooting system and hand-held device
CN108076278A (en) * 2016-11-10 2018-05-25 阿里巴巴集团控股有限公司 A kind of Atomatic focusing method, device and electronic equipment
CN110049253A (en) * 2019-05-31 2019-07-23 努比亚技术有限公司 A kind of focusing control method, equipment and computer readable storage medium
WO2020195246A1 (en) * 2019-03-28 2020-10-01 ソニー株式会社 Imaging device, imaging method, and program
CN112995507A (en) * 2021-02-08 2021-06-18 北京蜂巢世纪科技有限公司 Method and device for prompting object position
CN113438396A (en) * 2021-06-17 2021-09-24 中兵勘察设计研究院有限公司 Shooting focusing device and method for cultural relic digital photogrammetry


Also Published As

Publication number Publication date
CN113766140B (en) 2022-07-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant