CN113766140B - Image shooting method and image shooting device - Google Patents


Info

Publication number
CN113766140B
Authority
CN
China
Prior art keywords: image, focus, photographing, sub, edge
Legal status: Active
Application number
CN202111158351.1A
Other languages
Chinese (zh)
Other versions
CN113766140A (en
Inventor
季佳松
夏勇峰
张鹏飞
Current Assignee: Beijing Beehive Century Technology Co., Ltd.
Original Assignee: Beijing Beehive Century Technology Co., Ltd.
Application filed by Beijing Beehive Century Technology Co., Ltd.
Priority to CN202111158351.1A
Publication of CN113766140A
Application granted
Publication of CN113766140B


Classifications

    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/61: Control of cameras or camera modules based on recognised objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The patent relates to an image shooting method, which comprises the steps of acquiring a first photographing viewfinder image; acquiring the position of a first photographing focus in the first photographing viewfinder image; identifying the object to which the position belongs in the viewfinder image; and performing edge detection on the object and delineating the detected edges. The patent also relates to an image shooting device, which comprises a first framing module for acquiring a first photographing viewfinder image; a first focusing module for acquiring the position of a first photographing focus in the viewfinder image; a first identification module for identifying the object to which the position belongs in the viewfinder image; and an edge detection module for performing edge detection on the object and drawing the detected edges. The image shooting method and the image shooting device can effectively improve the focusing effect during shooting.

Description

Image shooting method and image shooting device
Technical Field
The patent belongs to the field of image processing and particularly relates to an image shooting method and an image shooting device.
Background
Image capture is a common function of prior art smart devices. In the prior art, there is a technical solution for recognizing and automatically focusing an object such as a human face in a captured view image.
However, such focusing is often specific to a particular object, such as a human face, and in the shooting of other scenes, it is difficult for a user to determine whether a selected focal point belongs to a desired object.
In the prior art, common shooting is realized on a terminal with a display screen, such as a touch-screen mobile phone. When a touchable screen is not available, as in the shooting scenario of smart glasses, focusing and the selection of a shooting object are not intuitive or convenient to operate.
Disclosure of Invention
The technical problem to be solved by the present patent is to meet the above-mentioned needs of the prior art by providing an image shooting method and an image shooting device capable of improving the focusing effect during shooting.
In order to solve this technical problem, the technical scheme of the patent comprises:
An image photographing method is provided, including: acquiring a first photographing viewfinder image; acquiring the position of a first photographing focus in the first photographing viewfinder image; identifying the object to which the position belongs in the viewfinder image; and performing edge detection on the object and delineating the detected edges.
Preferably, identifying the object to which the position belongs in the image includes identifying all objects in the photographed viewfinder image, and determining the object in which the position of the first photographing focus is located as the object to which the photographing focus belongs in the viewfinder image.
Preferably, the edge detection of the object includes detecting an outer edge region of the object and detecting an edge region of an inner module of the object.
Preferably, delineating the detected edge comprises using vector graphics to render the detected edge region.
Preferably, the pixel properties of the edge region have a variation during the first period.
Preferably, a sub-object in the object is determined according to the detected edge region, a selection command of the sub-object is acquired, and the position of the second focus is determined according to the selection command.
Preferably, the determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Preferably, determining the sub-objects in the object according to the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as the sub-object.
Preferably, the method further comprises: and acquiring a second image, identifying an object in the second image, matching the object in the second image with the object of the first focus in the first image to obtain a second object, and determining the position in the second object as a third focus position.
The patent also provides an image shooting device, which comprises a first framing module for acquiring a first photographing viewfinder image; a first focusing module for acquiring the position of a first photographing focus in the viewfinder image; a first identification module for identifying the object to which the position belongs in the viewfinder image; and an edge detection module for performing edge detection on the object and drawing the detected edges.
Preferably, the identifying the object to which the position belongs in the image includes identifying all objects in a photographed through-view image, and determining the object in which the position of the first photographing focus in the through-view image is located as the object to which the photographing focus belongs in the through-view image.
Preferably, the edge detection of the object includes detecting an outer edge region of the object and detecting an edge region of an inner module of the object.
Preferably, tracing the detected edge comprises drawing the detected edge region using vector graphics.
Preferably, the pixel attributes of the edge region have a variation during the first period.
Preferably, a sub-object of the object is determined based on the detected edge region, and the position of the second focus is determined based on the selected sub-object.
Preferably, the determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Preferably, determining sub-objects in the object based on the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
Preferably, the apparatus further comprises: the second view finding module is used for obtaining a second image, the second identification module is used for identifying an object in the second image, the second focusing module is used for matching the object in the second image with the object of the first focus in the first image to obtain a second object, and the position in the second object is determined as a third focus position.
Preferably, the image photographing device includes smart glasses.
Compared with the prior art, the method can effectively improve the focusing effect during shooting.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present specification, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating the steps of an image capture method according to the present patent;
FIG. 2 is a diagram illustrating the determination of a first focus for taking a picture in one embodiment;
FIG. 3 is a diagram illustrating edge detection of an object at which the first focus belongs according to an embodiment;
fig. 4 is a schematic diagram of an image capture device according to the present patent.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
For the purpose of facilitating understanding of the embodiments of the present application, the following description will be made in terms of specific embodiments with reference to the accompanying drawings, which are not intended to limit the embodiments of the present application.
Example 1
The present embodiment provides an image capturing method, referring to fig. 1.
The image photographing method includes the steps of:
s1 acquires the first photographed view image.
The first photographing view-finding image is an image displayed by the display module of the photographing device when a user uses the photographing device to photograph.
Illustratively, the shooting device is a pair of smart photographing glasses. When a user shoots with the smart photographing glasses, photographing software is opened, the camera of the glasses is aimed at the object to be shot, and the image displayed on the screen of the glasses is the first photographing viewfinder image.
In a typical photographing process, the image captured by the camera's viewfinder is presented as video, that is, the framed picture is shown dynamically in real time. The first photographing viewfinder image may therefore be one frame of this video, or all frames of the video within a certain period of time.
These images can be obtained in real time by reading the viewfinder signal and can be processed by way of memory storage.
In the process of shooting by using the smart glasses, the framing image may not be displayed or completely displayed on the near-eye display device, but the framing image still exists in the virtual data of the system, and the above-mentioned acquisition method is still applicable.
And S2, acquiring the position of the first photographing focal point in the viewfinder image.
Based on the general operation of taking a picture, the user usually selects a focus during the framing to meet the requirement of focusing. This selection is often accomplished by clicking on the screen through a touch operation or directing the smart device through a finger click.
In these operations, the user is usually required to click on the image of a specific object, and the system determines that the specific area in the image is the focusing area, which is called the focus in this application, according to the user's selection.
Obviously, the first photographing focus determined by the selection command is positioned on the first through image.
Preferably, the frame of the viewfinder image at the moment the selection command is acquired is taken as the first viewfinder image. This ensures that the selected focus corresponds exactly to the image, avoids processing multiple frames, and improves processing speed.
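The frame-at-selection-time behavior above can be sketched as follows. This is a minimal illustration; the buffer size, the timestamp source and the frame payload are assumptions, since in practice they would come from the device's camera pipeline:

```python
from collections import deque

class ViewfinderBuffer:
    """Keeps the most recent viewfinder frames so that the frame shown at
    the moment of the focus-selection command can be recovered (hypothetical
    sketch; not from the patent text)."""
    def __init__(self, maxlen=30):
        self.frames = deque(maxlen=maxlen)  # (timestamp, frame) pairs

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def frame_at(self, command_time):
        # Return the single frame whose timestamp is closest to the moment
        # the selection command arrived; this frame becomes the first
        # photographing viewfinder image.
        return min(self.frames, key=lambda tf: abs(tf[0] - command_time))[1]
```

Using one frame rather than the whole video interval keeps the later identification and edge-detection steps cheap, which is the speed advantage the paragraph above describes.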
S3 identifies the object to which the position belongs in the through-view image.
The recognition of objects in the viewfinder image in this step prepares for subsequent processing. A viewfinder image usually contains multiple objects; this step may recognize all of them, or only those within a region around the focus position. The main aim is to determine the object at the focus position and provide a basis for its subsequent processing.
In this embodiment, it is preferable to describe the recognition of all the objects in the through-image as an example.
In the present embodiment, all objects in the photographed through-image are recognized, and the object in which the position of the first photographing focus in the through-image is located is determined as the object to which the photographing focus belongs in the through-image.
The identification of the object in the photographed image can be realized by texture identification and color identification.
The first photographing focus is a region, and each object in the first viewfinder image occupies its own region. After the user manually selects the first photographing focus: when the focus region lies entirely inside the region of an object in the first viewfinder image, that object is the object at the first photographing focus; when the focus region intersects the region of an object and the intersection accounts for more than 50% of the focus region, that object is the object at the first photographing focus; and when the focus region contains the region of an object and the surroundings of that object are unrelated to it, that object is the object at the first photographing focus. If the focus region contains several objects, the user is advised to adjust their size through the zoom function.
Illustratively, the first photographing viewfinder image acquired by the user with the shooting device includes several buildings, roads and trees. All objects in the first photographing viewfinder image are identified through texture recognition and color recognition. Referring to fig. 2, the user selects a focus by clicking on building A, that is, the first photographing focus is located on building A, and the system recognizes that the first photographing focus lies on a building. Specifically, the first photographing focus is a region: when building A is clicked, the focus region may lie inside the region where building A is located; it may intersect that region, with the intersection occupying more than half of the focus region; or it may contain the region of building A, with that region occupying more than half of the focus region.
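The attribution rule described above (focus inside an object, or object covering more than 50% of the focus region) can be sketched with boolean masks. The function and mask names are illustrative, not from the patent:

```python
import numpy as np

def object_at_focus(focus_mask, object_masks):
    """Decide which identified object the first photographing focus belongs
    to. focus_mask and each entry of object_masks are boolean arrays of the
    viewfinder image's size; an object wins when it covers more than half of
    the focus region (which also covers the fully-inside case)."""
    focus_area = focus_mask.sum()
    best_name, best_ratio = None, 0.0
    for name, mask in object_masks.items():
        ratio = np.logical_and(focus_mask, mask).sum() / focus_area
        if ratio > best_ratio:
            best_name, best_ratio = name, ratio
    # Below the 50% threshold no object is attributed; the device could then
    # suggest zooming, as the description recommends.
    return best_name if best_ratio > 0.5 else None
```

The same rule is reused later for attributing the second focus to a sub-object such as a door or window.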
S4, performing edge detection on the object, and drawing the detected edge.
Detecting the edges of the object includes detecting outer edge regions of the object and detecting edge regions of inner modules of the object. Rendering the detected edge includes rendering the detected edge region using vector graphics.
Illustratively, referring to fig. 3, when the position of the first photographing focus is recognized as belonging to building A, edge detection is performed on building A. Since the image of building A acquired with the shooting device is a plane image, the outer edge region of building A is detected: the outermost edge of building A is detected and its outer edge is drawn by vector graphic fitting. The inner edge regions in the plane image of building A are also detected; specifically, the edge regions of objects with edges, such as gates and windows, are detected, and the inner edges of building A are drawn through vector graphic fitting.
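A minimal stand-in for the edge-detection step is a gradient-magnitude threshold over the object's image patch. A real device would more likely use a detector such as Canny and then fit the detected edges with vector curves (e.g. polygon approximation); the half-of-maximum threshold here is an arbitrary assumption:

```python
import numpy as np

def edge_map(gray):
    """Mark edge pixels in a grayscale patch of the focused object.
    Gradient magnitude thresholding stands in for a real edge detector;
    the resulting pixels would then be fitted with vector graphics for
    drawing the outer and inner edges."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return mag > 0.5 * mag.max()
```

On a patch containing a sharp intensity step, the function marks the pixels on both sides of the step, which is where a vector outline would be fitted.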
It is noted that the pixel properties of the edge region have a variation during the first time period.
Illustratively, when the user aims the shooting device at the object to be shot and has not pressed the shutter, the edge will flash during the first time period to remind the user that the object with the flashing edge is the selected object and will be further processed.
For example, when it is determined that the object to which the first photographing focus belongs is the building a, an edge line of the building a may flash to prompt the user that the object to which the first photographing focus belongs is the building a.
It should be noted that the change of the pixel attribute does not refer only to the flashing of the edge line; it may take multiple forms, such as flashing of the whole image or a change of the image's color. Any method capable of prompting the user about the specific object determined by the first photographing focus is within the scope of protection of this patent.
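One simple way to vary the pixel attributes of the edge region over the first time period is to toggle the visibility of the traced edge on a fixed cycle. The half-second period is an arbitrary assumption; the patent leaves the concrete form of the variation open:

```python
def edge_overlay_visible(t, period=0.5):
    """Blink the traced edge: visible during the first half of each cycle,
    hidden during the second. A renderer would call this once per frame
    with the elapsed time t (seconds) and draw the edge overlay only when
    it returns True."""
    return (t % period) < (period / 2)
```

A color change instead of blinking would be the same idea with the return value selecting between two overlay colors.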
And determining a sub-object in the object according to the detected edge area, acquiring a selection command of the sub-object, and determining the position of the second focus according to the selection command.
The edge region of the object to which the first photographing focus belongs has been detected, and this region also contains several sub-objects. The user continues to select a focus, and the position of the second focus is determined according to the selection command.
Illustratively, the object at the first photographing focus is building A, whose outer contour contains several sub-objects, such as doors and windows. Among these, the user selects the object to be precisely chosen and clicks a region inside the image of building A to determine the position of the second focus.
All child objects having the location in the object are identified.
The object image contains several sub-objects, and the second focus is located on one of them; that sub-object is identified. As before, identification is achieved through texture recognition and color recognition.
The second focus is an area, and each sub-object of the objects occupies a plurality of areas respectively. A user manually selects a second focus, and when the second focus area is positioned in the area where a certain sub-object of the object is positioned, the certain sub-object is the sub-object where the second focus is positioned; when the second focus area intersects with the area where the sub-object is located, a certain sub-object with an intersection area accounting for more than 50% of the second focus area is the sub-object where the second focus is located; and when the second focus area comprises the inside of an area where a certain sub-object is located and the periphery of the sub-object is irrelevant to the sub-object, the sub-object is the sub-object where the second focus is located.
Illustratively, all the sub-objects in the object are recognized through texture recognition and color recognition; the object to which the first photographing focus acquired by the user belongs is building A, which contains several sub-objects such as doors and windows. The user selects a focus again and clicks the door, that is, the second focus is located on the door, and the system recognizes that the position of the second focus lies on a door. Specifically, the second focus is a region: when the door is clicked, the second focus region may lie inside the region where the door is located; it may intersect the door's region, with the intersection occupying more than half of the focus region; or it may contain the door's region, with that region occupying more than half of the focus region. In all of these situations the sub-object to which the second focus belongs is determined to be the door.
Determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Illustratively, the user determines the position of the second focus by clicking a region inside the image of building A; since the second focus lies inside the door, the door is the selected sub-object and the door is displayed in an enlarged manner.
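The enlarged display of the selected sub-object can be sketched as a crop of its bounding box followed by a simple magnification. Nearest-neighbour repetition is only a stand-in for the device's display scaler, and the bounding-box format is an assumption:

```python
import numpy as np

def enlarge_subobject(image, bbox, scale=2):
    """Crop the selected sub-object (e.g. the door) by its (x0, y0, x1, y1)
    bounding box and magnify it for display by integer pixel repetition."""
    x0, y0, x1, y1 = bbox
    patch = image[y0:y1, x0:x1]
    return patch.repeat(scale, axis=0).repeat(scale, axis=1)
```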
Determining sub-objects in the object from the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
If the sub-object is occluded so that the region enclosed by the edge graphics is only semi-closed, the focus must be reselected.
Illustratively, the user determines by clicking a region in the image of building A that the second focus belongs to a door, and the door is taken as the sub-object. If the edge graphics of the door can enclose a closed region, the door is the sub-object; if the edge graphics of the door cannot enclose a closed region, specifically if the door is partially occluded, or if the door is incomplete in the first photographing viewfinder image with part of it lying outside the image, the second focus needs to be reselected.
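The closed-versus-semi-closed distinction can be checked on the fitted edge polyline. In this illustrative sketch "closed" simply means the polyline ends where it starts, which is an assumption about how the vector edge is represented:

```python
def encloses_region(edge_points, tol=1e-6):
    """A traced edge counts as a valid sub-object only if it encloses a
    closed region; when occlusion or the image border leaves the outline
    open (a semi-closed region), the caller should ask the user to
    reselect the second focus."""
    (x0, y0), (xn, yn) = edge_points[0], edge_points[-1]
    return abs(x0 - xn) <= tol and abs(y0 - yn) <= tol
```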
The image shooting method further comprises the steps of acquiring a second image, identifying an object in the second image, matching the object in the second image with an object of the first focus in the first image to obtain a second object, and determining the position of the second object as a third focus position.
When the user aims the shooting device at the object to be shot without pressing the shutter, a dynamic image is displayed on the screen of the shooting device. After the first focus is determined, the shooting device may be displaced to some extent, which produces the second image.
A second image is acquired.
Illustratively, the shooting device is a pair of smart photographing glasses. When a user shoots with the glasses, photographing software is opened, the camera of the glasses is aimed at the object to be shot, and the image displayed on the screen of the glasses is the first image. After the first image is obtained, the first photographing focus is determined through a selection command, and the object at the first photographing focus is obtained; for convenience of the following description, this object is called the first object. When the user adjusts the shooting angle without pressing the shutter, the image obtained by the shooting device after the adjustment is the second image.
All objects on the second image are identified.
All objects in the second image are matched against the first object, to which the first focus belongs in the first image. If the first object still exists in the adjusted second image, its position and angle will have changed; this instance is the second object, and a position within the second object is determined as the third focus.
Illustratively, the first photographing viewfinder image acquired by the user with the shooting device includes several buildings, roads and trees. The user selects a focus and clicks building A, that is, the first photographing focus is located on building A, and through texture recognition and color recognition of building A the position of the first photographing focus is recognized as lying on a building. The user then adjusts the shooting angle, all objects in the second image are recognized and matched with the first object, and if building A is found in the second image, building A is taken as the second object and a position in building A is determined as the third focus. In this embodiment, the position of the third focus is the center point of the second object. This spares the user from repeated selection and can effectively improve the user experience.
It should be noted that the position of the third focal point may be any part within the second object, and in this embodiment, the third focal point is set at the central point of the second object.
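The re-matching step can be sketched with a deliberately crude appearance descriptor (mean intensity plus relative area). A real implementation would more plausibly use feature matching such as ORB or template matching; the descriptor and distance here are assumptions. The third focus is placed at the matched object's center point, as in this embodiment:

```python
import numpy as np

def third_focus(first_obj_mask, first_img, second_masks, second_img):
    """Match the first-focus object into the second framing image and place
    the third focus at the matched object's center. Masks are boolean
    arrays; images are grayscale arrays of the same shape."""
    def descr(mask, img):
        # Crude appearance descriptor: mean intensity and relative area.
        return (img[mask].mean(), mask.sum() / mask.size)
    d0 = descr(first_obj_mask, first_img)
    best, best_dist = None, float("inf")
    for name, mask in second_masks.items():
        d = descr(mask, second_img)
        dist = abs(d[0] - d0[0]) + abs(d[1] - d0[1])
        if dist < best_dist:
            best, best_dist = name, dist
    # Third focus: center point (x, y) of the matched second object.
    ys, xs = np.nonzero(second_masks[best])
    return best, (int(xs.mean()), int(ys.mean()))
```

Returning the center point matches the embodiment's choice, though the description notes that any position within the second object would do.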
Example 2
The present embodiment provides an image capturing apparatus, referring to fig. 4.
The image photographing apparatus includes a first view finding module, a first focusing module, a first recognition module, and an edge detection module.
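The four modules and their wiring can be sketched as a small pipeline. The module implementations below are placeholders; only the order framing, focusing, identification, edge detection comes from the description:

```python
class ImageShootingDevice:
    """Sketch of the device of Embodiment 2: framing -> focusing ->
    identification -> edge detection. Each module is supplied by the
    caller as a callable."""
    def __init__(self, framing, focusing, identify, edge_detect):
        self.framing = framing
        self.focusing = focusing
        self.identify = identify
        self.edge_detect = edge_detect

    def run(self, selection_command):
        image = self.framing()                    # first framing module
        focus = self.focusing(selection_command)  # first focusing module
        obj = self.identify(image, focus)         # first identification module
        return self.edge_detect(image, obj)       # edge detection module
```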
The first framing module can acquire a first photographing framing image.
The first photographing view finding image is an image displayed by the photographing equipment display module when a user uses the photographing equipment to photograph.
Illustratively, the shooting device is a pair of smart photographing glasses. When a user shoots with the smart photographing glasses, photographing software is opened, the camera of the glasses is aimed at the object to be shot, and the image displayed on the screen of the glasses is the first photographing viewfinder image.
In a typical photographing process, an image taken by a viewfinder of a camera is presented in a video mode, namely, a picture taken by the viewfinder is reflected in a real-time dynamic mode. At this time, the first photographed view image may be one frame in the video image or all frames of the video view image within a certain period of time.
The acquisition of these images can be obtained in real time by retrieving the viewfinder signal and processed by memory storage.
During shooting with the smart glasses, the framing image may not be displayed or completely displayed on the near-eye display device, but the framing image still exists in the virtual data of the system, and the above-mentioned acquisition method is still applicable.
The first focusing module can acquire the position of a first photographing focal point in the framing image.
Based on the general operation of taking a picture, the user usually selects a focus point during the framing to satisfy the focusing requirement. This selection is often accomplished by clicking on the screen through a touch operation or directing the smart device through a finger click.
In these operations, the user is usually required to click on the image of a specific object, and the system determines that the specific area in the image is a focused area, which is referred to as a focus point in the present application, according to the user's selection.
Obviously, the first photographing focus determined by the selection command is positioned on the first through image.
Preferably, one frame of through-image at the time of acquiring the selection command is taken as the first through-image. At the moment, the selected focus can be ensured to be completely corresponding to the image, processing of multi-frame images can be avoided, and the processing speed is improved.
A first recognition module capable of recognizing an object to which the position belongs in the through-view image.
Usually, a plurality of objects are included in one through image, and all the objects in the through image may be recognized in this step, or only the objects in the range of the area around the focal position may be recognized. The method mainly aims to determine an object at the position of a focus and provide a basis for subsequent processing of the object.
In this embodiment, it is preferable to describe the recognition of all the objects in the through-image as an example.
The first identification module receives the information from the first framing module, identifies all objects in the photographed viewfinder image, and identifies the object to which the first photographing focus belongs in the first viewfinder image. The object in which the position of the first photographing focus is located is determined as the object to which the photographing focus belongs in the viewfinder image, namely the first object.
The identification of the object in the photographed viewfinder image can be realized through texture identification and color identification.
The first photographing focus is a region, and each object in the first viewfinder image occupies its own region. After the user manually selects the first photographing focus: when the focus region lies entirely inside the region of an object in the first viewfinder image, that object is the object at the first photographing focus; when the focus region intersects the region of an object and the intersection accounts for more than 50% of the focus region, that object is the object at the first photographing focus; and when the focus region contains the region of an object and the surroundings of that object are unrelated to it, that object is the object at the first photographing focus. If the focus region contains several objects, the user is advised to adjust their size through the zoom function.
Illustratively, the first photographing viewfinder image acquired by the user with the shooting device includes several buildings, roads and trees. All objects in the first photographing viewfinder image are identified through texture recognition and color recognition. The user selects a focus and clicks building A, that is, the first photographing focus is located on building A, and the system recognizes that the position of the first photographing focus lies on a building. Specifically, the first photographing focus is a region: when building A is clicked, the focus region may lie inside the region where building A is located; it may intersect that region, with the intersection occupying more than half of the focus region; or it may contain the region of building A, with that region occupying more than half of the focus region.
An edge detection module to perform edge detection on the object and to delineate the detected edges.
The edge detection module receives the object determined by the first identification module and performs edge detection on it. Performing edge detection on the object includes detecting the outer edge region of the object and detecting the edge regions of the object's inner modules. Drawing the detected edges includes drawing the detected edge regions using vector graphics.
Illustratively, when the position of the first photographing focus is identified as belonging to building A, edge detection is performed on building A. Since the image of building A acquired by the photographing device is a plane image, the outer edge region of building A is detected: the outermost edge of building A is found and its outer edge is drawn by vector graphic fitting. The inner edge regions in the plane image of building A are also detected; specifically, the edge regions of objects with edges in the plane image, such as gates and windows, are detected, and the inner edges of building A are drawn by vector graphic fitting.
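As a minimal stand-in for the edge detection step, the sketch below marks edge pixels by gradient magnitude on a toy image; a production system would more likely use a Canny-style detector (e.g. OpenCV's `cv2.Canny`) and then fit vector contours to the result, as the patent describes. The function name and threshold are illustrative assumptions.

```python
# Gradient-magnitude edge detection on a synthetic "building": a bright
# rectangle on a dark background. Edges appear only along the border.
import numpy as np

def edge_mask(gray, threshold=0.4):
    """Return a boolean mask of edge pixels from a 2-D grayscale array."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0          # the "building" region

edges = edge_mask(img)
print(edges[5, 5], edges[10, 10])  # True False: border yes, interior no
```

The boolean mask corresponds to the detected "edge region"; fitting vector graphics to it (e.g. polygon approximation of traced contours) would be the subsequent drawing step.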
It is noted that the pixel properties of the edge region vary during the first time period.
Illustratively, when the user aims the photographing device at the object and has not pressed the shutter, during the first time period the edges flash to remind the user that the object with the flashing edges is the selected object and will be further processed.
For example, after it is determined that the object to which the first photographing focus belongs is building A, the edge lines of building A flicker to prompt the user that the object to which the first photographing focus belongs is building A.
Determining a sub-object of the object based on the detected edge region, and determining a position of the second focus based on the selected sub-object.
It should be noted that the change of the pixel attribute is not limited to the flashing of the edge lines and may take a variety of forms, such as flashing of the whole image or a change in image color; any method capable of prompting the user that a specific object has been determined by the first photographing focus falls within the protection scope of this patent.
And determining a sub-object in the object according to the detected edge area, acquiring a selection command of the sub-object, and determining the position of the second focus according to the selection command.
The edge region of the object to which the first photographing focus belongs is detected; this edge region further contains a plurality of sub-objects, so the user continues to select a focus, and the position of the second focus is determined according to the selection command.
Illustratively, the object of the first photographing focus is building A, whose outer contour encloses a plurality of sub-objects, such as doors and windows. The user selects the object to be precisely selected among them by clicking an area inside the image of building A, thereby determining the position of the second focus.
Further comprising a sub-object identification module for identifying all sub-objects of said location in said object.
The sub-object identification module receives the output of the edge detection module to obtain the region enclosed by the detected edges. The object image contains a plurality of sub-objects; the second focus is located on one sub-object of the object, and that sub-object is identified. Specifically, the identification of the sub-object is achieved through texture recognition and color recognition.
The second focus is an area, and each sub-object of the object occupies its own area. The user manually selects the second focus. When the second focus area lies inside the area where a sub-object of the object is located, that sub-object is the sub-object to which the second focus belongs. When the second focus area intersects the area where a sub-object is located, the sub-object whose intersection covers more than 50% of the second focus area is the sub-object to which the second focus belongs. When the second focus area contains the area where a sub-object is located and the surroundings of that sub-object are unrelated to it, that sub-object is the sub-object to which the second focus belongs.
Illustratively, the identification of all sub-objects in the object is triggered through texture identification and color identification. The object to which the first photographing focus acquired by the user through the photographing device belongs is building A, which includes a plurality of sub-objects, such as doors and windows. The user selects a focus again by clicking the door, that is, the second focus is located on the door, and the position of the second focus is identified as lying on a door. Specifically, the second focus is an area; when the door is clicked, the second focus area may be inside the area where the door is located; it may intersect the area where the door is located, with the intersection occupying more than half of the focus area; or it may contain the area where the door is located, with the door occupying more than half of the focus area. In each of these cases the sub-object to which the second focus belongs is determined to be the door.
Determining the position of the second focus according to the selected sub-object includes acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
Illustratively, the user determines the position of the second focus by clicking an area inside the image of building A; the second focus falls inside the door, so the door is the selected sub-object and is displayed in an enlarged manner.
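The "display the selected sub-object enlarged" step amounts to cropping the sub-object's region and upscaling it. The sketch below is a minimal illustration on a 2-D array; the function name, region coordinates, and the nearest-neighbour 2x upscale are assumptions, not details from the patent.

```python
# Crop a (x, y, w, h) sub-object region and enlarge it by integer
# nearest-neighbour upscaling.
import numpy as np

def enlarge_sub_object(image, region, factor=2):
    """Crop (x, y, w, h) from a 2-D image and upscale by nearest neighbour."""
    x, y, w, h = region
    crop = image[y:y + h, x:x + w]
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

frame = np.arange(100).reshape(10, 10)
door = enlarge_sub_object(frame, (2, 3, 4, 2))  # a 4x2 "door" region
print(door.shape)  # (4, 8)
```

A real viewfinder would interpolate (bilinear or better) rather than repeat pixels, but the crop-then-scale structure is the same.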
Determining sub-objects in the object according to the detected edge region includes determining an enclosed region surrounded by the redrawn edge pattern as a sub-object.
If the sub-object is occluded so that the region enclosed by its edge graphics is only semi-closed, the focus is reselected.
Illustratively, the user determines that the second focus belongs to a door by clicking an area in the image of building A, and the door is taken as the sub-object. If the edge graphics of the door enclose a closed region, the door is the sub-object; if they cannot enclose a closed region, specifically when the door is partially occluded or the door is incomplete in the first photographing view-finding image with part of it lying outside the image, the second focus needs to be reselected.
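The closed-versus-semi-closed test can be sketched as a simple check on the fitted edge path: a sub-object is accepted only if its edge polyline returns to its starting point. The polyline representation and tolerance are illustrative assumptions; real edge paths would come from the vector-fitting step.

```python
# Accept a sub-object only when its edge path encloses a closed region.

def is_closed(path, tol=1e-6):
    """A polyline encloses a closed region if its endpoints coincide."""
    if len(path) < 3:
        return False
    (x0, y0), (xn, yn) = path[0], path[-1]
    return abs(x0 - xn) <= tol and abs(y0 - yn) <= tol

full_door = [(0, 0), (4, 0), (4, 8), (0, 8), (0, 0)]  # complete outline
blocked_door = [(0, 0), (4, 0), (4, 8)]               # occluded: open path
print(is_closed(full_door), is_closed(blocked_door))  # True False
```

When `is_closed` fails, the selection logic would prompt the user to reselect the second focus, as described above.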
The image shooting device further comprises a second view finding module, a second recognition module and a second focusing module.
And the second viewing module is used for acquiring a second image.
When the user aims the photographing device at the object and has not pressed the shutter, a dynamic image is displayed on the screen of the photographing device; after the first focus is determined, the photographing device is displaced to some extent, forming the second image.
A second image is acquired.
Illustratively, the photographing device is a pair of smart photographing glasses. When the user takes a picture with the smart photographing glasses, photographing software is opened, the camera of the glasses is pointed at the object to be photographed, and the image displayed on the screen of the glasses is the first image. After the first image is obtained, the first photographing focus is determined through a selection command, and the object of the first photographing focus is obtained accordingly; for convenience of the following description, this object is referred to as the first object. When the user adjusts the shooting angle without pressing the shutter, the image obtained by the photographing device after the adjustment is the second image.
A second recognition module to recognize an object in a second image.
The second identification module receives the information transmitted by the second view finding module and identifies all objects in the second image.
The second focusing module matches the objects in the second image against the object of the first focus in the first image to obtain a second object, and determines a position in the second object as the third focus position.
The objects output by the second recognition module are matched against the first object output by the first recognition module; if the first object exists in the adjusted second image, its position and angle in the second image have changed, forming the second object. The matching result is input to the second focusing module to determine the third focus.
Illustratively, the first photographing view-finding image acquired by the user with the photographing device includes a plurality of buildings, roads, and trees. The user selects a focus by clicking building A, that is, the first photographing focus is located on building A, and through texture identification and color identification of building A, the position of the first photographing focus is identified as lying on a building. The user then adjusts the shooting angle; all objects in the second image are identified and matched against the first object. If building A is found in the second image, it is taken as the second object, and a position in building A is determined as the third focus. In this embodiment, the position of the third focus is the central point of the second object. The user thereby avoids repeated selection, which effectively improves the user experience.
It should be noted that the position of the third focal point may be any part in the second object, and in this embodiment, the third focal point is set at the central point of the second object.
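The re-focusing step above can be sketched as follows. This is a minimal stand-in under stated assumptions: the patent matches by texture and colour identification, while here a brute-force template match (sum of squared differences) locates the first object in the second image, and the third focus is placed at the centre of the matched region. All names and the synthetic camera shift are illustrative.

```python
# Re-locate the first object in the second image and place the third
# focus at the centre of the matched region.
import numpy as np

def locate(template, image):
    """Return (row, col) of the best template match by minimum SSD."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def third_focus(template, image):
    """Centre point of the matched region: the third focus position."""
    r, c = locate(template, image)
    th, tw = template.shape
    return (r + th // 2, c + tw // 2)

rng = np.random.default_rng(0)
first_image = rng.random((12, 12))
building = first_image[2:6, 3:8]                          # the "first object"
second_image = np.roll(first_image, (4, 2), axis=(0, 1))  # camera moved

print(third_focus(building, second_image))  # (8, 7)
```

With the patch shifted down 4 rows and right 2 columns, its top-left corner lands at (6, 5), so the centre of the 4x5 region, and hence the third focus, is (8, 7). A production matcher would tolerate scale and perspective changes rather than pure translation.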
The above-mentioned embodiments, objects, technical solutions and advantages of the present application are described in further detail, it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present application, and are not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (15)

1. An image capturing method, characterized in that the method comprises:
acquiring a first photographing view-finding image;
acquiring the position of a first photographing focus in the first photographing view-finding image;
identifying a subject to which the location belongs in the first photographed viewfinder image;
performing edge detection on the object and drawing the detected edge;
acquiring a second image;
identifying all objects in the second image;
matching the object in the second image with the object of the first photographing focus in the first photographing view-finding image to obtain a second object, and determining the position in the second object as a third focus position;
the step of identifying the object of which the position belongs to in the first framing photographed image includes identifying all objects in the first framing photographed image, and determining the object of which the position of the first photographing focus in the first framing photographed image is located as the object of which the first photographing focus belongs to in the first framing photographed image.
2. An image capture method according to claim 1, wherein detecting edges of the object comprises detecting outer edge regions of the object and detecting edge regions of inner blocks of the object.
3. An image capture method as claimed in claim 1, wherein delineating the detected edge comprises using vector graphics to draw the detected edge region.
4. An image capturing method according to claim 3, characterized in that the pixel properties of the edge region have a variation during the first period.
5. An image capturing method as claimed in claim 1, characterized in that, in dependence on the detected edge region, a sub-object of the object is determined and a selection command for the sub-object is acquired, the position of the second focus being determined in dependence on the selection command.
6. An image capture method according to claim 5, wherein determining the position of the second focus in dependence on the selected sub-object comprises obtaining a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
7. An image capture method according to claim 5, wherein determining sub-objects in the object based on the detected edge regions comprises determining closed regions surrounded by redrawn edge graphics as sub-objects.
8. An image capturing apparatus, characterized in that the apparatus comprises:
the first view finding module is used for acquiring a first photographing view finding image;
the first focusing module is used for acquiring the position of a first photographing focus in the first photographing view-finding image;
the first identification module is used for identifying the object of which the position belongs in the first photographed framing image;
an edge detection module for performing edge detection on the object and drawing the detected edge;
the second view finding module is used for acquiring a second image;
a second recognition module that recognizes all objects in the second image;
the second focusing module is used for matching the object in the second image with the object of the first photographing focus in the first photographing view-finding image to obtain a second object, and determining a position in the second object as a third focus position;
the step of identifying the object to which the position belongs in the first photographing view-finding image comprises identifying all objects in the first photographing view-finding image, and determining the object to which the first photographing focus is located in the position of the first photographing view-finding image as the object to which the first photographing focus belongs in the first photographing view-finding image.
9. An image capture device as claimed in claim 8 wherein edge detection of the object comprises detection of outer edge regions of the object and edge regions of inner modules of the object.
10. The image capture device of claim 8, wherein delineating the detected edge comprises using vector graphics to map the detected edge region.
11. An image capturing apparatus as claimed in claim 10,
the pixel attributes of the edge region have a variation during the first time period.
12. An image capture device as claimed in claim 8, characterized in that sub-objects of the object are determined on the basis of the detected edge region and the position of the second focus is determined on the basis of the selected sub-object.
13. An image capture device as claimed in claim 12 wherein determining the position of the second focus in dependence on the selected sub-object comprises acquiring a selection command to determine the selected sub-object and displaying the sub-object in an enlarged manner.
14. An image capture device as claimed in claim 12, wherein determining sub-objects in the object in dependence on the detected edge region comprises determining an enclosed region enclosed by the redrawn edge pattern as a sub-object.
15. An image capture device as claimed in claim 8, wherein the image capture device comprises smart glasses.
CN202111158351.1A 2021-09-30 2021-09-30 Image shooting method and image shooting device Active CN113766140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111158351.1A CN113766140B (en) 2021-09-30 2021-09-30 Image shooting method and image shooting device


Publications (2)

Publication Number Publication Date
CN113766140A CN113766140A (en) 2021-12-07
CN113766140B true CN113766140B (en) 2022-07-26

Family

ID=78798530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111158351.1A Active CN113766140B (en) 2021-09-30 2021-09-30 Image shooting method and image shooting device

Country Status (1)

Country Link
CN (1) CN113766140B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8559705B2 (en) * 2006-12-01 2013-10-15 Lytro, Inc. Interactive refocusing of electronic images
JP5527955B2 (en) * 2008-09-09 2014-06-25 キヤノン株式会社 IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
CN104104787B (en) * 2013-04-12 2016-12-28 上海果壳电子有限公司 Photographic method, system and handheld device
CN108076278B (en) * 2016-11-10 2021-03-19 斑马智行网络(香港)有限公司 Automatic focusing method and device and electronic equipment
JP7444163B2 (en) * 2019-03-28 2024-03-06 ソニーグループ株式会社 Imaging device, imaging method, and program
CN110049253B (en) * 2019-05-31 2021-12-17 努比亚技术有限公司 Focusing control method and device and computer readable storage medium
CN112995507A (en) * 2021-02-08 2021-06-18 北京蜂巢世纪科技有限公司 Method and device for prompting object position
CN113438396B (en) * 2021-06-17 2023-04-18 中兵勘察设计研究院有限公司 Shooting focusing device and method for digital photogrammetry of cultural relics


Similar Documents

Publication Publication Date Title
CN111402135B (en) Image processing method, device, electronic equipment and computer readable storage medium
KR100556856B1 (en) Screen control method and apparatus in mobile telecommunication terminal equipment
CN108933899B (en) Panorama shooting method, device, terminal and computer readable storage medium
WO2018201809A1 (en) Double cameras-based image processing device and method
WO2021027537A1 (en) Method and apparatus for taking identification photo, device and storage medium
US20060078224A1 (en) Image combination device, image combination method, image combination program, and recording medium containing the image combination program
CN116582741B (en) Shooting method and equipment
CN103929596A (en) Method and device for guiding shooting picture composition
US20220383508A1 (en) Image processing method and device, electronic device, and storage medium
CN113888437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2018094648A1 (en) Guiding method and device for photography composition
CN116324878A (en) Segmentation for image effects
CN107231524A (en) Image pickup method and device, computer installation and computer-readable recording medium
US20200120269A1 (en) Double-selfie system for photographic device having at least two cameras
CN112017137A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112036209A (en) Portrait photo processing method and terminal
WO2019084756A1 (en) Image processing method and device, and aerial vehicle
CN113658197B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN117918057A (en) Display device and device control method
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
WO2022261828A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN106803920A (en) A kind of method of image procossing, device and intelligent meeting terminal
CN113766140B (en) Image shooting method and image shooting device
CN110312075B (en) Device imaging method and device, storage medium and electronic device
CN113395456B (en) Auxiliary shooting method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant