CN112995507A - Method and device for prompting object position - Google Patents


Info

Publication number
CN112995507A
Authority
CN
China
Prior art keywords
image, target object, head-mounted device, prompt message
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110184259.6A
Other languages
Chinese (zh)
Inventor
苏韶华
穆明丰
Current Assignee
Beijing Beehive Century Technology Co ltd
Original Assignee
Beijing Beehive Century Technology Co ltd
Application filed by Beijing Beehive Century Technology Co ltd
Priority to CN202110184259.6A
Publication of CN112995507A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635 Region indicators; Field of view indicators
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides a method and an apparatus for prompting the position of an object. The method may include: determining a target object; when a first image acquired by a head-mounted device does not contain the target object, acquiring a second image that is acquired by the head-mounted device and does contain the target object, where the field angle of the second image is greater than that of the first image; and generating a first prompt message according to the second image, the first prompt message instructing the user to move the head-mounted device so that the first image re-acquired after the movement contains the target object. With this technical solution, the user can be prompted to move the head-mounted device, making it easier to aim the lens at a target object of interest and capture a picture containing it, which improves the user's shooting experience.

Description

Method and device for prompting object position
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a method and an apparatus for prompting a position of an object.
Background
With the development of head-mounted devices, their photographing function has become widespread, and to improve the photographing effect more and more head-mounted devices support zoom photographing. During zoom shooting, the larger the focal length, the smaller the field angle. When shooting a distant object, telephoto shooting can capture a magnified picture, but because the field angle is small it is difficult to aim the lens accurately at the target object; if the target object is lost, the user has to search for it repeatedly. Moreover, because a head-mounted device is worn on the user's head, it is more susceptible than handheld devices with shooting functions, such as mobile phones and cameras, to head rotation or other movement triggered by sight, hearing, smell, and other stimuli, so the lens moves along with the head and is even harder to keep aimed at the target object to be shot.
Disclosure of Invention
In view of this, the present disclosure provides a method and an apparatus for prompting the position of an object, so as to indicate to the user how to move the lens when it is not aimed at the target to be photographed, thereby helping the user aim the lens at that target.
Specifically, the present disclosure is realized by the following technical scheme:
according to a first aspect of the present disclosure, a method for prompting a position of a subject is provided, which is applied to a head-mounted device, and includes:
determining a target object;
acquiring a second image which is acquired by the head-mounted device and contains the target object under the condition that the first image acquired by the head-mounted device does not contain the target object; wherein a field angle of the second image is greater than a field angle of the first image;
and generating a first prompt message according to the second image, wherein the first prompt message is used for indicating that the head-mounted device is moved, so that the first image acquired by the head-mounted device again after moving contains the target object.
According to a second aspect of the present disclosure, an apparatus for prompting a position of a subject is provided, which is applied to a head-mounted device, and includes:
a determination unit configured to determine a target object;
an acquisition unit, configured to acquire, when the target object is not included in a first image acquired by the head-mounted device, a second image acquired by the head-mounted device and including the target object; wherein a field angle of the second image is greater than a field angle of the first image;
and a generating unit, configured to generate a first prompt message according to the second image, where the first prompt message is used to instruct to move the head-mounted device, so that a first image acquired by the head-mounted device again after moving includes the target object.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method as described in the embodiments of the first aspect above by executing the executable instructions.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method as described in the embodiments of the first aspect above.
According to the above technical solution, after the target object is determined, when the lens is not aimed at the target object the head-mounted device can generate a prompt message for the user from an acquired image with a larger field angle that contains the target object, so that the user can aim the lens at the target object simply by moving the head-mounted device as prompted, improving the shooting experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of prompting a location of an object in accordance with an exemplary embodiment of the present disclosure;
fig. 2 is a schematic view of a head-mounted device shooting scene provided according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method for prompting a location of an object according to an embodiment of the present disclosure;
FIG. 4 is a display interface for tele photography provided in accordance with an embodiment of the present disclosure;
fig. 5 is an object position prompt interface when the smart glasses perform telephoto shooting according to the embodiment of the disclosure;
fig. 6 is a subject position prompt interface for performing telephoto shooting by using another pair of smart glasses according to the embodiment of the disclosure;
fig. 7 is a subject position prompt interface for performing telephoto shooting by using another pair of smart glasses according to the embodiment of the disclosure;
FIG. 8 is a schematic diagram illustrating an architecture of an electronic device for hinting at object locations in accordance with an exemplary embodiment of the present disclosure;
FIG. 9 is a block diagram illustrating an apparatus for prompting a location of an object in accordance with an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
The embodiments of the present disclosure are explained in detail below.
The head-mounted device may be worn on the user's head to display visual information within the user's field of view, and may take various forms such as glasses, a helmet, or a hat, which the present disclosure does not limit. Because it is worn on the head, a head-mounted device is, compared with handheld devices with shooting functions such as mobile phones and cameras, easily rotated or moved by stimuli such as sight, hearing, and smell, so the lens moves along with the head and is harder to keep aimed at the target object to be shot. Based on this, the present disclosure provides a method and an apparatus for prompting the position of an object, so that the user can aim the lens at the target object simply by moving the head-mounted device as prompted, improving the shooting experience.
Fig. 1 is a flowchart illustrating a method of prompting a location of an object according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method is applied to a head-mounted device, and may include the following steps:
step 102: a target object is determined.
In one embodiment, determining the target object includes: determining the target object selected by the user according to a detected user trigger operation. For example, the head-mounted device may display a preview image captured by the current lens, and the user may directly tap the target object to be captured in the preview image through a touch operation. Alternatively, the head-mounted device may track the user's eyeballs, determine the focus point of the user's line of sight on the preview image, and, when the line of sight stays on an object for a preset duration, determine that object as the target object. Alternatively, the head-mounted device may determine the target object from the user's speech: for example, the user only needs to say "pine", and the head-mounted device can determine the "pine" in the preview image as the target object based on semantic analysis and image content analysis. It should be understood that these determination methods are merely examples, and the disclosure is not limited thereto.
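As an illustrative sketch only (the patent does not specify an implementation), the gaze-dwell selection described above could look as follows; the sampling rate, dwell threshold, and bounding-box format are all assumptions:

```python
# Hypothetical sketch of gaze-dwell target selection.
# Assumptions: gaze samples arrive at a fixed rate, and each candidate
# object is described by an axis-aligned bounding box in preview-image
# pixel coordinates.

DWELL_SECONDS = 1.0   # assumed preset duration before an object is selected
SAMPLE_HZ = 30        # assumed gaze-tracker sampling rate

def select_by_gaze(gaze_samples, objects):
    """Return the name of the first object the gaze dwells on long enough.

    gaze_samples: iterable of (x, y) gaze points, one per sample period.
    objects: dict name -> (x_min, y_min, x_max, y_max) bounding box.
    """
    needed = int(DWELL_SECONDS * SAMPLE_HZ)
    counts = {name: 0 for name in objects}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in objects.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                if counts[name] >= needed:
                    return name
            else:
                counts[name] = 0  # dwell must be continuous
    return None
```

With 30 consecutive samples inside an object's box (one second at the assumed rate), that object is selected; a glance that leaves the box resets the dwell count.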
In one embodiment, determining the target object includes: determining a target object matching object features preset in the head-mounted device. For example, object features such as human-body features, building features, and natural-scenery features are preset in the head-mounted device; when the shooting function is active and the lens captures a scene matching one of the preset object features, that scene can automatically be determined as the target object.
In an embodiment, the target object includes a dynamic object whose position is movable and/or a static object whose position is fixed. When the target object is a dynamic object, it is tracked in real time during shooting according to its feature information to obtain its real-time position in the second image. When the target object is a static object, its real-time position can be determined from the movement information of the head-mounted device and its position in the second image at the time the second image was acquired. When the target object includes both a dynamic object with a movable position and a static object with a fixed position, the head-mounted device can either track every target object in real time, or track only the dynamic object in real time while the device is stationary and update the static object's position according to the device's movement afterwards. By keeping the position information of the target object up to date, the head-mounted device can determine the direction of the prompt message from the position of the target object in the second image and the area in the second image corresponding to the first image.
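A minimal sketch of the static-object position update described above, under a simple small-angle approximation in which each degree of device rotation shifts the image by a fixed number of pixels; the linear model and all parameter values are assumptions for illustration:

```python
# Hypothetical sketch: update a static object's pixel position in the
# wide-angle (second) image from gyroscope-reported device rotation.
# Assumption: small rotations, so the shift is approximately linear
# (pixels per degree = image size / field of view).

def updated_static_position(pos, yaw_deg, pitch_deg, img_w, img_h,
                            fov_h_deg, fov_v_deg):
    """pos: (x, y) of the object when the second image was acquired.
    yaw_deg/pitch_deg: device rotation since then (right/up positive).
    Returns the estimated new (x, y), or None if it left the frame."""
    px_per_deg_x = img_w / fov_h_deg
    px_per_deg_y = img_h / fov_v_deg
    # Rotating the device right moves scene content left in the image;
    # rotating it up moves scene content down (image y grows downward).
    x = pos[0] - yaw_deg * px_per_deg_x
    y = pos[1] + pitch_deg * px_per_deg_y
    if 0 <= x < img_w and 0 <= y < img_h:
        return (x, y)
    return None
```

Returning None corresponds to the case discussed later in which the target object has moved out of the second image and a second prompt message is needed.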
In an embodiment, the number of target objects whose positions are determined and prompted may be one or more, which the present disclosure does not limit. For example, when multiple object features are preset on the head-mounted device and the lens captures several scenes matching them under the shooting function, those scenes can all be automatically determined as target objects and respective prompt messages generated; different labels can be attached in the prompt messages to different target objects, such as human bodies and natural scenery, so that the user can move the head-mounted device following the prompt for the object of interest.
By having the target object selected autonomously in advance, the head-mounted device learns the user's shooting intent, which makes it convenient to subsequently prompt the user with the target object's position and to intelligently assist the user in moving the device to find the target object in the telephoto image.
Step 104: acquiring a second image which is acquired by the head-mounted device and contains the target object under the condition that the first image acquired by the head-mounted device does not contain the target object; wherein a field angle of the second image is greater than a field angle of the first image.
In one embodiment, acquiring a second image that is acquired by the head-mounted device and contains the target object includes: acquiring a second image shot at a historical moment, at a relatively small focal length, by a zoom camera carried by the head-mounted device; the first image is an image shot by the same zoom camera at a relatively large focal length. The head-mounted device can determine whether an acquired image contains the target object according to the position information obtained when the target object was determined; it can also obtain graphic feature information of the target object when it is determined, such as its contour and color, and judge whether an acquired image contains that feature information. The interval between the historical moment and the current moment should be smaller than a preset duration threshold, or the movement distance or rotation angle between the two moments should be smaller than a preset threshold, so that the acquired second image remains a sufficient reference. Because the content of the first image shot after the user increases the focal length is contained in the second image obtained at the historical moment, and the scene at the current moment does not deviate greatly from that second image, the device can, with a single lens, guide shooting after zooming in according to the large-field-angle image obtained before zooming.
In one embodiment, the head-mounted device is equipped with two cameras, and acquiring a second image that is acquired by the head-mounted device and contains the target object includes: acquiring the first image shot by a first camera carried by the head-mounted device, and acquiring the second image shot by a second camera carried by the head-mounted device, where the focal length of the first camera is larger than that of the second camera. For example, the head-mounted device is configured with a wide-angle camera and a telephoto camera; while the telephoto camera shoots the first image, the wide-angle camera simultaneously acquires a second image with the same shooting angle and direction. The field angle of the second image is larger than that of the first image, so the content of the second image contains the content of the first image. It should be understood that "wide-angle" and "telephoto" here describe only the relative relationship between the two focal lengths, and the present disclosure does not limit the form or number of cameras. By acquiring a second image with a larger field of view when the target object is lost during telephoto shooting, the head-mounted device can generate a direction prompt from the target object contained in the large-field-angle image.
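Under a pinhole-camera assumption (an illustrative model, not specified in the patent), the region of the wide-angle second image covered by the telephoto first image can be estimated from the two field angles; both cameras are assumed to share an optical axis and aspect ratio:

```python
import math

# Hypothetical sketch: estimate which centered region of the wide-angle
# image corresponds to the telephoto image, given the field angles.
# Pinhole model: half-width on the sensor scales with tan(fov / 2).

def tele_region_in_wide(wide_w, wide_h, wide_fov_deg, tele_fov_deg):
    """Return (x0, y0, x1, y1) of the telephoto footprint, assuming both
    cameras share an optical axis and the same aspect ratio."""
    scale = math.tan(math.radians(tele_fov_deg) / 2) / \
            math.tan(math.radians(wide_fov_deg) / 2)
    half_w = wide_w * scale / 2
    half_h = wide_h * scale / 2
    cx, cy = wide_w / 2, wide_h / 2
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

A narrower telephoto field angle yields a smaller centered box, which is the area that would later be framed in the second image for the user.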
In an embodiment, the first image and the second image acquired by the head-mounted device may each be a single-frame picture, a video, or a real-time preview image output by a camera, and their types may be the same or different; the present disclosure does not limit this. For example, the zoom camera of the head-mounted device may take a single-frame picture at a relatively small focal length at a historical moment as the second image, and output a real-time preview at a relatively large focal length at the current moment as the first image; or the head-mounted device may carry a first camera with a relatively large focal length and a second camera with a relatively small focal length that shoot simultaneously, the video from the first camera serving as the first image and the video from the second camera as the second image. It should also be understood that "acquiring the second image" in the above embodiments does not mean "capturing the second image": the capture may be completed before or after it is determined that the first image does not contain the target object, and the present disclosure does not limit the time at which the second image is acquired.
Step 106: and generating a first prompt message according to the second image, wherein the first prompt message is used for indicating that the head-mounted device is moved, so that the first image acquired by the head-mounted device again after moving contains the target object.
In an embodiment, generating a first prompt message according to the second image includes: determining, within the second image, the relative positional relationship between the target object and the content of the first image; and generating a first prompt message containing a first movement direction, the first movement direction being the direction, determined from that positional relationship, pointing from the content of the first image toward the target object. Determining this relative positional relationship may include: acquiring in real time a first position of the target object in the second image and a second position of the first image's content in the second image, and comparing the two. The position of the first image's content in the second image may be obtained by extracting feature information of the object currently shot in the first image, searching the second image for the object matching that feature information, and taking its position in the second image as the position of the first image's content. For example, if the object currently shot in the first image is a pine tree, the device searches the second image for the pine tree's feature information and takes the position of the matching pine tree in the second image as the position of the first image's content.
Alternatively, the whole first image may be matched against the second image, and the position of the matching area in the second image taken as the position of the first image's content. Alternatively, the position of the first image within the second image may be computed from pre-recorded information about the relative placement of the cameras that capture the first and second images, together with each camera's parameters such as its field angle and focal length, including a focal length adjusted by the user.
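The movement-direction step above can be sketched as follows; the coordinate convention, the eight-way direction labels, and the dead-zone threshold are illustrative assumptions rather than anything specified in the patent:

```python
# Hypothetical sketch: derive the first movement direction from the
# target's position and the first image's footprint in the second image.
# Image coordinates: x grows rightward, y grows downward.

def movement_direction(target_xy, region, dead_zone=10):
    """target_xy: (x, y) of the target object in the second image.
    region: (x0, y0, x1, y1) footprint of the first image's content.
    Returns e.g. 'up-left' or 'right', or None if already inside."""
    x, y = target_xy
    x0, y0, x1, y1 = region
    if x0 <= x <= x1 and y0 <= y <= y1:
        return None  # target already inside the telephoto footprint
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    dx, dy = x - cx, y - cy
    horiz = 'right' if dx > dead_zone else 'left' if dx < -dead_zone else ''
    vert = 'down' if dy > dead_zone else 'up' if dy < -dead_zone else ''
    return '-'.join(p for p in (vert, horiz) if p) or None
```

The returned label could then drive the displayed arrow or voice prompt described below.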
In one embodiment, the prompt message may instruct the user to move the head-mounted device through display, voice playback, other forms, or a combination of them. The display may consist of text, patterns (such as arrows), or a combination of both, which the present disclosure does not limit. For example, the head-mounted device generates the first prompt message according to the second image and displays in the first image an arrow pointing toward the target object, so the user only needs to move the head-mounted device along the arrow to find the target object and shoot.
In an embodiment, the second image is displayed alongside the first image, and the area corresponding to the first image is marked in the second image. The display area of the second image may be a preset fixed area, or a variable area that can be moved, zoomed, or closed by user operation. For example, the head-mounted device may by default display both images, with the first image as the main picture and the second image overlaid in its upper-left corner; when the user wants to see the part of the first image covered by the second image, the user can drag the second image's display area to the lower-right corner of the first image or another position.
In an embodiment, marking the area corresponding to the first image in the second image may be done by framing that area, and the framed area may be a fixed region of the second image or an arbitrary region, which the present disclosure does not limit. For example, when the head-mounted device shoots with two cameras fixed in position and sharing the same shooting angle and direction, the first image shot by the first camera always falls within a fixed area of the second image shot by the second camera; when the second image was shot by a zoom camera at a relatively small focal length at a historical moment and the first image is shot by the same camera at a relatively large focal length, the framed first-image area in the second image may shift as the user moves. By displaying the two images simultaneously and marking the first image's area in the second image, the user can see the larger-field-angle second image while previewing and shooting the first image, and can intuitively judge the current first image's position relative to the target object by observing the marked area and the target object displayed in the second image. This helps the user adjust the lens angle to move the target object into the marked area so that it appears in the first image, improving shooting efficiency.
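A toy sketch of the framing step, drawing the first image's area onto a second-image buffer represented as nested lists; a real device would render on a display surface, so everything here is an assumption for illustration:

```python
# Hypothetical sketch: frame the first image's area inside the second image
# by writing a border value into a 2D pixel buffer (nested lists of ints).

def frame_region(buf, region, border_val=255):
    """buf: second image as rows of pixel values (modified in place).
    region: integer (x0, y0, x1, y1) area corresponding to the first image."""
    x0, y0, x1, y1 = region
    for x in range(x0, x1 + 1):
        buf[y0][x] = border_val  # top edge
        buf[y1][x] = border_val  # bottom edge
    for y in range(y0, y1 + 1):
        buf[y][x0] = border_val  # left edge
        buf[y][x1] = border_val  # right edge
    return buf
```

Only the border pixels are overwritten, so the content inside the marked area stays visible, matching the "frame out the region" behavior described above.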
In an embodiment, a second prompt message is generated when the second image does not contain the target object; the second prompt message instructs the user to move the head-mounted device so that the second image re-acquired after the movement contains the target object. Because the field angle of the first image is relatively small, the user often has to turn with a relatively large amplitude to track a subject of interest such as the target object, so the target object can easily be rotated out of even the second image. When the target object has moved out of the second image, a second prompt message instructing the user to move the head-mounted device can be generated; once the user has moved the device and the target object is back in the second image, the method of steps 104 and 106 can be performed again and a first prompt message generated to indicate the target object's position, so that the user can move the device to change what the first image captures and conveniently find the desired target for telephoto shooting.
In an embodiment, generating the second prompt message includes: acquiring movement information recorded by an attitude sensor carried by the head-mounted device, the movement information recording the historical movement direction of the device when the target object moved out of the second image; and generating a second prompt message containing a second movement direction, the second movement direction being the reverse of that historical movement direction. The attitude sensor may include motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass, which can record the device's movement angle, direction, and distance. In practical zoom shooting, factors such as camera shake or moving too far often cause the user to move the target object out of the lens's range. In that case, the user can move the head-mounted device in the direction opposite to the recorded historical movement, that is, back toward the position at which the second image containing the target object was acquired. Once the target object is back in the second image, the method of steps 104 and 106 can be performed and a first prompt message generated to indicate the target object's position, making it convenient to find the desired target for telephoto shooting.
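The reverse-direction prompt could be sketched as follows; the gyroscope sample format and the way rotation is accumulated are assumptions for illustration, not the patent's specified design:

```python
# Hypothetical sketch: derive the second movement direction by reversing
# the device rotation accumulated since the target left the second image.
# Each gyro sample is (yaw_delta_deg, pitch_delta_deg); right/up positive.

def reverse_prompt(gyro_samples):
    """Return (horizontal, vertical) move-back directions for the user,
    e.g. ('left', 'up'), or None if no net movement was recorded."""
    yaw = sum(s[0] for s in gyro_samples)
    pitch = sum(s[1] for s in gyro_samples)
    # Prompt the opposite of the accumulated movement.
    horiz = 'left' if yaw > 0 else 'right' if yaw < 0 else ''
    vert = 'down' if pitch > 0 else 'up' if pitch < 0 else ''
    if not horiz and not vert:
        return None
    return (horiz, vert)
```

For instance, if the device drifted right and down since the target left the frame, the prompt tells the user to move left and up.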
With the above technical solution, the target object to be shot is determined in advance, and when it cannot be captured a prompt message indicating the direction of its position is shown to the user, so that the target object is easier to find when shooting at a small field angle with a long focal length, improving the telephoto shooting experience.
The following specifically explains the method for prompting the position of the object provided by the embodiment of the present disclosure with reference to the drawings and application scenarios.
Fig. 2 is a schematic view of a shooting scene of a head-mounted device according to an embodiment of the present disclosure; taking smart glasses as an example, the method for prompting the position of a subject is described in detail below. As shown in fig. 2, the smart glasses 201 carry a telephoto camera 201a and a wide-angle camera 201b, and the scene 202 is the target scene photographed by the smart glasses 201.
Still taking the shooting scene shown in fig. 2 as an example, the smart glasses 201 can perform telephoto shooting of the scene 202 according to the user's requirement. Fig. 3 is a flowchart of a method for prompting the position of a subject according to an embodiment of the present disclosure. As shown in fig. 3, the process of prompting the position of the target object to assist zoom shooting includes the following steps:
step 302: a target object is determined.
The smart glasses 201 may determine the target object selected by the user according to a detected user trigger operation. Still taking fig. 2 as an example, the smart glasses 201 display a preview picture of the scene 202 to the user when the shooting function is active, and the user can select the "tree" in the preview image as the target object 203 to be shot by tapping the touch area on the frame of the smart glasses 201.
Optionally, the smart glasses 201 may also determine a target object matching object features preset in the head-mounted device. For example, a plurality of object features such as "plant" and "animal" are preset in the smart glasses 201; when the shooting function is started, the smart glasses 201 detect whether objects matching the preset features exist in the scene 202, and automatically determine the "tree" and the "bird" in the scene as the target object 203 and the target object 204.
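Automatic target selection by preset features can be sketched as a simple filter over detector output. The detector itself, its label set, and the tuple layout below are assumptions for illustration, not anything specified by the patent.

```python
# Hypothetical preset object features, as in the "plant"/"animal" example.
PRESET_FEATURES = {"plant", "animal"}

def auto_select_targets(detections):
    """detections: (label, category, bbox) tuples from some object
    detector; keep those whose category matches a preset feature."""
    return [d for d in detections if d[1] in PRESET_FEATURES]

# Toy scene: the tree and bird match; the bench does not.
scene = [("tree", "plant", (100, 200, 50, 80)),
         ("bird", "animal", (400, 120, 20, 20)),
         ("bench", "furniture", (250, 300, 60, 40))]
targets = auto_select_targets(scene)
```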
Step 304: the smart glasses 201 capture a first image.
The user adjusts the shooting focal length of the smart glasses 201 according to the shooting requirement and enlarges the viewfinder picture. The smart glasses 201 capture the first image at the adjusted, longer focal length.
Step 306: and judging whether the first image contains a target object.
The smart glasses 201 determine whether the first image, acquired after enlarging the viewfinder picture, includes the target object.
Step 308: a second image is acquired.
When the first image acquired at the longer focal length does not include the target object, the smart glasses 201 acquire a second image with a larger field angle, so that the user can be prompted to move the smart glasses 201 according to the position of the target object in the second image.
In an embodiment, the smart glasses 201 may display the first image and the second image at the same time, and mark the region corresponding to the first image within the second image. Fig. 4 is a display interface for telephoto shooting provided according to an embodiment of the present disclosure. In this embodiment, the smart glasses 201 may simultaneously display to the user a first image 401 captured by the telephoto camera 201a and a second image 402 captured by the wide-angle camera 201b. As shown in fig. 4, the first image 401 is the main image on the display interface of the smart glasses 201, a thumbnail of the second image 402 is displayed in its upper left corner, and the area 403 corresponding to the first image 401 is marked in the second image 402. By observing the second image 402, the user can learn where, within the larger field angle of the second image 402, the area corresponding to the currently captured first image 401 with its smaller field angle lies.
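One way to compute the marked area 403 is to scale the wide image by the ratio of the two cameras' fields of view. The sketch below assumes an idealized pinhole model with both cameras sharing an optical axis, so the telephoto footprint is centred in the wide image; the function name and the 80°/20° field angles are illustrative, not values from the patent.

```python
import math

def tele_region_in_wide(wide_w, wide_h, fov_wide_deg, fov_tele_deg):
    """Return (x, y, w, h) of the telephoto camera's footprint inside the
    wide-angle image: under a shared-axis pinhole model, the footprint is
    centred and scaled by the tangent ratio of the half field angles."""
    ratio = (math.tan(math.radians(fov_tele_deg) / 2)
             / math.tan(math.radians(fov_wide_deg) / 2))
    w, h = wide_w * ratio, wide_h * ratio
    return ((wide_w - w) / 2, (wide_h - h) / 2, w, h)

# Example: 1920x1080 wide frame, 80-degree wide FOV, 20-degree tele FOV.
x, y, w, h = tele_region_in_wide(1920, 1080, 80.0, 20.0)
```

In a real device the two lenses are offset, so a calibration-based homography would replace the centred assumption.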
Step 310: and judging whether the second image contains the target object.
Step 312: a first prompting message is generated.
When the second image contains the target object, a first prompt message instructing the user to move the smart glasses 201 is generated according to the second image. For details of generating the first prompt message according to the second image, refer to the description of step 106 in the embodiment shown in fig. 1; they are not repeated here.
The smart glasses 201 generate the first prompt message according to the relative position relationship, in the second image, between the target object 203 and the captured content of the first image. Fig. 5 is a prompt interface showing the position of an object when the smart glasses perform telephoto shooting according to an embodiment of the disclosure. The screen display system of the smart glasses 201 displays the currently output interface content, where the first image 501 is captured by the telephoto camera 201a of the smart glasses 201 and the second image 502 is captured by the wide-angle camera 201b. The user adjusts the smart glasses 201 to select the target object "tree" 203 for enlarged shooting. When the "tree" 203 is not captured in the first image 501 with its smaller field angle, the smart glasses 201 may generate a prompt message pointing toward the "tree" 203 according to the relative position relationship, in the second image 502, between the "tree" 203 and the captured content of the first image 501. As shown in fig. 5, the captured content of the first image 501 is located in the central area of the second image 502, and the "tree" 203 is located to the left of the first image 501; therefore, a prompt arrow 503 pointing to the left is generated to instruct the user to move the smart glasses 201 to the left, so that the moved smart glasses 201 can capture the "tree" 203 through the telephoto camera 201a.
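The arrow-selection step can be sketched by quantizing the vector from the centre of the first image's footprint to the target. The function name, coordinates, and eight-direction labels below are illustrative assumptions; the patent only requires that the first moving direction point from the captured content toward the target.

```python
def first_prompt_direction(target_center, region_center):
    """Quantize the vector from the centre of the first image's footprint
    (within the second image) to the target object's centre into an arrow
    label. Image coordinates: x grows rightward, y grows downward."""
    dx = target_center[0] - region_center[0]
    dy = target_center[1] - region_center[1]
    horiz = "right" if dx > 0 else ("left" if dx < 0 else "")
    vert = "down" if dy > 0 else ("up" if dy < 0 else "")
    return (vert + " " + horiz).strip() or "centered"

# Tree left of the footprint centre, bird to its upper right (cf. Figs. 5-6).
tree_arrow = first_prompt_direction((300, 540), (960, 540))
bird_arrow = first_prompt_direction((1500, 200), (960, 540))
```

A real implementation would likely add a dead zone around the centre so that small offsets do not trigger a prompt.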
Optionally, the smart glasses 201 may also prompt the positions of a plurality of target objects at the same time. Fig. 6 is an object-position prompt interface of another pair of smart glasses performing telephoto shooting according to an embodiment of the present disclosure. The smart glasses 201 determine, according to the preset object features, the target object "tree" 203 and the target object "bird" 204 matching those features, where the "tree" 203 is a static object with a fixed position and the "bird" 204 is a dynamic object whose position can change; during capture of the second image 602 by the wide-angle camera 201b, the smart glasses 201 therefore need to determine the position of the target object 204 in real time. Prompt messages indicating each target object are generated according to the relative position relationship, in the second image 602, among the "tree" 203, the "bird" 204, and the captured content of the first image 601. As shown in fig. 6, the captured content of the first image 601 is located in the central area of the second image 602, the "tree" 203 is located to the left of the first image 601, and the "bird" 204 is located to its upper right. Therefore, a prompt arrow 603 pointing to the left is generated with a "plant" label, indicating that the user can move the smart glasses 201 to the left to capture the "tree" 203, and a prompt arrow 604 pointing to the upper right is generated with an "animal" label, indicating that the user can move the smart glasses 201 to the upper right to capture the "bird" 204.
Step 314: the first image is reacquired.
After the smart glasses 201 are moved according to the first prompt message, the first image may be reacquired so that the reacquired first image includes the target object.
Step 316: and generating a second prompting message.
A second prompt message instructing the user to move the smart glasses 201 is generated when the second image does not contain the target object. The second prompt message may be generated by acquiring the movement information recorded by the attitude sensor of the smart glasses 201, determining the historical movement direction corresponding to the moment the target object moved out of the second image, and taking the reverse of that historical movement direction as the second moving direction. Fig. 7 is a position prompt interface of an object when another pair of smart glasses performs telephoto shooting according to an embodiment of the disclosure. The screen display system of the smart glasses 201 displays the currently output interface content, where the first image 701 is captured by the telephoto camera 201a of the smart glasses 201 and the second image 702 is captured by the wide-angle camera 201b. The user adjusts the smart glasses 201 to shoot the target object 203 enlarged; during the adjustment, the user moves the smart glasses 201 such that the target object 203 moves out of their shooting range. The image 704 is a history image acquired by the smart glasses 201 before the move; after the target object 203 has moved out of the shooting range, it is captured neither in the first image 701 with its smaller field angle nor in the second image 702 with its larger field angle.
By reading the movement information recorded by the attitude sensor they carry, the smart glasses 201 learn that they moved upward by a certain distance when the target object 203 moved out of the second image 702. A prompt arrow 703 can therefore be generated to instruct the user to move the smart glasses 201 downward, so that the moved smart glasses 201 can capture the target object 203 again through the wide-angle camera 201b.
Step 318: the second image is reacquired.
The smart glasses 201 reacquire the second image after moving according to the second prompt message, and repeat steps 310 to 318 above.
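The overall flow of steps 302 to 318 can be summarized as a loop. The sketch below stubs out the capture, detection, and prompt operations as callbacks; all names are placeholders for illustration, not APIs of any real device, and `max_tries` is an assumed bound not present in the patent.

```python
def assist_zoom_shot(capture_tele, capture_wide, contains_target,
                     prompt_from_wide, prompt_from_pose, max_tries=10):
    """Steps 302-318 of Fig. 3 as a loop: keep prompting the user until
    the telephoto (first) image contains the target object."""
    for _ in range(max_tries):
        first = capture_tele()                 # step 304: first image
        if contains_target(first):             # step 306
            return first                       # target framed; done
        second = capture_wide()                # step 308: second image
        if contains_target(second):            # step 310
            prompt_from_wide(second)           # step 312: first prompt
        else:
            prompt_from_pose()                 # step 316: second prompt
    return None                                # gave up after max_tries

# Toy simulation: each prompt "moves" the glasses one step closer, so the
# target reappears in the wide image first and then in the tele image.
state = {"moves": 0}
result = assist_zoom_shot(
    capture_tele=lambda: "tele-ok" if state["moves"] >= 2 else "tele-miss",
    capture_wide=lambda: "wide-ok" if state["moves"] >= 1 else "wide-miss",
    contains_target=lambda img: img.endswith("-ok"),
    prompt_from_wide=lambda img: state.update(moves=state["moves"] + 1),
    prompt_from_pose=lambda: state.update(moves=state["moves"] + 1),
)
```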
Corresponding to the method embodiments, the present specification also provides an embodiment of an apparatus.
Fig. 8 is a schematic structural diagram of an electronic device for prompting the position of an object according to an exemplary embodiment of the present disclosure. Referring to fig. 8, at the hardware level the electronic device includes a processor 802, an internal bus 804, a network interface 806, a memory 808, and a non-volatile memory 810, and may of course also include hardware required for other services. The processor 802 reads the corresponding computer program from the non-volatile memory 810 into the memory 808 and runs it, forming, at the logical level, a device that solves the problem of a target object being difficult to find in telephoto shooting. Besides such a software implementation, the present disclosure does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logical units and may also be hardware or logic devices.
FIG. 9 is a block diagram illustrating an apparatus for prompting a location of an object in accordance with an exemplary embodiment of the present disclosure. Referring to fig. 9, the apparatus includes a determination unit 902, an acquisition unit 904, and a first generation unit 906, in which:
the determining unit 902 is configured to determine the target object.
The acquiring unit 904 is configured to acquire a second image acquired by the head mounted device and including the target object, in a case where the target object is not included in the first image acquired by the head mounted device; wherein a field angle of the second image is greater than a field angle of the first image.
The first generating unit 906 is configured to generate a first prompting message according to the second image, where the first prompting message is used to instruct the head-mounted device to move, so that the first image re-acquired by the head-mounted device after moving contains the target object.
Optionally, the determining the target object includes: determining a target object selected by a user according to the detected user trigger operation; or determining a target object matched with the object characteristics according to preset object characteristics in the head-mounted equipment; wherein the target object comprises a dynamic object with a movable position and/or a static object with a fixed position.
Optionally, the acquiring a second image containing the target object acquired by the head-mounted device includes: acquiring a second image shot at a historical moment, at a relatively small focal length, by a zoom camera carried by the head-mounted device, the first image being an image shot by the zoom camera at a relatively large focal length; or acquiring the first image shot by a first camera carried by the head-mounted device, the second image being shot by a second camera carried by the head-mounted device, wherein the focal length of the first camera is larger than that of the second camera.
Optionally, the generating a first prompt message according to the second image includes: determining the relative position relation of the target object and the shooting content of the first image in the second image; and generating a first prompt message containing a first moving direction, wherein the first moving direction is the direction determined according to the position relation and pointing to the target object by the shooting content of the first image.
Optionally, the apparatus further comprises:
the display unit 908 is configured to display the second image while displaying the first image, and mark a region corresponding to the first image in the second image.
Optionally, the apparatus further comprises:
the second generating unit 910 is configured to generate a second prompting message in a case that the second image does not contain the target object, where the second prompting message is used to instruct to move the head-mounted device, so that the second image re-acquired by the head-mounted device after moving contains the target object.
Optionally, the generating the second prompt message includes: acquiring movement information recorded by an attitude sensor carried by the head-mounted device, wherein the movement information is used for recording a corresponding historical movement direction of the head-mounted device when the target object moves out of the second image; and generating a second prompt message containing a second moving direction, wherein the second moving direction is the reverse direction of the historical moving direction.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, e.g., a memory, comprising instructions executable by a processor of an object-position prompting device to perform the method of any one of the above embodiments. For example, the method may include:
determining a target object; acquiring a second image which is acquired by the head-mounted device and contains the target object under the condition that the first image acquired by the head-mounted device does not contain the target object; wherein a field angle of the second image is greater than a field angle of the first image; and generating a first prompt message according to the second image, wherein the first prompt message is used for indicating that the head-mounted device is moved, so that the first image acquired by the head-mounted device again after moving contains the target object.
The non-transitory computer readable storage medium may be, among others, ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like, and the present disclosure is not limited thereto.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method for prompting a position of a subject, applied to a head-mounted device, the method comprising:
determining a target object;
acquiring a second image which is acquired by the head-mounted device and contains the target object under the condition that the first image acquired by the head-mounted device does not contain the target object; wherein a field angle of the second image is greater than a field angle of the first image;
and generating a first prompt message according to the second image, wherein the first prompt message is used for indicating that the head-mounted device is moved, so that the first image acquired by the head-mounted device again after moving contains the target object.
2. The method of claim 1, wherein the determining the target object comprises:
determining a target object selected by a user according to the detected user trigger operation; or determining a target object matched with the object characteristics according to preset object characteristics in the head-mounted equipment;
wherein the target object comprises a dynamic object with a movable position and/or a static object with a fixed position.
3. The method of claim 1, wherein obtaining a second image acquired by the head-mounted device containing the target object comprises:
acquiring a second image which is shot by a zoom camera carried by the head-mounted device at a relatively small focal length at a historical moment; the first image is an image shot by the zoom camera with a relatively large focal length; or,
acquiring the first image shot by a first camera carried by the head-mounted device; wherein the second image is shot by a second camera carried by the head-mounted device, and the focal length of the first camera is larger than that of the second camera.
4. The method of claim 1, wherein generating a first prompting message based on the second image comprises:
determining the relative position relation of the target object and the shooting content of the first image in the second image;
and generating a first prompt message containing a first moving direction, wherein the first moving direction is the direction determined according to the position relation and pointing to the target object by the shooting content of the first image.
5. The method of claim 1, further comprising:
and displaying the second image while displaying the first image, and marking an area corresponding to the first image in the second image.
6. The method of claim 1, further comprising:
and generating a second prompt message when the second image does not contain the target object, wherein the second prompt message is used for indicating that the head-mounted device is moved, so that the second image acquired by the head-mounted device again after moving contains the target object.
7. The method of claim 6, wherein generating the second prompting message comprises:
acquiring movement information recorded by an attitude sensor carried by the head-mounted device, wherein the movement information is used for recording a corresponding historical movement direction of the head-mounted device when the target object moves out of the second image;
and generating a second prompt message containing a second moving direction, wherein the second moving direction is the reverse direction of the historical moving direction.
8. An apparatus for prompting a position of a subject, applied to a head-mounted device, the apparatus comprising:
a determination unit configured to determine a target object;
an acquisition unit, configured to acquire, when the target object is not included in a first image acquired by the head-mounted device, a second image acquired by the head-mounted device and including the target object; wherein a field angle of the second image is greater than a field angle of the first image;
and a generating unit, configured to generate a first prompt message according to the second image, where the first prompt message is used to instruct to move the head-mounted device, so that a first image acquired by the head-mounted device again after moving includes the target object.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-7 by executing the executable instructions.
10. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method according to any one of claims 1-7.
CN202110184259.6A 2021-02-08 2021-02-08 Method and device for prompting object position Pending CN112995507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110184259.6A CN112995507A (en) 2021-02-08 2021-02-08 Method and device for prompting object position


Publications (1)

Publication Number Publication Date
CN112995507A true CN112995507A (en) 2021-06-18

Family

ID=76393956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110184259.6A Pending CN112995507A (en) 2021-02-08 2021-02-08 Method and device for prompting object position

Country Status (1)

Country Link
CN (1) CN112995507A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113766140A (en) * 2021-09-30 2021-12-07 北京蜂巢世纪科技有限公司 Image shooting method and image shooting device
CN114500981A (en) * 2022-02-12 2022-05-13 北京蜂巢世纪科技有限公司 Method, device, equipment and medium for tracking venue target
WO2023005338A1 (en) * 2021-07-26 2023-02-02 北京有竹居网络技术有限公司 Photographing method and apparatus, and electronic device, storage medium and computer program product
WO2024125379A1 (en) * 2022-12-14 2024-06-20 华为技术有限公司 Image processing method, head-mounted display device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101313565A (en) * 2005-11-25 2008-11-26 株式会社尼康 Electronic camera and image processing device
WO2010073616A1 (en) * 2008-12-25 2010-07-01 パナソニック株式会社 Information displaying apparatus and information displaying method
CN102685382A (en) * 2011-03-18 2012-09-19 安尼株式会社 Image processing device and method and moving object anti-collision device
CN109769066A (en) * 2019-01-14 2019-05-17 广东小天才科技有限公司 Screen display method and system
CN110445978A (en) * 2019-06-24 2019-11-12 华为技术有限公司 A kind of image pickup method and equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618