CN111479055B - Shooting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111479055B
CN111479055B (application CN202010280390.8A)
Authority
CN
China
Prior art keywords
image
shooting range
camera
terminal
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010280390.8A
Other languages
Chinese (zh)
Other versions
CN111479055A (en)
Inventor
李逸超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010280390.8A
Publication of CN111479055A
Application granted
Publication of CN111479055B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/67: Focus control based on electronic image sensor signals
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, an electronic device, and a storage medium, relating to the field of electronic technologies. The method is applied to a terminal provided with at least one camera and comprises the following steps: when the shooting range of the terminal is reduced to a first shooting range, acquiring a first image collected by the camera in the first shooting range; determining a designated coordinate system based on a second image collected by the terminal in a second shooting range, wherein the second shooting range is larger than the first shooting range; determining the relative position, in the designated coordinate system, of a target object and a reference object, wherein the reference object is an object in the first image; determining prompt information according to the relative position; and outputting the prompt information, wherein the prompt information indicates a moving direction for the at least one camera such that the first shooting range covers the target object. In this way, the target object can be retrieved even after the shooting range has been reduced.

Description

Shooting method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a shooting method and apparatus, an electronic device, and a storage medium.
Background
With the development of terminal technology, the shooting function of terminals is continuously optimized. For example, before shooting an image, a user who wants a close-up of a target object can increase the magnification of the terminal, thereby reducing the shooting range and enlarging the proportion of the target object in the current preview picture. However, as the magnification increases, the target object easily drifts out of the current preview picture, so that the user can no longer see it through the terminal and finds it difficult to bring the target object back into the preview picture at the current, larger magnification.
Disclosure of Invention
Embodiments of the present application provide a shooting method and apparatus, an electronic device, and a storage medium that can retrieve a target object after it disappears from the preview picture, saving the user the time otherwise spent searching for the target object and improving the user experience.
In a first aspect, an embodiment of the present application provides a shooting method, applied to a terminal provided with at least one camera. The method includes: when the shooting range of the terminal is reduced to a first shooting range, acquiring a first image collected by the camera in the first shooting range; determining a designated coordinate system based on a second image collected by the terminal in a second shooting range, wherein the second shooting range is larger than the first shooting range; determining the relative position of a target object and a reference object in the designated coordinate system, wherein the reference object is an object in the first image; determining prompt information according to the relative position; and outputting the prompt information, wherein the prompt information indicates a moving direction for the at least one camera such that the first shooting range covers the target object.
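As an illustrative reading of the last two steps of the claim, the mapping from a relative position to a movement prompt can be sketched as follows. This is not the patent's implementation; `RelativePosition` and `movement_prompt` are hypothetical names, and the axis conventions (x increasing rightward, y increasing downward, positions expressed as target minus reference) are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class RelativePosition:
    """Offset of the target object relative to the reference object
    in the designated coordinate system (hypothetical representation)."""
    dx: float  # positive means the target lies to the right of the reference
    dy: float  # positive means the target lies below the reference

def movement_prompt(rel: RelativePosition) -> str:
    """Map a relative position to a coarse movement direction for the prompt."""
    horiz = "right" if rel.dx > 0 else "left" if rel.dx < 0 else ""
    vert = "down" if rel.dy > 0 else "up" if rel.dy < 0 else ""
    direction = " and ".join(d for d in (horiz, vert) if d)
    return f"move camera {direction}" if direction else "target centered"
```

For example, a target sitting left of and above the reference object would yield the prompt "move camera left and up".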
In a second aspect, an embodiment of the present application provides a shooting device, applied to a terminal provided with at least one camera. The device includes: an image acquisition module, configured to acquire a first image collected by the camera in a first shooting range when the shooting range of the terminal is reduced to the first shooting range; a coordinate system determination module, configured to determine a designated coordinate system based on a second image collected by the terminal in a second shooting range, wherein the second shooting range is larger than the first shooting range; a position determination module, configured to determine the relative position of a target object and a reference object in the designated coordinate system, wherein the reference object is an object in the first image; a prompt determination module, configured to determine prompt information according to the relative position; and a prompt output module, configured to output the prompt information, wherein the prompt information indicates a moving direction for the at least one camera such that the first shooting range covers the target object.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory; one or more processors coupled with the memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to execute the photographing method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the shooting method provided in the first aspect.
According to the shooting method and device, the electronic device, and the storage medium provided herein, when the shooting range of the terminal is reduced to the first shooting range, a first image collected by a camera in the first shooting range is obtained; a designated coordinate system is determined based on a second image collected by the terminal in a second shooting range larger than the first shooting range; the relative position of the target object and a reference object (an object in the first image) is then determined in the designated coordinate system; prompt information is determined from that relative position; and finally the prompt information is output, instructing the at least one camera to move so that the first shooting range covers the target object. Thus, when the shooting range is reduced, i.e., when the magnification is increased, the designated coordinate system can be determined from the second image with the larger shooting range, the relative position of the target object and a reference object in the currently collected first image can be located in that coordinate system, and based on this relative position the camera is instructed to move until the first shooting range covers the target object. The target object can then be shot again, which saves the user's search time and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below depict only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 shows a diagram of a camera capturing an image to be displayed on a display.
Fig. 2 shows a schematic view of a zooming-in process.
Fig. 3 is a schematic structural diagram of the terminal 100 according to an exemplary embodiment of the present application.
Fig. 4 shows a distribution diagram of a plurality of cameras provided in an exemplary embodiment of the present application.
Fig. 5 is a schematic view illustrating a field of view of a camera according to an exemplary embodiment of the present application.
Fig. 6 shows a view field schematic diagram of another camera provided by an exemplary embodiment of the present application.
Fig. 7 shows a flowchart of a shooting method provided in an embodiment of the present application.
Fig. 8 shows a schematic diagram of a shooting scene provided in an exemplary embodiment of the present application.
Fig. 9 shows a schematic diagram of a shooting scene provided in another exemplary embodiment of the present application.
Fig. 10 is a flowchart illustrating a shooting method according to another embodiment of the present application.
Fig. 11 is a flowchart illustrating a shooting method according to still another embodiment of the present application.
Fig. 12 illustrates a flowchart of step S305 in fig. 11 according to an exemplary embodiment of the present application.
Fig. 13 shows a block diagram of a shooting device provided in an embodiment of the present application.
Fig. 14 illustrates a storage unit provided in an embodiment of the present application for storing or carrying program codes for implementing a shooting method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Definition of terms
Visual field range: the maximum range that an optical instrument (such as a camera) can shoot. In an optical instrument, an angle formed by two edges Of a lens, which is a vertex Of the lens Of the optical instrument and has the maximum range in which an object image Of a target can pass through the lens, is called a Field Of View (FOV). The size of the field of view determines the field of view of the optical instrument, with a larger field of view being the larger the field of view.
A long-focus camera: the long-focus camera has a small visual field range, and the size of an object on an image shot by the long-focus camera is large, so that the long-focus camera can be suitable for shooting a distant object, a close-up scene, object details or shooting a certain small object specially. The focal length of a tele camera is generally above 80 mm.
Wide-angle camera: the wide-angle camera has a large visual field range, and the wide-angle camera can shoot objects and pictures in a large range, such as large scenes and scene subject matters, in a wide, far and deep image. In addition, the wide-angle camera may further include an ultra wide-angle camera. The focal length of a typical wide-angle camera is generally between 24mm-35mm, while the focal length of an ultra-wide-angle camera is generally below 24 mm.
Digital Zoom: software is used to enlarge the area occupied by each pixel within the image captured by the optical instrument. Digital zoom does not actually change the focal length of the optical instrument; the digital zoom magnification merely enlarges each pixel area in the captured image, which results in a loss of image quality. It should be noted that "enlargement" in the embodiments of the present application means that objects become larger on the image.
Optical Zoom: optical zooming changes the focal length of a lens by changing the relative positions of the individual lens elements in a zoom lens.
Shooting range: the range taken out of the image corresponding to the visual field range and displayed by the optical instrument when it captures images based on that visual field range. The maximum shooting range a camera can capture is its visual field range; that is, the shooting range is generally smaller than or equal to the visual field range, and when an image is magnified, the shooting range shrinks. The visual field range of a camera is generally determined at the factory and fixed, whereas the shooting range can change during use.
To illustrate the shooting range and the visual field range, please refer to fig. 1, which shows a schematic diagram of a camera capturing an image to be displayed on a display. As shown in fig. 1, the visual field range is the maximum shooting range the camera can capture, and in fact the camera can capture any content within the visual field range for display. In actual use, by adjusting the shooting range, an image of a partial area can be taken out of the content capturable within the visual field range and displayed as a preview image. If a shooting instruction is triggered at this time, the currently displayed preview image can be stored. The implementation of digital zoom is similar to the process shown in fig. 1 and is not repeated here.
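The "take a partial area out of the visual field range" step above is, in effect, a centered crop whose size shrinks as the magnification grows. A minimal sketch of that relationship, under the assumption that the crop stays centered (function and parameter names are illustrative):

```python
def crop_bounds(width: float, height: float, magnification: float):
    """Centered crop rectangle (x0, y0, crop_w, crop_h) realizing a
    digital-zoom magnification over a full frame of width x height."""
    if magnification < 1:
        raise ValueError("magnification must be >= 1")
    cw, ch = width / magnification, height / magnification  # crop shrinks as zoom grows
    x0, y0 = (width - cw) / 2, (height - ch) / 2            # keep the crop centered
    return x0, y0, cw, ch
```

For a 4000 x 3000 frame at 2x magnification, the displayed shooting range is the central 2000 x 1500 region, which would then be upscaled for display, with the quality loss noted above.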
With the development of terminal technology, the shooting function of terminals is continuously optimized, and the supported magnification keeps increasing. Digital zoom is one means by which current terminals achieve magnification, and a terminal with a single camera generally achieves magnification through digital zoom.
In addition, to optimize the shooting function, more and more cameras are integrated in some terminals, which can then also achieve magnification through optical zoom; this requires integrating multiple cameras with different focal segments in the terminal. Generally, the longer the focal length, the smaller the visual field range and the larger the magnification. For example, a terminal may integrate a telephoto camera whose focal length is longer than that of its wide-angle camera but whose visual field range is smaller. When magnification is achieved by optical zoom, the terminal generally switches cameras: for example, if shooting is initially based on the wide-angle camera and the adjusted magnification exceeds a certain value, the terminal can switch from the wide-angle camera to the telephoto camera, i.e., the preview picture displayed on the terminal switches from the picture shot by the wide-angle camera to the picture shot by the telephoto camera, thereby achieving optical-zoom magnification.
However, as the magnification increases and the shooting picture is enlarged, the visual field range shrinks and the picture contains less content, so the originally framed target object easily drifts out of the current preview picture, i.e., it may no longer appear in the current preview picture. For example, referring to fig. 2, which is a schematic view of a zooming-in process: (a) in fig. 2 is the preview image captured by the terminal in the maximum shooting range before magnification, (b) is the preview image captured in a smaller shooting range at a larger magnification, and (c) is the preview image captured in the minimum shooting range at the maximum magnification. At this point, to bring the target object back into the preview image for shooting, the user can only blindly and tentatively sweep the camera around, which costs much time and effort and may still fail to retrieve the target object.
It should be noted that, before shooting is triggered, acquiring the preview image may merely mean displaying at least part of the picture captured by the camera on the display screen; the image is not actually collected and stored in the terminal.
To address the above problems, embodiments of the present application provide a shooting method and apparatus, an electronic device, and a computer-readable storage medium. When the shooting range is reduced, a designated coordinate system is determined based on a second image with a larger shooting range; the relative position, in that coordinate system, of the target object to be shot and a reference object in the currently collected first image is determined; and based on that relative position, the camera is instructed to move to a position where the first shooting range can cover the target object, so that the target object can be shot again. This saves the user the time spent retrieving the target object and improves the user experience.
The shooting method provided by the embodiments of the present application can be applied to a terminal. For example, the terminal may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an Ultra-Mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or a dedicated camera (e.g., a single-lens reflex camera or a compact camera), among others. The embodiments of the present application do not limit the specific type of the terminal.
Illustratively, fig. 3 shows a schematic structural diagram of the terminal 100 provided in an exemplary embodiment of the present application, and the terminal 100 may include a processor 110, a memory 120, a camera 130, and a display screen 140.
It is to be understood that the illustrated structure in this embodiment does not constitute a specific limitation on the terminal 100. In other embodiments of the present application, the terminal 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. For example, in one example, the terminal 100 may lack the display screen 140. In another example, the terminal 100 may further include an antenna, a wireless communication module, and the like to implement communication. In yet another example, the terminal 100 may further include a motion sensor, such as an acceleration sensor or a geomagnetic sensor, for acquiring motion information of the terminal 100.
Processor 110 may include one or more processing cores, among other things. The processor 110 connects various parts within the overall terminal 100 using various interfaces and lines, and performs various functions of the terminal 100 and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110, but may be implemented by a communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described below, and the like. The data storage area may store data created by the terminal 100 in use, such as a phonebook, audio and video data, and chat log data.
The terminal 100 may be integrated with at least one camera 130 for capturing still images or videos. The at least one camera 130 may include at least one rear camera and may also include at least one front camera. In addition, in some embodiments, the at least one camera 130 can move relative to the body of the terminal 100. For example, the terminal 100 may further include a body and a rotating component, with the at least one camera 130 mounted on the rotating component; the rotating component is rotatably connected to the body, so that the at least one camera can rotate relative to the body of the terminal 100 when driven by the rotating component. The terminal 100 may further be provided with a driving component, such as a motor, to drive the rotating component, and the rotation of the rotating component may be two-dimensional or three-dimensional.
The at least one camera 130 may include cameras of different types and different focal segments, where a focal segment refers to a range of focal lengths. The focal segments may include, but are not limited to: a first focal segment (also called the short focal segment), with focal lengths smaller than a first preset value (for example, 35 mm); a second focal segment (also called the middle focal segment), with focal lengths greater than or equal to the first preset value and smaller than or equal to a second preset value (for example, 80 mm); and a third focal segment (also called the long focal segment), with focal lengths greater than the second preset value. A camera of the first focal segment has a large visual field range and can be a wide-angle camera. A camera of the third focal segment can capture only a small framing range and can be a telephoto camera. A camera of the second focal segment captures a framing range of intermediate size and can be a mid-focus camera.
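The classification above can be sketched directly, using the two preset values named in the text (35 mm and 80 mm) as default thresholds; the function name and return labels are illustrative.

```python
def focal_segment(focal_length_mm: float,
                  short_max: float = 35.0, mid_max: float = 80.0) -> str:
    """Classify a camera into a focal segment by its focal length,
    using the first and second preset values from the text as thresholds."""
    if focal_length_mm < short_max:
        return "short (wide-angle)"   # first focal segment: f < 35 mm
    if focal_length_mm <= mid_max:
        return "middle (mid-focus)"   # second focal segment: 35 mm <= f <= 80 mm
    return "long (telephoto)"         # third focal segment: f > 80 mm
```

For instance, a 24 mm lens falls in the short segment, 50 mm in the middle segment, and 100 mm in the long segment.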
Illustratively, the terminal 100 may include a wide camera, a middle camera and a telephoto camera, the distribution of the 3 cameras may be shown in fig. 4, and as shown in fig. 4, the wide camera C1, the middle camera C2 and the telephoto camera C3 may be disposed on the terminal 100. The drawing is only an example of distribution, and the embodiment of the present application does not limit the specific distribution position.
For example, the visual field ranges of the 3 cameras shown in fig. 4 can be seen in the schematic diagram of fig. 5, where the visual field ranges A1, A2, and A3 correspond to the wide-angle camera C1, the mid-focus camera C2, and the telephoto camera C3, respectively. As shown in fig. 5, the visual field range A1 may cover the objects 1, 2, and 3, the visual field range A2 may cover the objects 1 and 2, and the visual field range A3 may cover only the object 1.
In addition, when the motion posture of the terminal 100 changes, the image content covered by the visual field range of the at least one camera on the terminal 100 changes as well. Illustratively, referring to fig. 5 and fig. 6 together, after the terminal 100 changes its motion posture as shown in fig. 6, the visual field range A1 of the wide-angle camera C1 can no longer cover the object 3, and the visual field range A3 of the telephoto camera C3 covers none of the objects 1, 2, and 3; that is, compared with fig. 5, the image content the cameras can capture in fig. 6 has changed.
In some embodiments, when the terminal 100 is provided with only one camera, magnification is typically achieved by digital zoom; when the terminal integrates multiple cameras, magnification can be achieved by digital zoom or by optical zoom, which is not limited here.
In some embodiments, the terminal may also be provided with a display screen 140. In a shooting scene, the display screen 140 may display images or videos captured by the camera for a user to preview, post-shooting view, and the like.
Alternatively, the display screen 140 may include at least a first display area and a second display area: the first display area may display the preview image in the current shooting range, and the second display area may display a reference image. In some embodiments, the reference image may be an image whose shooting range is larger than that of the preview image currently displayed in the first display area. As one approach, the extent of the current preview image can be marked on the reference image, so that the user can learn the current shooting position by consulting the reference image, which makes it convenient to adjust the camera angle.
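Marking the current preview on the reference image amounts to computing which rectangle of the larger-range image the preview covers. A minimal sketch, under the simplifying assumptions that both pictures are centered on the same optical axis and aligned, and that shooting range scales inversely with magnification (names and the representation are illustrative):

```python
def preview_rect_in_reference(ref_mag: float, cur_mag: float,
                              ref_w: int, ref_h: int):
    """Rectangle (x, y, w, h), in reference-image pixels, that the current,
    more-magnified preview covers; assumes centered, aligned frames."""
    scale = ref_mag / cur_mag          # < 1 when the preview is zoomed in further
    w, h = ref_w * scale, ref_h * scale
    return ((ref_w - w) / 2, (ref_h - h) / 2, w, h)
```

For a 1000 x 800 reference image at 1x and a preview at 2x, the marked rectangle is the central 500 x 400 region; drawing this rectangle in the second display area shows the user where the first shooting range currently sits.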
In some embodiments, the display screen 140 may also display prompt information to guide the user to move the camera, so that the terminal can quickly retrieve the target object and capture an image covering it.
The shooting method, the shooting device, the electronic device and the storage medium provided by the embodiments of the present application will be described in detail through specific embodiments.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a shooting method provided in an embodiment of the present application, where the shooting method is applicable to the terminal, and the terminal is provided with at least one camera. The flow shown in fig. 7 will be described in detail below. The photographing method may include the steps of:
step S110: when the shooting range of the terminal is reduced to a first shooting range, a first image collected by a camera in the first shooting range is acquired.
The shooting range of the terminal is reduced to the first shooting range when the magnification is increased. In one example, the user can raise the magnification of the terminal to reduce the shooting range, so that the content to be shot occupies a larger proportion of the preview image, achieving the magnification effect.
In some embodiments, when the at least one camera is started, the terminal may detect whether the shooting range has been reduced, and when the shooting range of the terminal is reduced to a first shooting range, acquire a first image collected by the camera in the first shooting range. When the terminal is provided with a display screen and displays images collected by the camera, the first image is the currently displayed preview image, i.e., the image the user sees after adjusting the shooting range.
In some embodiments, the terminal may receive a shooting range adjustment instruction to adjust its shooting range. In one embodiment, the terminal is provided with a display screen comprising a display panel and a touch panel; the display panel displays the interface, including images collected by the camera, while the touch panel detects the user's trigger operations and reports them to the terminal's processor for response. Specifically, when shooting an image with the terminal, the user can adjust the shooting range through touch operations, key presses, mid-air gestures, and the like. Taking touch operation as an example, the shooting range or magnification can be adjusted by moving two fingers relative to each other on the display screen: moving the fingers apart reduces the shooting range and increases the magnification, while bringing them together enlarges the shooting range and decreases the magnification.
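The pinch gesture described above is commonly mapped to magnification by scaling the current zoom factor with the ratio of the finger distances; the sketch below illustrates that mapping, with the clamping limits and all names being assumptions rather than values from the patent.

```python
def pinch_to_magnification(start_dist: float, cur_dist: float,
                           cur_mag: float, min_mag: float = 1.0,
                           max_mag: float = 10.0) -> float:
    """Scale the current magnification by the ratio of finger distances:
    fingers moving apart (ratio > 1) zoom in, moving together zoom out.
    The result is clamped to the supported magnification range."""
    ratio = cur_dist / start_dist
    return max(min_mag, min(max_mag, cur_mag * ratio))
```

Doubling the finger distance at 2x yields 4x (a smaller shooting range); halving it returns toward 1x (a larger shooting range).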
In other embodiments, the shooting range may be adjusted by various methods such as voice input and image recognition, which is not limited in this embodiment. For example, "zoom in" may be input by voice to narrow the shooting range and increase the magnification. As another example, the eye image of the user may be obtained, the gazing duration of the user detected, and the shooting range adjusted according to the gazing duration: the longer the gazing duration, the smaller the shooting range and the higher the magnification. If the user is detected closing their eyes, the current gazing duration can be taken as the duration from the detection start moment to the eye-closing moment.
Step S120: and determining a designated coordinate system based on a second image acquired by the terminal in the second shooting range.
The second shooting range is larger than the first shooting range, and at least part of the second shooting range is overlapped with the first shooting range. It should be noted that, if the second shooting range at least partially overlaps the first shooting range, the second image captured in the second shooting range also at least partially overlaps the first image captured in the first shooting range.
The second shooting range may or may not be a shooting range that the terminal passed through while narrowing to the first shooting range. For example, the terminal may be provided with a wide-angle camera, a mid-focus camera and a telephoto camera, and shoot with the mid-focus camera by default when a camera is started. When the shooting range is reduced to the first shooting range and the terminal switches to the telephoto camera, the second image collected in the second shooting range may be an image collected by the mid-focus camera, by the wide-angle camera, or even by the telephoto camera, as long as the second shooting range is larger than the first shooting range and at least partially coincides with it.
It should be noted that the first shooting range and the second shooting range may correspond to different magnification factors at different times, and actual ranges may also be different.
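The constraints on the second shooting range (larger than the first, at least partially overlapping it) can be sketched with axis-aligned rectangles in a shared coordinate frame; the rectangle representation is an assumption made for illustration only.

```python
def ranges_overlap(r1, r2):
    """Axis-aligned overlap test; each range is (x, y, w, h) in a shared frame."""
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

def is_valid_second_range(second, first):
    """A candidate second shooting range qualifies if it is larger than the
    first shooting range and at least partially coincides with it."""
    _, _, w1, h1 = first
    _, _, w2, h2 = second
    return w2 * h2 > w1 * h1 and ranges_overlap(second, first)
```

In practice the ranges would be projected into a common frame from each camera's field of view; this sketch only captures the two conditions the embodiment states.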
In some embodiments, before step S120, a target object may be determined; the target object is the object that the user needs to photograph. The target object may be determined manually by the user or automatically by the terminal, among other ways.
In some embodiments, the target object may be determined according to a setting instruction for the target object input by the user. The setting instruction can be triggered by a contact operation such as a touch operation or key operation, or by a non-contact operation such as voice input or air gesture input.
In other embodiments, the target object may also be automatically determined by recognizing an image captured by the camera, including, but not limited to, recognition according to the shooting mode, user behavior, eye focus, picture center position, picture principal component analysis (PCA), and the like.
For example, the current shooting mode may be acquired, and the target object determined according to it: if the current shooting mode is a portrait shooting mode, a human face in the image captured by the camera may be identified and determined as the target object.
For another example, an eye image of the user may be acquired, an eye attention point therein may be identified, the eye attention point and an image collected by the camera may be mapped, and an image content corresponding to the eye attention point in the image collected by the camera may be acquired to determine the target object.
For another example, the center position in the image acquired by the camera may be determined, and the target object may be determined according to the image content corresponding to the center position.
For another example, principal component analysis may be performed on the image collected by the camera to obtain the main image content of the current image, and the target object may then be determined according to that main image content.
In some embodiments, before step S120, it may further be detected whether the target object exists in the first image acquired in the first shooting range, and step S120 is performed only if it does not. In this way, the relative position is determined for prompting only when the target object cannot be shot, which saves power. Of course, step S120 and the subsequent steps may also be performed without detecting whether the target object exists in the first image, so that the current shooting range can better cover the target object, a better shooting effect is obtained, and user experience is further improved.
Step S130: the relative positions of the target object and the reference object in the specified coordinate system are determined.
The reference object is an object in the first image, and may be one or more objects in the first image, which is not limited herein. In some examples, the reference object may be all objects within the first image, i.e. all image content of the first image as reference objects.
Since the second photographing range at least partially coincides with the first photographing range, the position of the reference object in the designated coordinate system may be determined based on the coinciding portion of the second photographing range with the first photographing range. For convenience of description, the position of the reference object in the specified coordinate system is taken as the reference position.
In some embodiments, the target object may be covered by the second photographing range, that is, if the target object exists in the second image acquired based on the second photographing range, the position of the target object in the designated coordinate system may be determined based on the second image. For convenience of description, the position of the target object in the specified coordinate system is recorded as the target position.
In some embodiments, if the target object exists in the second image, the image coordinate system of the second image may be used as the designated coordinate system to determine the relative positions of the target object and the reference object in the second image.
As an embodiment, if the reference object is located at the overlapping portion of the first shooting range and the second shooting range, that is, the reference object exists in both the first image and the second image, the position of the reference object in the second image may be obtained, that is, the reference position is obtained; acquiring the position of the target object on the second image as a target position; then, according to the target position and the reference position, the relative positions of the target object and the reference object in the specified coordinate system can be determined.
As another embodiment, if the reference object is not located at the overlapping portion of the first shooting range and the second shooting range, that is, the reference object exists only in the first image but not in the second image, the image region shared by the first image captured in the first shooting range and the second image captured in the second shooting range may be acquired as an overlapped image. A first position of the reference object in the first image relative to the overlapped image is calculated, a second position of the target object in the second image relative to the overlapped image is calculated, and the two positions are then combined to determine the relative position of the target object and the reference object on the second image, which serves as their relative position in the designated coordinate system.
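The two cases above can be sketched as plain coordinate arithmetic, assuming all positions are pixel coordinates already expressed at the same scale (the embodiment does not specify how scale differences between the two images are resolved):

```python
def relative_position_same_image(target_pos, reference_pos):
    """Both objects appear in the second image: subtract their positions
    in the second image's coordinate system (the designated system)."""
    return (target_pos[0] - reference_pos[0], target_pos[1] - reference_pos[1])

def relative_position_via_overlap(ref_in_first, overlap_in_first,
                                  target_in_second, overlap_in_second):
    """The reference object appears only in the first image: express both
    positions relative to the overlapped image, then combine them.

    overlap_in_first / overlap_in_second are the top-left corners of the
    overlapped region in each image.
    """
    # First position: reference object relative to the overlapped image.
    first_pos = (ref_in_first[0] - overlap_in_first[0],
                 ref_in_first[1] - overlap_in_first[1])
    # Second position: target object relative to the overlapped image.
    second_pos = (target_in_second[0] - overlap_in_second[0],
                  target_in_second[1] - overlap_in_second[1])
    # Combining the two gives the target's offset from the reference
    # in the designated (second-image) coordinate system.
    return (second_pos[0] - first_pos[0], second_pos[1] - first_pos[1])
```

The overlapped region itself could be located by image matching between the two frames; that step is outside this sketch.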
In other embodiments, if the target object cannot be covered by the second shooting range at this time, that is, the target object does not exist in the second image, the movement trajectory of the terminal is acquired and the movement is prompted until the target object can be covered by the second shooting range, that is, the target object exists in the second image acquired by the terminal through the at least one camera in the second shooting range, and then the relative position can be determined according to the foregoing method. The detailed description of the embodiments can be seen in the following examples, which are not repeated herein.
Step S140: and determining prompt information according to the relative position.
Since the shooting range has narrowed, the target object may still be within the current shooting range (the first shooting range) but poorly positioned, or it may not be in the current shooting range at all. The prompt information may be determined according to the relative position so that the current shooting range can cover, or better cover, the target object.
The prompt information may be used to indicate a moving direction of the at least one camera, the moving direction serving to bring the target object within the first shooting range, i.e., so that the target object can appear in the first image and be shot. In some examples, if the terminal can display the preview image collected by the camera in real time for the user to view, the target object can be seen by the user again in the preview image after disappearing from it, avoiding the time the user would otherwise spend searching for the target object blindly and improving user experience.
In some embodiments, the at least one camera disposed on the terminal may be fixed to the terminal and move only when the terminal moves. In that case, indicating the moving direction of the at least one camera amounts to indicating the moving direction of the terminal: by instructing the terminal to move in the moving direction, the at least one camera is moved so that the first shooting range covers the target object.
In other embodiments, the at least one camera may also be a movable camera capable of moving relative to the terminal; the prompt information may then be a control instruction used to instruct the camera to move automatically according to the relative position, so as to retrieve the target object automatically. Of course, the prompt information need not be a control instruction, which is not limited herein.
In some embodiments, the prompt message may be a control instruction for instructing the at least one camera to move in the moving direction, so that the at least one camera can automatically move according to the relative position to retrieve the target object and cover the first shooting range on the target object.
In other embodiments, the prompt information may not be a control instruction; it differs according to the prompting mode, which may include, but is not limited to, a text prompt, a voice prompt, a symbol prompt, a light prompt, a vibration prompt, and the like, detailed in step S150 described later.
Step S150: and outputting the prompt information.
By outputting the prompt information, the moving direction of the at least one camera may be indicated to cover the first photographing range over the target object, so that the target object may reappear in the preview image.
In some embodiments, the prompt information may be a control instruction, that is, a control instruction may be generated according to the relative position, and the control instruction may be output to the control object to control the control object to move according to the relative position. For example, if the control object is at least one camera, the terminal outputs the control instruction to the at least one camera, and the at least one camera can automatically move according to the relative position to cover the first shooting range on the target object, that is, the target object is automatically retrieved.
In another embodiment, the prompting mode may be text prompting, and then the moving direction may be determined according to the relative position, and the prompting text may be generated according to the moving direction. For example, if the target object is on the left side of the reference object, a prompt text "please move left" may be generated and displayed, thereby instructing the user to manipulate the camera to move left to retrieve the target object.
In still other embodiments, the prompting mode may be voice prompting; the moving direction may be determined according to the relative position, and a prompt voice generated according to the moving direction. For example, if the target object is on the left side of the reference object, a prompt voice "please move left" may be generated and output through a speaker or other playback device.
In still other embodiments, the prompting mode may be a symbol prompt: an image containing the target object and the reference object, such as the second image, may be displayed, and a symbol then drawn on it according to the relative position to visually indicate that position. The symbol may be an arrow pointing from the reference object to the target object, or any other symbol that can indicate a direction, which is not limited herein.
In still other embodiments, the prompting mode may also be a light prompt. The terminal may be provided with light-emitting devices, which may be LED lamps or others, not limited herein. The light-emitting devices may be disposed on two, three, or four sides of the terminal frame, and the terminal processor may turn on the light-emitting device corresponding to the relative position of the target object with respect to the reference object. For example, if the target object is on the left side of the reference object, the light-emitting device disposed on the left side of the terminal frame may be turned on.
In still other embodiments, the prompting mode may also be a vibration prompt. The terminal may be provided with a vibration module and preset with a mapping between vibration strategies and moving directions, where a vibration strategy may include a vibration duration, a number of consecutive vibrations, and the like. After the moving direction is determined according to the relative position, the corresponding vibration strategy is looked up and the vibration module is controlled to vibrate accordingly. For example, if the target object is on the left side of the reference object, the vibration strategy may be two consecutive vibrations, i.e., if the user feels two vibrations, the camera or the terminal should be moved to the left.
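The prompting variants above share one step, turning the relative position into a coarse moving direction, followed by a modality-specific output. A minimal sketch, with illustrative prompt texts and vibration counts (the mapping values are assumptions, not from the embodiment):

```python
def moving_direction(relative):
    """Map the target's (dx, dy) offset from the reference to a coarse
    direction, using image coordinates (y grows downward)."""
    dx, dy = relative
    if abs(dx) >= abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"

TEXT_PROMPTS = {"left": "please move left", "right": "please move right",
                "up": "please move up", "down": "please move down"}

# Number of consecutive vibrations per direction (illustrative values).
VIBRATION_STRATEGY = {"left": 2, "right": 3, "up": 1, "down": 4}

def make_prompt(relative, mode="text"):
    """Produce the prompt payload for the chosen prompting mode."""
    direction = moving_direction(relative)
    if mode == "text":
        return TEXT_PROMPTS[direction]
    if mode == "vibration":
        return VIBRATION_STRATEGY[direction]
    return direction
```

A finer-grained implementation could emit diagonal directions or proportional movement amounts; the coarse four-way split here matches the "please move left" style examples in the text.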
It is understood that the above is only an example, and the method provided by the present embodiment is not limited to the above manner, but is not exhaustive for reasons of space.
According to the shooting method provided by the embodiment of the application, when the shooting range of the terminal is reduced to the first shooting range, a first image collected by a camera in the first shooting range is obtained. A designated coordinate system is determined based on a second image collected by the terminal in a second shooting range larger than the first shooting range. The relative position in the designated coordinate system of the target object and a reference object, an object in the first image, is then determined; prompt information is determined according to the relative position; and finally the prompt information is output to indicate the movement of at least one camera so that the first shooting range covers the target object. Therefore, when the shooting range is reduced, that is, when the magnification is increased, the designated coordinate system can be determined based on the second image with the larger shooting range, the relative position of the target object to be shot and the reference object in the currently collected first image can be determined under that coordinate system, and the camera can be instructed to move based on the relative position so that the first shooting range covers the target object. The target object is thus shot again, the time for the user to find it is saved, and user experience is improved.
In some embodiments, the terminal is provided with a display screen, the display screen can be used for previewing images collected in the current shooting range, and the previewing images are images displayed on the display screen for a user to view. For example, if the current shooting range is the first shooting range, a first image captured in the first shooting range may be displayed, and if the current shooting range is the second shooting range, a second image captured in the second shooting range may be displayed.
In one example, from time t1 to time t2, in the process that the shooting range of the terminal is reduced from the second shooting range to the first shooting range, the display screen can display the image collected by the camera when the shooting range is reduced from the second shooting range to the first shooting range. Specifically, referring to fig. 8, fig. 8 shows a schematic diagram of a shooting scene provided by an exemplary embodiment of the present application, a display area 801 shown in (a) in fig. 8 shows an image captured by a terminal at a time t1, and a display area 801 shown in (b) in fig. 8 shows an image captured by a terminal at a time t2, so that a user can preview the image captured by a camera through a display screen in real time during shooting.
In other embodiments, the display screen may include different display areas, and a first image acquired by the terminal based on the first shooting range and a second image acquired based on the second shooting range may be respectively displayed in the different display areas, where the second image includes the target object and the first image is a current preview image, so that when the field of view changes in the shooting scene, a user may view the target object and a position of the current preview image (i.e., the first image) in the second image in a larger field of view in real time through the display screen, thereby facilitating the user to know a relative position between a reference object in the current shot image and the target object to be shot in real time, so that the user may simply find the target object through the second image to shoot the target object even under high magnification.
In some embodiments, the relative position between the target object and the reference object is visually prompted, and the reference object may be marked in the second image, so that the user can intuitively know where the current shooting position is by observing the second image, and the global situation is not lost due to the reduction of the visual field.
In other embodiments, not only the reference object but also the target object may be marked in the second image, so that the user can intuitively know the relative position between the reference object and the target object in the current preview image by observing the second image. In one example, the target object may be marked when the target object does not exist in the first image, so that the user may be assisted in finding the target object when the target object runs out of the first photographing range, and not marked when the target object does not run out of the first photographing range, to save power consumption and computational resources.
In an example, referring to fig. 9, fig. 9 shows a schematic view of a shooting scene according to another exemplary embodiment of the present application, where in fig. 9, a display area 901 is a first display area for displaying the first image in real time, and a display area 902 is a second display area for displaying the second image. The display area 901 shown in fig. 9 (a) shows the image captured by the camera at time t1, at which the field of view has not yet been reduced; the display area 901 shown in fig. 9 (b) shows the first image captured by the camera at time t2, the display area 902 shown in fig. 9 (b) shows the second image captured by the camera at time t2, and the box 9021 on the second image marks the position of the first image at that time within the second image; the display area 901 shown in fig. 9 (c) shows the first image captured by the camera at time t3, the display area 902 shown in fig. 9 (c) shows the second image captured by the camera at time t3, the box 9021 in the second image marks the position on the second image of the reference object within the first image at that time, and the box 9022 marks the position of the target object on the second image at that time. Thus, visual marking can help the user know which part of the global view the currently captured image content occupies; even when the target object cannot be found at all in the preview image at time t3 (that is, the first image at time t3), marking the target object and the reference object on the second image displays their relative position intuitively.
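A box like 9021, which marks where the first image sits inside the second image, can be computed from the two magnifications, assuming the cameras share an optical center (so the box is centered) and the box scales inversely with magnification; both assumptions are simplifications for this sketch.

```python
def first_image_box_in_second(first_mag, second_mag, second_size):
    """Return (x, y, w, h) of the first image's footprint in the second
    image, e.g. to draw a marker box such as 9021."""
    w2, h2 = second_size
    scale = second_mag / first_mag      # first range is the smaller one
    bw, bh = w2 * scale, h2 * scale
    # A shared optical axis centers the box inside the second image.
    x = (w2 - bw) / 2
    y = (h2 - bh) / 2
    return (x, y, bw, bh)
```

Real multi-camera modules have offset optical centers, so a production implementation would also apply a per-camera calibration offset instead of centering the box exactly.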
It is understood that the above are only examples of several scenarios, and the method provided by the present embodiment is not limited to the above scenarios, but is not exhaustive here for reasons of space.
Referring to fig. 10, fig. 10 is a schematic flow chart illustrating a shooting method according to another embodiment of the present application, where the shooting method includes:
step S210: and when the terminal uses the second camera, detecting whether a camera switching instruction is acquired.
In this embodiment, the at least one camera includes a first camera and a second camera, where the field of view of the first camera is the first shooting range and the field of view of the second camera is the second shooting range. Specifically, the field of view of the first camera is smaller than that of the second camera, so the ranges the two cameras can shoot differ; if the shooting range is reduced to a certain degree, a camera switching instruction is triggered so that the camera after switching shoots based on a smaller shooting range.
Therefore, when the terminal has cameras with different fields of view, if the field of view of the terminal is reduced so that the corresponding magnification increases to a specified value, the cameras can be switched to narrow the field of view to a greater extent. Specifically, the terminal may be provided with several cameras of different focal lengths, each corresponding to a different focal section and field of view, with different fields of view corresponding to different magnifications. When the user adjusts the magnification, if the magnifications before and after adjustment correspond to different focal sections, that is, the magnification reaches the specified value, a camera switching instruction can be triggered, so that the corresponding target camera is determined according to the adjusted magnification and the terminal is controlled to switch to it.
As an embodiment, the terminal may pre-store a mapping between cameras and magnification: the first camera may correspond to magnifications at or above a specified value, and the second camera to magnifications below the specified value, as shown in Table 1. When the magnification is adjusted while the terminal uses the second camera, a camera switching instruction is triggered if the magnification reaches the specified value, so that the terminal can detect and acquire it. The specified value may be determined according to the field of view, preset by a program, or customized by the user, and is not limited herein.
Table 1

Camera           Field of view            Magnification
First camera     First field of view      Specified value and above
Second camera    Second field of view     (0, specified value)
For example, the first camera may be a telephoto camera and the second camera a wide-angle camera. When the terminal uses the wide-angle camera, if the field of view is narrowed, that is, the magnification is increased, and the magnification exceeds the specified value, the camera switching instruction is triggered; upon detecting it, the terminal can switch the currently used camera from the wide-angle camera to the telephoto camera for shooting. Because a higher magnification is then in effect, the shooting range of the telephoto camera may easily fail to cover the target object; therefore, while the terminal uses the second camera, it can detect whether a camera switching instruction is acquired, in order to help the user find the target object.
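The camera selection implied by Table 1 can be sketched as a threshold lookup; the specified value and camera labels here are placeholders, since the embodiment leaves them to the program or the user.

```python
SPECIFIED_VALUE = 5.0  # assumed threshold; the embodiment does not fix it

def select_camera(magnification):
    """Return the camera whose focal section covers this magnification."""
    if magnification >= SPECIFIED_VALUE:
        return "first_camera"   # e.g. telephoto, first field of view
    return "second_camera"      # e.g. wide-angle, second field of view

def needs_switch(old_mag, new_mag):
    """A camera switching instruction is triggered when the adjusted
    magnification falls into a different camera's focal section."""
    return select_camera(old_mag) != select_camera(new_mag)
```

With more than two cameras, the single threshold would generalize to a sorted list of focal-section boundaries searched the same way.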
Step S220: if the camera switching instruction is acquired, in response to it, the terminal is controlled to collect a first image using the first camera, the shooting range of the terminal having been reduced to the first shooting range.
If a camera switching instruction is acquired, in response to it the currently used camera is switched to the first camera and the terminal is controlled to collect the first image using the first camera; at this point, the shooting range of the terminal is reduced to the first shooting range.
Step S230: and in response to the camera switching instruction, obtaining a second image collected by the second camera.
In response to the camera switching instruction, a second image collected by the second camera may be obtained, thereby obtaining an image whose shooting range is larger than that of the currently collected first image; as described above, this image at least partially coincides with the first image.
It should be noted that the shooting range corresponding to the second image collected by the second camera may be any shooting range within the second camera's field of view that is larger than the first shooting range, as long as it at least partially coincides with the first shooting range.
Step S240: and if the target object exists in the second image, taking the image coordinate system of the second image as the designated coordinate system.
Whether the target object exists in the second image is detected; if it does, the image coordinate system of the second image can be used as the designated coordinate system. In this way, when the current camera cannot shoot the target object, the terminal can check whether another camera can, and once at least one camera can shoot the target object, the relative position is determined based on the second image, containing the target object, collected by that camera. Thus, when high magnification is achieved through optical zoom, if the current camera loses the target object, it can be found again through another camera with a large field of view. For example, when switching from a wide-angle camera to a telephoto camera, if the wide-angle camera can capture the target object, the target object can be retrieved based on the image it collects.
Step S250: the relative positions of the target object and the reference object in the specified coordinate system are determined.
After the image coordinate system of the second image is taken as the designated coordinate system, the specific implementation of determining the relative positions of the target object and the reference object in the designated coordinate system can refer to the description of step S130, and is not described herein again.
Step S260: and determining prompt information according to the relative position.
Step S270: and outputting prompt information.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
In some embodiments, the shooting method provided by the present embodiment can retrieve the target object in a scene where the shooting range is reduced through optical zoom. If the shooting range is further narrowed through digital zoom after the optical zoom, the shooting method provided by this embodiment can continue to lock the target object and help the user find it; this is not described in detail herein. For example, when shooting switches from the wide-angle camera to the telephoto camera, the target object can be retrieved by the shooting method provided by this embodiment.
the shooting method provided by the embodiment is applicable to a terminal provided with a plurality of cameras, can realize scenes of optical zooming, can acquire a first image based on the first camera by detecting whether a camera switching instruction is acquired or not when the terminal is zoomed in by the optical zooming, and switches from the second camera to the first camera when the camera switching instruction is acquired, acquires a second image acquired under a second shooting range larger than the current shooting range when the camera switching instruction is responded, determines the relative positions of the target object and the reference object under a specified coordinate system based on the second image, and prompts to find the target object, so that the target object can be found back by other cameras with large visual field ranges under high magnification if the current camera loses the target object, and a user can conveniently shoot the target object, the time for the user to retrieve the target object for shooting is shortened, and the user experience is improved.
In addition, in some embodiments, when the user reduces the shooting range to enlarge the image, the angle of the camera may change, and the target object to be shot may then run out of the field of view of all cameras, that is, no camera's field of view can cover it. In this case, the movement track of the terminal may be determined to instruct the terminal to move so that the field of view of at least one camera can again cover the target object, after which the target object is retrieved for shooting. Specifically, referring to fig. 11, fig. 11 is a schematic flowchart illustrating a shooting method according to another embodiment of the present application, where the shooting method includes:
step S301: and when the terminal uses the second camera, detecting whether a camera switching instruction is acquired.
Step S302: if the camera switching instruction is acquired, in response to it, the terminal is controlled to collect a first image using the first camera, the shooting range of the terminal having been reduced to the first shooting range.
Step S303: and in response to the camera switching instruction, obtaining a second image collected by the second camera.
Step S304: detecting whether a target object exists in the second image.
In this embodiment, after detecting whether the target object exists in the second image, the method may further include:
if the target object does not exist in the second image, step S305 may be executed;
if the target object exists in the second image, steps S305 to S307 may be skipped, and step S308 may be executed.
Step S305: and acquiring the moving track of the terminal before the moment of responding to the camera switching instruction.
And if the target object does not exist in the second image, acquiring the moving track of the terminal before the moment of responding to the camera switching instruction.
In some embodiments, if the target object does not exist in the second image, the movement track of the terminal may be acquired based on a motion sensor of the terminal; specifically, the movement track of the terminal before the time of responding to the camera switching instruction is acquired. When the terminal is displaced, the shooting angle of the camera also changes, which may cause the target object to disappear from the camera's field of view; the movement track of the terminal at that time can, to a certain extent, reflect the movement of the camera relative to the target object. Therefore, if the terminal is controlled to move back along its movement track, the target object can reappear within the second shooting range and thus appear in the second image.
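The sensor-based variant can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation; it assumes the motion sensor already delivers per-sample planar displacements (dx, dy), and the class name `TrackRecorder` is made up for the example:

```python
class TrackRecorder:
    """Accumulates motion-sensor displacement samples into a movement track
    and plays the track back in reverse to guide the terminal home."""

    def __init__(self):
        # Terminal positions over time, starting at an arbitrary origin.
        self.track = [(0.0, 0.0)]

    def add_sample(self, dx, dy):
        # Integrate the latest displacement sample into an absolute position.
        x, y = self.track[-1]
        self.track.append((x + dx, y + dy))

    def return_steps(self):
        # Displacements that, applied in order, move the terminal from its
        # current position back along the recorded track to the origin.
        steps = []
        for i in range(len(self.track) - 1, 0, -1):
            x_prev, y_prev = self.track[i - 1]
            x_cur, y_cur = self.track[i]
            steps.append((x_prev - x_cur, y_prev - y_cur))
        return steps
```

Walking the recorded track backwards is exactly the "move back along the movement track of the terminal" idea above: after applying every step in `return_steps()`, the terminal is back where the target object was still in view.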
In other embodiments, if the target object does not exist in the second image, a plurality of historical images acquired by the second camera before the time of responding to the camera switching instruction may also be acquired, and the movement track of the terminal may be determined based on the plurality of historical images. Specifically, referring to fig. 12, fig. 12 is a schematic flowchart illustrating the flow of step S305 in fig. 11 according to an exemplary embodiment of the present application, where in this embodiment, step S305 may include:
step S3051: and splicing the plurality of historical images to obtain a spliced image.
After the shooting range is reduced, if the shooting angle of the camera changes, the content of the image collected by the camera changes as well. In the process of changing the angle, the camera can 'see' image content at a number of different angles, and if the images corresponding to these contents are stitched together, an image with a larger field of view, namely a stitched image, can be obtained.
It can be understood that if the camera cannot move relative to the terminal body, a change in the shooting angle of the camera must be caused by movement of the terminal. When a user holds the terminal to narrow the shooting range, the terminal may be deviated considerably, so that the field of view of no camera covers the target object.
Therefore, if the target object does not exist in the second image, a plurality of historical images collected by the second camera before the time of responding to the camera switching instruction can be acquired, and the plurality of historical images are stitched to obtain a stitched image. The historical images are preview images that the terminal previously cached from the camera, and the stitched image includes the image content of each of the plurality of historical images.
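The stitching step is not spelled out in the text. As an illustrative sketch (assuming the pairwise translational offsets between consecutive history images are already known, for example from feature or template matching, and representing frames as small 2-D lists of grayscale values), the placement of the history images onto one stitched canvas could look like this:

```python
def stitch(images, offsets):
    """Place history images on one canvas at accumulated (dx, dy) offsets.

    images  -- list of 2-D lists (tiny grayscale frames), all the same size
    offsets -- offsets[i] is frame i's translation relative to frame i-1;
               offsets[0] is (0, 0)
    Returns the stitched canvas and each frame's position on it.
    """
    h, w = len(images[0]), len(images[0][0])
    # Accumulate per-frame offsets into absolute positions.
    positions, x, y = [], 0, 0
    for dx, dy in offsets:
        x, y = x + dx, y + dy
        positions.append((x, y))
    # Canvas bounds that cover every placed frame.
    min_x = min(p[0] for p in positions)
    min_y = min(p[1] for p in positions)
    max_x = max(p[0] for p in positions) + w
    max_y = max(p[1] for p in positions) + h
    canvas = [[0] * (max_x - min_x) for _ in range(max_y - min_y)]
    # Later frames overwrite earlier ones where they overlap.
    for img, (px, py) in zip(images, positions):
        for row in range(h):
            for col in range(w):
                canvas[py - min_y + row][px - min_x + col] = img[row][col]
    return canvas, [(px - min_x, py - min_y) for px, py in positions]
```

The returned per-frame positions are the "position information of the image content of each historical image in the stitched image" used by the later steps.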
In some embodiments, the terminal may buffer a previously acquired preview image in the process of reducing the shooting range, so that when no target object exists in the second image, the previously buffered preview image is acquired as a history image, and a spliced image is obtained by splicing based on a plurality of history images.
As an embodiment, when detecting that the shooting range is reduced, the terminal may cache the preview images collected by the camera in real time, so that the terminal starts caching preview images before the target object disappears from the second image.
As another embodiment, the terminal may also start caching when the target object does not exist in the first image captured by the first camera.
As another embodiment, the terminal may start caching only when the target object does not exist in the second image captured by the second camera, so that power consumption can be reduced.
In some other possible embodiments, when detecting that the shooting range is reduced, the terminal may cache the preview images collected by the camera in real time, so that the terminal starts caching preview images before the target object leaves the field of view of all the cameras. In this case, a target history image in which the target object exists can be found among the cached preview images, that is, among the history images, so that the relative position can be determined with the image coordinate system of the target history image as the designated coordinate system. Specifically, for the implementation in which an image coordinate system serves as the designated coordinate system for determining the relative position, reference may be made to the foregoing implementation in which the image coordinate system of the second image serves as the designated coordinate system; the principles are similar and are not described herein again.
Step S3052: and acquiring the position information of the image content of each historical image in the spliced image according to the sequence of the acquisition of the plurality of historical images.
Because the plurality of historical images are collected within a specified time period, the position information of the plurality of historical images within the stitched image can, to a certain extent, represent the movement track of the terminal during that time period. Therefore, the position information of the image content of each historical image within the stitched image can be acquired according to the order in which the plurality of historical images were collected.
In some embodiments, the position of the center of the history image within the stitched image may be used as its position information. In some other embodiments, the position of another point of the history image within the stitched image may also be used as its position information, which is not limited herein.
Step S3053: and acquiring the moving track of the terminal according to the position information.
In some embodiments, the positions of the plurality of historical images within the stitched image may be connected according to the order in which they were collected to obtain a stitching track of the plurality of historical images, and the stitching track may be used as the movement track of the terminal.
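Putting steps S3052 and S3053 together: once each history image's position within the stitched image is known, the movement track can be derived from the frame centres taken in acquisition order. An illustrative sketch, which ignores the sign convention between image motion and camera motion (the function name and the (x, y) layout are assumptions for the example):

```python
def movement_track(frame_positions, frame_size):
    """Approximate the terminal's movement track from where each history
    image landed inside the stitched image.

    frame_positions -- (x, y) of each frame's top-left corner on the
                       stitched canvas, in acquisition order
    frame_size      -- (height, width) shared by all frames
    """
    h, w = frame_size
    # Use each frame's centre within the stitched image as its position info.
    centres = [(x + w / 2.0, y + h / 2.0) for x, y in frame_positions]
    # Consecutive centre-to-centre displacements, in acquisition order,
    # form the stitching track that stands in for the terminal's movement.
    return [(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(centres, centres[1:])]
```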
Step S306: and outputting movement prompt information according to the movement track.
The movement prompt information is used for instructing the user to move the terminal to a specified position, at which the target object is located in the second shooting range of the second camera. Therefore, when the target object does not exist in the second image, the movement prompt information is output based on the acquired movement track, prompting the user to operate the terminal so that the field of view of at least one camera covers the target object, and the target object can thereby be found.
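Step S306 can be illustrated by reversing the net displacement of the track and mapping it to a coarse instruction. The direction words below assume image coordinates with x to the right and y downward; both the sign convention and the wording are assumptions for illustration, not the patent's specification:

```python
def movement_prompt(track):
    """Turn a movement track (list of (dx, dy) displacements) into a
    user-facing prompt: moving the terminal back by the accumulated
    offset should bring the target into the second shooting range again."""
    net_x = sum(dx for dx, _ in track)
    net_y = sum(dy for _, dy in track)
    parts = []
    # Reverse the net drift: drifted right -> move left, and so on.
    if net_x > 0:
        parts.append("move left")
    elif net_x < 0:
        parts.append("move right")
    if net_y > 0:
        parts.append("move up")
    elif net_y < 0:
        parts.append("move down")
    return " and ".join(parts) if parts else "hold still"
```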
Step S307: and acquiring a second image acquired by the second camera again until a target object exists in the second image acquired by the second camera.
In some embodiments, after step S307, step S308 may continue to be executed. In this way, when the target object does not exist in the field of view of any camera because the terminal or the camera was offset too far while the user was shooting, the user is prompted, based on the terminal movement track, to move the terminal until the field of view of at least one camera can cover the target object again; an image including the target object is then acquired, and a more accurate retrieval of the target object is realized through step S308 and the subsequent steps.
Step S308: and if the target object exists in the second image, taking the image coordinate system of the second image as the designated coordinate system.
Step S309: the relative positions of the target object and the reference object in the specified coordinate system are determined.
Step S310: and determining prompt information according to the relative position.
Step S311: and outputting prompt information.
The shooting method provided by this embodiment may be based on the foregoing embodiments. When the target object does not exist in the second image, that is, when the target object has disappeared from the field of view of all the cameras, movement prompt information is output based on the movement track of the terminal acquired before the time of responding to the camera switching instruction, so as to instruct the user to move the terminal until the second shooting range can capture the target object again. Therefore, when the target object is lost, it reappears within the second field of view of the camera, a second image containing the target object can be acquired based on the second field of view, the user is helped to retrieve the target object quickly, the time spent retrieving the target object is saved, and the user experience is improved.
Referring to fig. 13, a block diagram of a photographing apparatus 1300 according to an embodiment of the present disclosure is shown. The photographing apparatus 1300 is applicable to the foregoing terminal and may include: an image capturing module 1310, a coordinate system determining module 1320, a position determining module 1330, a prompt determining module 1340, and a prompt output module 1350. Specifically:
an image capturing module 1310, configured to acquire a first image captured by the camera within the first shooting range when the shooting range of the terminal is reduced to the first shooting range;
a coordinate system determining module 1320, configured to determine a designated coordinate system based on a second image acquired by the terminal in a second shooting range, where the second shooting range is larger than the first shooting range;
a position determining module 1330 configured to determine a relative position of the target object and a reference object in the designated coordinate system, wherein the reference object is an object in the first image;
a prompt determining module 1340, configured to determine prompt information according to the relative position;
a prompt output module 1350, configured to output the prompt information, where the prompt information is used to indicate a moving direction of the at least one camera, and the moving direction is used to make the first shooting range cover the target object.
Further, the at least one camera includes a first camera and a second camera, the field of view of the first camera is the first shooting range, the field of view of the second camera is the second shooting range, and the image capturing module 1310 includes: a switching instruction detection submodule and a switching instruction response submodule, wherein:
the switching instruction detection submodule is used for detecting whether a camera switching instruction is acquired or not when the terminal uses the second camera;
and the switching instruction response submodule is used for responding to the camera switching instruction if the camera switching instruction is acquired, controlling the terminal to use the first camera to acquire a first image, and reducing the shooting range of the terminal to a first shooting range.
Further, the coordinate system determination module 1320 includes: a second image acquisition sub-module and a coordinate system determination sub-module, wherein:
the second image acquisition submodule is used for acquiring a second image acquired by the second camera when responding to the camera switching instruction;
and the coordinate system determination submodule is used for taking the image coordinate system of the second image as the specified coordinate system if the target object exists in the second image.
Further, the photographing apparatus 1300 further includes: a track acquisition module, a movement prompt module, and an image acquisition module, wherein:
a track obtaining module, configured to obtain, if a target object does not exist in the second image, a moving track of the terminal before a time when the camera switching instruction is responded;
the movement prompt module is configured to output movement prompt information according to the movement track, where the movement prompt information is used for instructing the user to move the terminal to a specified position, at which the target object is located in the second shooting range of the second camera;
and the image acquisition module is used for acquiring the second image acquired by the second camera again until a target object exists in the second image acquired by the second camera.
Further, the trajectory acquisition module includes: history acquisition submodule and history determination submodule, wherein:
the history acquisition submodule is used for acquiring a plurality of history images acquired by the second camera before the moment of responding to the camera switching instruction if the target object does not exist in the second image;
and the history determining submodule is used for determining the movement track of the terminal based on the plurality of history images.
Further, the history determination sub-module includes: an image stitching unit, a position acquisition unit, and a track acquisition unit, wherein:
the image splicing unit is used for splicing the plurality of historical images to obtain a spliced image, and the spliced image comprises the image content of each historical image in the plurality of historical images;
the position acquisition unit is used for acquiring the position information of the image content of each historical image in the spliced image according to the sequence of the acquisition of the plurality of historical images;
and the track acquisition unit is used for acquiring the moving track of the terminal according to the position information.
Further, the terminal is provided with a display screen, and the photographing apparatus further includes: a first image display module, a second image display module, a target object marking module, and a reference object marking module, wherein:
and the first image display module is used for displaying a first image acquired by the terminal in the first shooting range.
The second image display module is used for displaying a second image acquired by the terminal in the second shooting range, the second image and the first image are displayed in different display areas of the display screen, and the second image comprises a target object;
a target object marking module for marking the target object in the second image;
a reference object marking module for marking the reference object in the second image.
The shooting device provided by the embodiment of the application is used for realizing the corresponding shooting method in the method embodiment, has the beneficial effects of the corresponding method embodiment, and is not repeated herein.
In several embodiments provided in the present application, the coupling of the modules to each other may be electrical, mechanical or other forms of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Embodiments of the present application further provide an electronic device, which may include one or more of the following components: a processor, a memory, at least one camera, and one or more applications, where the one or more applications may be stored in the memory and configured to be executed by the one or more processors, and the one or more applications are configured to perform the methods described in the foregoing method embodiments.
In some embodiments, the electronic device may also include a display screen.
In an exemplary embodiment, the electronic device provided in the embodiment of the present application may be the terminal 100 shown in fig. 3, then the processor in the electronic device may be the processor 110 shown in fig. 3, the memory in the electronic device may be the memory 120 shown in fig. 3, the at least one camera in the electronic device may be the at least one camera 130 shown in fig. 3, and the display screen in the electronic device may be the display screen 140 shown in fig. 3.
Referring to fig. 14, a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure is shown. The computer-readable storage medium 1400 has stored therein program code that can be called by a processor to execute the method described in the above embodiments.
The computer-readable storage medium 1400 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 1400 includes a non-volatile computer-readable storage medium. The computer readable storage medium 1400 has storage space for program code 1410 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 1410 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A shooting method is applied to a terminal, the terminal is provided with at least one camera, and the method is applied to the shooting method for acquiring an image including a target object when the shooting range of the terminal is reduced from a second shooting range to a first shooting range; the method comprises the following steps:
when the shooting range of the terminal is reduced from the second shooting range to the first shooting range, acquiring a first image acquired by the camera in the first shooting range;
if the first image does not have the target object, determining a designated coordinate system based on a second image acquired by the terminal in a second shooting range, wherein the second shooting range is larger than the first shooting range, and at least part of the second shooting range is overlapped with the first shooting range; the second shooting range and the first shooting range are different corresponding shooting ranges at different moments; the second shooting range is a shooting range which is passed before the shooting range of the terminal is reduced to the first shooting range;
determining the relative positions of a target object and a reference object in the specified coordinate system, wherein the reference object is an object in the first image; wherein the determining the relative positions of the target object and the reference object in the specified coordinate system comprises: if the reference object is not positioned at the overlapped part of the first shooting range and the second shooting range; acquiring a superposed image between the first image and the second image; calculating a first position of the reference object in the first image relative to the coincident image; calculating a second position of the target object in the second image relative to the coincident image; determining the relative position of the target object and the reference object on the second image according to the first position and the second position;
determining prompt information according to the relative position;
and outputting the prompt information, wherein the prompt information is used for indicating the moving direction of at least one camera, and the moving direction is used for covering the first shooting range on the target object.
2. The method according to claim 1, wherein the at least one camera comprises a first camera and a second camera, the field of view of the first camera is a first shooting range, the field of view of the second camera is a second shooting range, and when the shooting range of the terminal is reduced to the first shooting range, acquiring a first image captured by the camera in the first shooting range comprises:
when the terminal uses the second camera, detecting whether a camera switching instruction is acquired;
and if the camera switching instruction is acquired, responding to the camera switching instruction, controlling the terminal to acquire a first image by using the first camera, and reducing the shooting range of the terminal to a first shooting range.
3. The method of claim 2, wherein determining the designated coordinate system based on a second image acquired by the terminal at a second shooting range comprises:
when the camera switching instruction is responded, a second image acquired by the second camera is acquired;
and if the target object exists in the second image, taking the image coordinate system of the second image as the specified coordinate system.
4. The method of claim 3, further comprising:
if the target object does not exist in the second image, acquiring a moving track of the terminal before the moment of responding to the camera switching instruction;
outputting movement prompt information according to the movement track, wherein the movement prompt information is used for indicating a user to move the terminal to a specified position, and the target object is located in a second shooting range of the second camera at the specified position;
and acquiring the second image acquired by the second camera again until a target object exists in the second image acquired by the second camera.
5. The method according to claim 4, wherein the obtaining the movement trajectory of the terminal before the time point of responding to the camera switching instruction if the target object does not exist in the second image comprises:
if the target object does not exist in the second image, acquiring a plurality of historical images collected by the second camera before the moment of responding to the camera switching instruction;
and determining the movement track of the terminal based on the plurality of historical images.
6. The method according to claim 5, wherein the determining the movement track of the terminal based on the plurality of history images comprises:
splicing the plurality of historical images to obtain a spliced image, wherein the spliced image comprises the image content of each historical image in the plurality of historical images;
acquiring the position information of the image content of each historical image in the spliced image according to the sequence of the acquisition of the plurality of historical images;
and acquiring the moving track of the terminal according to the position information.
7. The method of claim 1, further comprising:
and displaying a first image acquired by the terminal in the first shooting range.
8. The method of claim 7, wherein the terminal is provided with a display screen, the method further comprising:
displaying a second image acquired by the terminal in the second shooting range, wherein the second image and the first image are displayed in different display areas of the display screen, and the second image comprises a target object;
marking the target object in the second image;
marking the reference object in the second image.
9. A photographing apparatus applied to a terminal provided with at least one camera, the apparatus being applied to acquire an image including a target object when a photographing range of the terminal is reduced from a second photographing range to a first photographing range, the apparatus comprising:
the image acquisition module is used for acquiring a first image acquired by the camera in the first shooting range when the shooting range of the terminal is reduced from the second shooting range to the first shooting range;
a coordinate system determining module, configured to determine, if the first image does not have the target object, a designated coordinate system based on a second image acquired by the terminal in the second shooting range, where the second shooting range is larger than the first shooting range; the second shooting range and the first shooting range are different corresponding shooting ranges at different moments; the second shooting range is a shooting range which is passed before the shooting range of the terminal is reduced to the first shooting range;
the position determining module is used for determining the relative positions of a target object and a reference object in the specified coordinate system, wherein the reference object is an object in the first image; the position determining module is further used for determining whether the reference object is located in the overlapping part of the first shooting range and the second shooting range; acquiring a superposed image between the first image and the second image; calculating a first position of the reference object in the first image relative to the coincident image; calculating a second position of the target object in the second image relative to the coincident image; determining the relative position of the target object and the reference object on the second image according to the first position and the second position;
the prompt determining module is used for determining prompt information according to the relative position;
and the prompt output module is used for outputting the prompt information, the prompt information is used for indicating the moving direction of at least one camera, and the moving direction is used for covering the first shooting range on the target object.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-8.
11. A computer-readable storage medium having program code stored therein, the program code being invoked by a processor to perform the method of any of claims 1-8.
CN202010280390.8A 2020-04-10 2020-04-10 Shooting method and device, electronic equipment and storage medium Active CN111479055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010280390.8A CN111479055B (en) 2020-04-10 2020-04-10 Shooting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010280390.8A CN111479055B (en) 2020-04-10 2020-04-10 Shooting method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111479055A CN111479055A (en) 2020-07-31
CN111479055B true CN111479055B (en) 2022-05-20

Family

ID=71751673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010280390.8A Active CN111479055B (en) 2020-04-10 2020-04-10 Shooting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111479055B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970438B (en) * 2020-08-03 2022-06-28 Oppo广东移动通信有限公司 Zoom processing method and device, equipment and storage medium
CN114071003B (en) * 2020-08-06 2024-03-12 北京外号信息技术有限公司 Shooting method and system based on optical communication device
CN112788244B (en) * 2021-02-09 2022-08-09 维沃移动通信(杭州)有限公司 Shooting method, shooting device and electronic equipment
CN115346333A (en) * 2022-07-12 2022-11-15 北京声智科技有限公司 Information prompting method and device, AR glasses, cloud server and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819847A (en) * 2012-07-18 2012-12-12 上海交通大学 Method for extracting movement track based on PTZ mobile camera
JP2017098851A (en) * 2015-11-26 2017-06-01 富士通株式会社 Display control method, display control program and information processing device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202979117U (en) * 2012-08-06 2013-06-05 上海中和软件有限公司 Double-camera multiple dimensional mobile object tracking system
JP2015149600A (en) * 2014-02-06 2015-08-20 ソニー株式会社 image processing apparatus, image processing method, and program
JP6300550B2 (en) * 2014-02-07 2018-03-28 キヤノン株式会社 Automatic focusing device and automatic focusing method
CN104869317B (en) * 2015-06-02 2018-05-04 广东欧珀移动通信有限公司 Smart machine image pickup method and device
JP6643843B2 (en) * 2015-09-14 2020-02-12 オリンパス株式会社 Imaging operation guide device and imaging device operation guide method
CN107800953B (en) * 2016-09-02 2020-07-31 聚晶半导体股份有限公司 Image acquisition device and method for zooming image thereof
CN108734726A (en) * 2017-12-04 2018-11-02 北京猎户星空科技有限公司 A kind of method for tracking target, device, electronic equipment and storage medium
CN108429881A (en) * 2018-05-08 2018-08-21 山东超景深信息科技有限公司 Exempt from the focal length shooting tripod head camera system application process by zoom view repeatedly
CN112333380B (en) * 2019-06-24 2021-10-15 华为技术有限公司 Shooting method and equipment
CN110602389B (en) * 2019-08-30 2021-11-02 维沃移动通信有限公司 Display method and electronic equipment

Also Published As

Publication number Publication date
CN111479055A (en) 2020-07-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant