WO2022022726A1 - Procédé et dispositif de capture d'image - Google Patents

Procédé et dispositif de capture d'image (Image capture method and device)

Info

Publication number
WO2022022726A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
camera
telephoto
shooting
Application number
PCT/CN2021/109922
Other languages
English (en)
Chinese (zh)
Inventor
吴亮
敖欢欢
郭勇
王妙锋
王军
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority claimed from CN202011296335.4A (CN114071009B)
Application filed by 华为技术有限公司
Publication of WO2022022726A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a photographing method and device.
  • electronic devices such as mobile phones or watches can use a wide-angle camera with a smaller equivalent focal length to capture a target image with a larger field of view (FOV).
  • the clarity of local details on this target image is low.
  • when an electronic device uses a wide-angle camera to shoot a large scene or distant scenery, the user may not be able to clearly see the details in the target image.
  • Embodiments of the present application provide a shooting method and device that refer to an image collected by a first camera with a larger field of view, capture images with a second camera with a smaller field of view, and stitch them to obtain a target image with a larger field of view. The target image has high clarity, clear details, and a better shooting effect.
  • an embodiment of the present application provides a shooting method, which is applied to an electronic device.
  • the electronic device includes a first camera and a second camera, and the equivalent focal length of the second camera is greater than the equivalent focal length of the first camera.
  • the method includes: the electronic device starts a photographing function; after the electronic device detects a user's photographing operation, the electronic device displays a first image on a photographing interface, and a guide frame superimposed on the first image.
  • the first image is obtained according to the image collected by the first camera
  • the guide frame includes a plurality of grids, and a single grid corresponds to the size of the field of view of the second camera.
  • the electronic device displays splicing information on the shooting interface, the splicing information is used to indicate the shooting progress, and the splicing information corresponds to the multi-frame target shooting images matched with the multiple grids in the guide frame, and the target shooting images are acquired by the second camera.
  • the electronic device generates a stitched image according to the multi-frame target captured images. After the shooting, the electronic device generates the target image according to the stitched image.
  • the electronic device can refer to the first image captured by the first camera with a smaller equivalent focal length and a larger field of view, use the second camera with a larger equivalent focal length and a smaller field of view to capture images of the target, and stitch them to obtain a target image with a larger field of view; the target image has high definition, clear details, and a better shooting effect.
  • the first image is displayed as a background image, and a guide frame can be superimposed on the first image to guide the second camera to move to shoot a target image that matches the grid in the guide frame.
  • the electronic device may also display stitching information on the shooting interface to indicate the current shooting progress in real time for the user.
  • the photographing interface further includes an image frame superimposed on the first image, the screen range of the first image in the image frame corresponds to the screen range of the second image, and the second image is the image captured by the second camera.
  • the picture area of the first image in the image frame may be equal to or slightly smaller than the picture area of the second image.
  • the electronic device can prompt the user with the real-time shooting range and picture range of the second camera through the image frame, so as to present the picture information collected by the second camera in real time to the user.
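  • As a rough illustration of where this image frame sits (not the method claimed here): if the two cameras' optical axes are assumed to be approximately aligned and parallax is ignored, the second camera's field of view maps to a centred rectangle on the first image whose linear size is the first image's size divided by the equivalent-focal-length ratio. The sketch below encodes only that assumption; all names are illustrative.

```python
def telephoto_frame_on_wide(wide_w, wide_h, wide_eq_focal_mm, tele_eq_focal_mm):
    """Approximate the image frame (second camera's range) on the first image.

    Assumes roughly aligned optical axes and no parallax, so the telephoto
    field of view corresponds to a centred rectangle scaled by the ratio of
    equivalent focal lengths. Purely illustrative; the patent does not state
    how the device computes the frame.
    """
    k = tele_eq_focal_mm / wide_eq_focal_mm          # e.g. 125 / 26 ~= 4.8
    frame_w, frame_h = wide_w / k, wide_h / k
    x0, y0 = (wide_w - frame_w) / 2, (wide_h - frame_h) / 2
    return x0, y0, frame_w, frame_h

# With a 26mm-equivalent wide camera and a 125mm-equivalent telephoto camera,
# the frame covers roughly 1/4.8 of the wide image in each dimension, centred.
print(telephoto_frame_on_wide(4000, 3000, 26, 125))
```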
  • the content of the first image in the grid is bound to the grid.
  • the first image is the first frame of image collected by the first camera after the user's photographing operation is detected; or, the first image is a fused image of Q frames of images collected by the first camera after the user's photographing operation is detected, where Q is an integer greater than 1.
  • the first image is fixed instead of being refreshed in real time.
  • the first image is an image acquired by the first camera during the photographing process after detecting the photographing operation of the user.
  • the first image is refreshed in real time.
  • the stitching information is a stitched image thumbnail, and the stitched image thumbnail is obtained according to the down-sampled target captured images, or the stitched image thumbnail is obtained by down-sampling the stitched image; or, the stitching information is a stitching frame, where the stitching frame is the border of the stitched image thumbnail; or, the stitching information is the matched grids or the borders of the matched grids during the photographing process.
  • the stitching information used to indicate the shooting progress on the shooting interface may be a stitched image thumbnail, and the stitched image thumbnail can be obtained in various ways.
  • the stitching information is an enlarged stitched image thumbnail, and the stitched image thumbnail is obtained from the down-sampled target captured images, or the stitched image thumbnail is obtained by down-sampling the stitched image; or, the stitching information is an enlarged stitching frame, where the stitching frame is the border of the stitched image thumbnail; or, the stitching information is the matched grid or the border of the matched grid that is enlarged and displayed during the photographing process.
  • the stitching information used to indicate the shooting progress on the shooting interface may be in various forms, such as a thumbnail image of the stitched image displayed in an enlarged manner, a stitching frame, a matched grid, or a frame of a matched grid.
  • displaying the first image on the shooting interface of the electronic device includes: the electronic device enlarges and displays, on the shooting interface, the target area image corresponding to the guide frame on the first image, and the ratio r of the size of the target area image to the size of the guide frame is greater than or equal to 1.
  • the electronic device does not display the entire first image and the corresponding guide frame on the shooting interface, but enlarges and displays the target area image corresponding to the guide frame on the first image, so that the grids in the guide frame can be relatively large, which makes it easier for the electronic device to move and match according to the larger grids.
  • the ratio between the equivalent focal length of the second camera and the equivalent focal length of the first camera is greater than or equal to the first preset value.
  • when the ratio between the equivalent focal length of the second camera and the equivalent focal length of the first camera is greater than or equal to the first preset value, the grids would be small if the entire first image were displayed; by enlarging and displaying the target area image corresponding to the guide frame on the first image, the guide frame and its grids are larger.
  • the method further includes: after the electronic device starts the photographing function, displaying a third image on the preview interface, where the third image is an image collected by the first camera.
  • the electronic device displays the image captured by the first camera on the preview interface.
  • the method further includes: the electronic device superimposes and displays a guide frame on the third image of the preview interface, where the guide frame includes M rows*N columns of grids, and the third image corresponds to R*R grid, M and N are both positive integers less than or equal to R, and at least one of M and N is greater than 1.
  • R is K1, K2, or the larger value of K1 and K2
  • K1 is the ratio of the equivalent focal length of the second camera to the equivalent focal length of the first camera, and is rounded up or down
  • K2 is the ratio of the field of view of the first camera to the field of view of the second camera, rounded up or down.
  • the division of the grid in the guide frame is related to the equivalent focal length and/or field of view of the first camera and the second camera.
  • the guide frame is located in the middle of the third image.
  • the guide frame on the preview interface may be located in the middle area of the third image by default.
  • the method further includes: prompting the user to set a guide frame on the preview interface by the electronic device.
  • the electronic device acquires the position and/or the specification of the guide frame in response to the user's first setting operation, where the specification includes the values of M and N.
  • the electronic device superimposes and displays the guide frame on the third image of the preview interface, including: the electronic device superimposes and displays the guide frame on the third image of the preview interface according to the position and/or specification of the guide frame.
  • the guide frame on the preview interface may be set by the user.
  • the first setting operation is an area selection operation by the user based on the third image, and the guide frame is used to cover the area selected by the user; or, the first setting operation is an operation of the user specifying a subject based on the third image, and the guide frame is used to cover the subject; or, the first setting operation is an operation of the user selecting a specification control, and the specification control is used to indicate the specification of the guide frame.
  • the user can set the guide frame in various ways.
  • the electronic device displaying the third image on the preview interface includes: the electronic device enlarges and displays, on the preview interface, the target area image corresponding to the guide frame on the third image, and the ratio r of the size of the target area image to the size of the guide frame is greater than or equal to 1.
  • the ratio between the equivalent focal length of the second camera and the equivalent focal length of the first camera is greater than or equal to the first preset value.
  • when the ratio between the equivalent focal length of the second camera and the equivalent focal length of the first camera is greater than or equal to the first preset value, the grids would be small if the entire third image were displayed; by enlarging and displaying the target area image corresponding to the guide frame on the third image, the guide frame and its grids are larger.
  • the shooting interface and/or the preview interface further includes first prompt information, where the first prompt information is used to prompt the user to shoot according to the grid in the guide frame.
  • the user can shoot according to the grid in the guide frame according to the prompt information.
  • the shooting interface further includes second prompt information, where the second prompt information is used to indicate the shooting sequence of the grids in the guide frame.
  • the user can move the second camera to shoot according to the shooting sequence.
  • the method further includes: the electronic device prompts the user to set a shooting order of the grids in the guide frame.
  • the electronic device in response to the user's second setting operation, acquires the shooting order of the grids in the guide frame.
  • the shooting order of the grids in the guide frame may be set by the user.
  • the preview interface further includes an image frame superimposed on the third image, the screen range of the third image in the image frame corresponds to the screen range of the second image, and the second image is the image captured by the second camera.
  • the frame area of the third image in the image frame may be equal to or slightly smaller than the frame area of the second image.
  • the electronic device can prompt the user with the real-time shooting range and picture range of the second camera through the image frame, so as to present the picture information collected by the second camera in real time to the user.
  • the method further includes: when the first target captured image matches the first grid in the guide frame, displaying the first grid differently from the other grids.
  • the electronic device can prompt the user of the matching progress between the target captured image and the grid in the guide frame, and can also facilitate the user to know the next grid to be matched, and guide the user to move the direction or path of the second camera.
  • the method further includes: when the target photographed image matches a grid in the guide frame for the first time, the electronic device displays a thumbnail of the first matched target photographed image on the photographing interface.
  • the electronic device may display the thumbnail image of the target photographed image that matches the grid for the first time to the user, so as to facilitate the user to view.
  • the method further includes: when the captured image of the target matches a grid in the guide frame for the first time, zooming in and displaying the thumbnail of the captured image of the target matched for the first time on the capturing interface.
  • the electronic device can enlarge and display the thumbnail image of the target captured image that matches the grid for the first time, so as to facilitate the user to view through the large image.
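  • The text does not specify how the device decides that a target captured image "matches" a grid. One simple way to picture it (an assumption, not the claimed method) is to locate a downscaled copy of the telephoto frame inside the wide reference image and check which grid cell its centre falls into, as in the sketch below.

```python
import cv2

def matched_grid(wide_img, tele_img, k, r=5):
    """Find which cell of an r*r grid over the wide image the telephoto frame covers.

    Illustrative only: the telephoto frame is downscaled by the focal-length
    ratio k, located in the wide image with normalised cross-correlation, and
    the centre of the best match is mapped to a grid cell. The patent does not
    say that template matching is the mechanism used.
    """
    h, w = tele_img.shape[:2]
    templ = cv2.resize(tele_img, (max(1, int(w / k)), max(1, int(h / k))))
    result = cv2.matchTemplate(wide_img, templ, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    cx = top_left[0] + templ.shape[1] / 2.0
    cy = top_left[1] + templ.shape[0] / 2.0
    col = int(cx * r / wide_img.shape[1])
    row = int(cy * r / wide_img.shape[0])
    return (row, col), score   # a score near 1.0 indicates a confident match
```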
  • the electronic device generating a stitched image according to the multi-frame target captured images includes: during the photographing process, before all the grids in the guide frame have been matched, the electronic device generates a stitched image according to the multi-frame target captured images corresponding to the grids that have already been matched; or, after the grid matching in the guide frame is completed, the electronic device generates a stitched image according to the multi-frame target captured images matched with the grids; or, after the shooting ends, the electronic device generates a stitched image according to the multi-frame target captured images corresponding to the matched grids.
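  • The passage above leaves the stitching algorithm itself open. Purely to illustrate the step "generate a stitched image from the multi-frame target captured images", the sketch below uses OpenCV's generic stitcher; this is an assumption for demonstration, not the implementation described by the application.

```python
import cv2

def stitch_target_frames(frames):
    """Stitch matched target (telephoto) frames into one larger image.

    `frames` is a list of BGR images already matched to grids in the guide
    frame. OpenCV's generic stitcher is used only for illustration.
    """
    stitcher = cv2.Stitcher_create()
    status, stitched = stitcher.stitch(frames)
    if status != 0:  # 0 == cv2.Stitcher_OK
        raise RuntimeError(f"stitching failed with status {status}")
    return stitched

# Example usage (file names are placeholders):
# frames = [cv2.imread(p) for p in ["tele_0.jpg", "tele_1.jpg", "tele_2.jpg"]]
# cv2.imwrite("stitched.jpg", stitch_target_frames(frames))
```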
  • the electronic device generates the target image in different ways according to the spliced image.
  • the electronic device determining that the shooting ends includes: after the electronic device completes the matching of the grids in the guide frame, determining that the shooting ends.
  • the electronic device can automatically determine that the shooting is over.
  • the target image is obtained by cropping the spliced image, and the edges of the target image are aligned; or, the target image is obtained by filling the unaligned edge area of the spliced image according to the first image, and the edges of the target image are aligned.
  • the electronic device can crop or fill the edges of the stitched image to obtain an edge-aligned target image.
  • the electronic device determining that the shooting ends includes: before all the grids in the guide frame have been matched, if a user's operation to stop taking pictures is detected, determining that the shooting ends; or, if the electronic device moves out of the guide frame along its moving direction, determining that the shooting ends; or, if the deviation between the moving direction of the electronic device and the direction indicated by the guide frame is greater than or equal to the second preset value, determining that the shooting ends.
  • the electronic device may determine the end of shooting.
  • the target image is obtained according to the stitched image corresponding to the entire rows/columns of grids that have been matched; or, the target image is obtained according to the stitched image corresponding to the matched grids and the image area of the first image corresponding to the unmatched grids.
  • the electronic device can obtain the target image according to the stitched images corresponding to the entire row/column grid, or obtain the target image by performing filling or super-resolution processing in combination with the image area corresponding to the unmatched grid on the first image.
  • the method further includes: the electronic device obtains a target zoom magnification, the guide frame corresponds to the target zoom magnification, and the target zoom magnification is greater than the zoom magnification of the first camera and smaller than the zoom magnification of the second camera .
  • the electronic device generates a target image according to the stitched image, including: the electronic device cuts the stitched image to generate a target image, and the target image corresponds to the target zoom magnification.
  • the electronic device can stitch the target images captured by the second camera with a smaller field of view to obtain a stitched image with a larger field of view and higher clarity, and then crop the stitched image to obtain a clear target image corresponding to the target zoom magnification.
  • the electronic device does not need to perform image magnification through digital zoom, so the high resolution of the second camera and the high resolution of the second image can be retained, and the zoom effect of optical zoom can be achieved.
  • the size of the target image is consistent with the size of the image area corresponding to the field angle of the target zoom magnification.
  • the electronic device can obtain the target image of the size corresponding to the field angle of the target zoom magnification.
  • the method further includes: the electronic device displays a target frame on the shooting interface, the target frame is located in the middle of the third image, and the size of the target frame is the same as the size of the image area corresponding to the field of view of the target zoom magnification.
  • the electronic device can prompt the user with the position and size of the image area corresponding to the field of view angle of the target zoom magnification, so that the user can know the size of the target image that can be obtained according to the target zoom magnification.
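  • To make the cropping step above concrete, the sketch below assumes a simple pinhole model in which the imaged field scales as 1/zoom in each linear dimension, and that the full stitched image corresponds to some known zoom magnification (for example, the first camera's magnification when the guide frame covers the whole first image). Both are assumptions for illustration; the text only states that the target image corresponds to the target zoom magnification, which lies between the two cameras' magnifications.

```python
def crop_for_target_zoom(stitched, stitched_zoom, target_zoom):
    """Centre-crop the stitched image to the field of view of `target_zoom`.

    Assumes the field of view scales as 1/zoom in each linear dimension and
    that the whole stitched image corresponds to `stitched_zoom`. Because the
    stitched image already carries telephoto-level detail, the crop needs no
    digital upscaling, which is the point made in the text above.
    """
    if not 0 < stitched_zoom <= target_zoom:
        raise ValueError("target zoom must be at least the stitched image's zoom")
    h, w = stitched.shape[:2]
    scale = stitched_zoom / target_zoom              # <= 1
    crop_w, crop_h = int(w * scale), int(h * scale)
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
    return stitched[y0:y0 + crop_h, x0:x0 + crop_w]
```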
  • an embodiment of the present application provides another shooting method, which is applied to an electronic device.
  • the electronic device includes a first camera and a second camera, and the equivalent focal length of the second camera is greater than the equivalent focal length of the first camera.
  • the method includes: the electronic device starts a photographing function.
  • the electronic device displays a third image on the preview interface, and superimposes an image frame on the third image.
  • the third image is an image captured by the first camera, and the screen range of the third image in the image frame is the same as that of the second image.
  • the second image is an image collected by the second camera.
  • after detecting the user's photographing operation, the electronic device displays the first image and the image frame on the photographing interface, and the first image is obtained according to the image collected by the first camera.
  • the electronic device displays stitching information on the shooting interface.
  • the stitching information is used to indicate the shooting progress.
  • the stitching information corresponds to the multi-frame target shooting images obtained during the shooting process, and the adjacent target shooting images match each other.
  • the electronic device generates a stitched image according to the multi-frame target captured images. After the shooting, the electronic device generates the target image according to the stitched image.
  • the electronic device can refer to the first image captured by the first camera with a smaller equivalent focal length and a larger field of view, use the second camera with a larger equivalent focal length and a smaller field of view to capture images of the target, and stitch them to obtain a target image with a larger field of view; the target image has high definition, clear details, and a better shooting effect.
  • the electronic device displays an image frame on the preview interface and the shooting interface, so as to facilitate the user to move the second camera according to the real-time shooting range of the second camera.
  • the electronic device may also display stitching information on the shooting interface to indicate the current shooting progress for the user in real time.
  • the electronic device determining the end of shooting includes: after the electronic device detects the user's operation to stop taking photos, determining that the shooting ends; or, after the electronic device obtains a preset number of frames of target captured images, determining that the shooting ends.
  • the electronic device may determine to end shooting in various ways.
  • the target image has a regular shape; the target image is obtained by cropping the spliced image; or the target image is obtained by filling the edge area of the spliced image according to the first image.
  • the target image can be obtained by cropping or padding the stitched image.
  • the method further includes: the electronic device acquires a target image range set by the user, and the size of the target image is consistent with the target image range.
  • the user can set the target image range, and the electronic device can process the stitched image according to the target image range, so as to generate the target image of the corresponding size.
  • an embodiment of the present application provides a photographing apparatus, and the apparatus is included in an electronic device.
  • the device has the function of implementing the behavior of the electronic device in any of the above aspects and possible designs, so that the electronic device executes the photographing method performed by the electronic device in any of the possible designs in the above-mentioned aspects.
  • This function can be implemented by hardware or by executing corresponding software by hardware.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions.
  • the apparatus may include an activation unit, a detection unit, a display unit, a generation unit, and the like.
  • an embodiment of the present application provides an electronic device, including: a first camera and a second camera for capturing images; a screen for displaying an interface; one or more processors; a memory; and one or more computer programs stored in the memory, where the one or more computer programs include instructions that, when executed by the electronic device, cause the electronic device to perform the photographing method performed by the electronic device in any possible design of the above aspects.
  • an embodiment of the present application provides an electronic device, including: one or more processors; and a memory, where code is stored in the memory.
  • when the code is executed by the one or more processors, the electronic device is caused to execute the photographing method performed by the electronic device in any possible design of the above aspect.
  • an embodiment of the present application provides a computer-readable storage medium, including computer instructions, which, when the computer instructions are executed on an electronic device, enable the electronic device to execute the photographing method in any of the possible designs in the foregoing aspect.
  • an embodiment of the present application provides a computer program product, which, when the computer program product runs on a computer, enables the computer to execute the photographing method performed by the electronic device in any of the possible designs in the above aspect.
  • an embodiment of the present application provides a chip system, and the chip system is applied to an electronic device.
  • the chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected by lines; the interface circuit is used for receiving signals from the memory of the electronic device and sending signals to the processor, and the signals are included in the memory Stored computer instructions; when the processor executes the computer instructions, the electronic device is made to execute the photographing method in any possible design of the above aspect.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a photographing method provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of a group of interfaces provided by an embodiment of the present application.
  • FIG. 3B is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 4 is another interface schematic diagram provided by an embodiment of the present application.
  • FIG. 5 is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 7A is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 7B is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 8 is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 9A is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 9B is a schematic diagram of a group of shooting sequences provided by an embodiment of the present application.
  • FIG. 10A is a set of interface schematic diagrams and a mobile phone moving effect schematic diagram provided by an embodiment of the present application.
  • FIG. 10B is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 11 is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 12B is a schematic diagram of a group of images provided by an embodiment of the present application.
  • FIG. 12C is a schematic diagram of another group of images provided by an embodiment of the present application.
  • FIG. 12D is a schematic diagram of another set of images provided by an embodiment of the present application.
  • FIG. 13A is a schematic diagram of an image fusion process provided by an embodiment of the present application.
  • FIG. 13B is a schematic diagram of another image fusion process provided by an embodiment of the present application.
  • FIG. 13C is a schematic diagram of a group of interfaces provided by an embodiment of the present application.
  • FIG. 14 is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 15A is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 15B is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 15C is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 15D is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 16 is another interface schematic diagram provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a group of images provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of the identification of a group of target images provided by an embodiment of the present application.
  • FIG. 19 is a schematic flowchart of another photographing method provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of a group of interfaces provided by an embodiment of the present application.
  • FIG. 21 is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 22 is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 23A is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 23B is a schematic diagram of an interface and a schematic diagram of a target image provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of a group of interfaces provided by an embodiment of the present application.
  • FIG. 26A is a set of schematic diagrams of a guide frame and a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 26B is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 27 is a schematic diagram of a group of interfaces provided by an embodiment of the present application.
  • FIG. 28 is another set of interface schematic diagrams and target image schematic diagrams provided by an embodiment of the present application.
  • FIG. 29 is another interface schematic diagram provided by an embodiment of the present application.
  • FIG. 30 is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • First image: the background image displayed on the shooting interface.
  • the first image is a wide-angle image.
  • the first image may be fixed and not refreshed, for example, the first frame of wide-angle image captured by the wide-angle camera after the user's photographing operation is detected, or the initial wide-angle image in the following embodiments.
  • the first image may also be refreshed in real time, for example, a wide-angle image collected in real time by the wide-angle camera during the photographing process.
  • Second image: the image captured by the second camera in real time.
  • the second image is a telephoto image.
  • Target captured image: a second image that matches a grid in the guide frame.
  • in the following embodiments, the target captured image may be a target telephoto image that matches a grid in the guide frame.
  • Third image: the real-time changing image displayed on the preview interface, collected in real time by the first camera.
  • the first camera is a wide-angle camera
  • the third image may be a wide-angle image displayed on the preview interface in the following embodiments.
  • Stitching information: used to indicate the shooting progress of the photographing process, that is, the matching progress of the grids in the guide frame during the photographing process.
  • the stitching information corresponds to the multi-frame target telephoto images matched with the grids in the guide frame during the photographing process.
  • the splicing information may be the spliced image thumbnails, splicing frames, matched grids or borders of matched grids displayed on the shooting interface in the following embodiments.
  • Image frame: used to indicate the real-time shooting range of the second camera.
  • the image frame is the telephoto frame in the following embodiments.
  • the first setting operation is an operation for the user to set the position and/or specification of the guide frame.
  • the second setting operation is an operation for the user to set the shooting order of the grids in the guide frame.
  • First target telephoto image: a frame of target telephoto image obtained during the photographing process.
  • First grid: a grid in the guide frame that matches the first target telephoto image.
  • Second target telephoto image: another frame of target telephoto image obtained during the photographing process.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • the embodiment of the present application provides a shooting method, which can be applied to an electronic device. The method refers to an image collected by a first camera with a larger field of view, uses a second camera (telephoto camera) with a smaller field of view to capture images, and stitches them to obtain a target image with a larger field of view, so that the target image has higher clarity and resolution, clear local details, a prominent subject, and a better shooting effect.
  • the equivalent focal length of the second camera is relatively large, and the field of view angle is relatively small, for example, it may be a telephoto camera or an ultra-telephoto camera.
  • the first camera involved in the embodiments of the present application may be a camera with a smaller equivalent focal length and a larger field of view, such as a wide-angle camera, an ultra-wide-angle camera, or a panoramic camera.
  • the equivalent focal length of the second camera may be 240mm, and the field angle may be 10°.
  • the equivalent focal length of the second camera may be 125mm, and the field of view angle may be 20°.
  • the equivalent focal length of the second camera may be 80mm, and the field of view angle may be 30°.
  • the equivalent focal length of the first camera may be 26mm, and the field angle may be 80°.
  • the equivalent focal length of the first camera may be 16mm, and the field of view angle may be 120°.
  • for example, the first camera is a wide-angle camera and the second camera is a telephoto camera; or, the first camera is an ultra-wide-angle camera and the second camera is a telephoto camera; or, the first camera is a wide-angle camera and the second camera is a super-telephoto camera.
  • the field of view of the image captured by the second camera and the target image obtained by splicing is smaller than or equal to the field of view of the first camera.
  • the equivalent focal length of the second camera may be greater than or equal to a preset value, so that the zoom magnification of the second camera is larger, the field of view is smaller, and the image resolution is higher; in other embodiments, the ratio of the equivalent focal length of the second camera to that of the first camera may be greater than or equal to a preset value 1, so that the ratio of the zoom magnification of the second camera to that of the first camera is larger, and the ratio of the field of view of the first camera to that of the second camera is larger.
  • the preset value 1 may be 2 or 3 or the like. In this way, the target image obtained by the electronic device has higher definition and resolution, clearer local details, more prominent subject, and better shooting effect.
  • the equivalent focal length refers to the focal length of a 35mm camera lens to which the actual focal length of the camera corresponds, converted according to the length of the diagonal of the image area of the camera's photoelectric sensor chip.
  • the zoom ratio describes a relative equivalent focal length. For example, if the zoom ratio of the wide-angle camera is defined as 1, then the zoom ratio of another camera (such as the telephoto camera) is equal to the ratio of the equivalent focal length of that camera to the equivalent focal length of the wide-angle camera used as the benchmark.
  • the size of the field of view angle determines the shooting range of the camera: the larger the field of view angle, the larger the shooting range; and the larger the equivalent focal length, the smaller the field of view angle.
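  • The relations above can be checked numerically. The sketch below uses the standard 35mm-equivalent convention (diagonal of the full-frame image area ≈ 43.27mm) to derive a zoom ratio and an approximate diagonal field of view from an equivalent focal length; this formula is not stated in the application, but it reproduces the example pairs quoted earlier (26mm with about 80°, 125mm with about 20°, 240mm with about 10°) to within rounding.

```python
import math

FULL_FRAME_DIAGONAL_MM = 43.27   # diagonal of a 35mm (full-frame) image area

def zoom_ratio(eq_focal_mm, wide_eq_focal_mm=26.0):
    """Zoom ratio relative to the wide-angle camera taken as the 1x benchmark."""
    return eq_focal_mm / wide_eq_focal_mm

def diagonal_fov_deg(eq_focal_mm):
    """Approximate diagonal field of view implied by an equivalent focal length,
    via the pinhole relation FOV = 2 * atan(d / (2 * f_eq))."""
    return math.degrees(2 * math.atan(FULL_FRAME_DIAGONAL_MM / (2 * eq_focal_mm)))

print(zoom_ratio(125.0, 26.0))    # ~4.8x for the 125mm telephoto example
print(diagonal_fov_deg(26.0))     # ~79.5 degrees, close to the stated 80 degrees
print(diagonal_fov_deg(125.0))    # ~19.6 degrees, close to the stated 20 degrees
```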
  • the target image obtained by splicing the images captured by the second camera may be in the form of a wide format (including a horizontal format or a vertical format), a square format, an ultra-wide format, or a panorama.
  • the aspect ratio of the target image can be 2:1, 9:16, 1:1, or 2.39:1, etc.
  • target images in different formats can give users different visual feelings, so an appropriate format can be chosen for different subjects or themes.
  • landscape subjects can be shot in horizontal format to show the wide and atmospheric characteristics of the scene.
  • high-rise buildings, towers, mountains and other themes can be shot in vertical format to express a towering and upright picture effect.
  • the shooting methods provided in the embodiments of the present application can be used for rear image shooting, and can also be used for front image shooting.
  • the electronic device may be a mobile phone, a tablet, a wearable device (e.g., a smart watch), an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a laptop, an ultra-mobile personal computer (UMPC), a personal digital assistant (PDA), or the like.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light. Sensor 180L, bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the display screen 194 may be used to display a preview interface and a photographing interface and the like in the photographing mode.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • an optical image of the object is generated through the lens and projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may include cameras with different focal lengths, for example, a first camera with a short focal length and a second camera with a long focal length.
  • the equivalent focal length of the first camera is small (for example, 13mm, 16mm, 26mm, 30mm or 40mm, etc.), and the field of view angle is large (for example, the field of view angle can be 80°, 120°, or 150°, etc.), which can be used for Take larger pictures such as landscapes.
  • a current wide-angle camera, an ultra-wide-angle camera, a panoramic camera, and other cameras with a larger field of view may be referred to as the first camera.
  • the equivalent focal length of the second camera is larger (for example, 80mm, 125mm, 150mm or 240mm, etc.), and the field of view angle is small (for example, the field of view angle can be 10°, 20° or 30°, etc.), so it can be used to shoot distant objects, but the area that can be photographed is small.
  • both the current telephoto camera and the ultra-telephoto camera can be called the second camera.
  • the second camera is fixed, and the user can move the second camera by moving the electronic device 100 .
  • the second camera can be moved independently, and the user can directly move the second camera through a certain key, control or operation without moving the mobile phone; or, the mobile phone can automatically control the movement of the second camera.
  • as the second camera moves, the content of the picture captured by the second camera also changes accordingly.
  • the camera 193 may further include a depth camera for measuring the object distance of the object to be photographed, and other cameras.
  • the depth camera may include a 3-dimensions (3D) depth camera, a time of flight (TOF) depth camera, or a binocular depth camera, and the like.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • by running the instructions stored in the internal memory 121, the processor 110 can refer to the image collected by the first camera, use the second camera to capture images, and stitch them to obtain a target image with a larger field of view, so that the target image has high definition, clear details, and a better shooting effect.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or have a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the first camera and the second camera in the camera 193 can be used to capture images;
  • the display screen 194 can be used to display a preview interface and a shooting interface when taking pictures;
  • by running the instructions stored in the internal memory 121, the processor 110 can refer to the image collected by the first camera with a larger field of view, use the second camera with a smaller field of view to capture images, and stitch them to obtain a target image with a larger field of view, so that the target image has high sharpness, clear details, and a better shooting effect.
  • the photographing method provided by the embodiment of the present application will be described below by taking the electronic device as a mobile phone having the structure shown in FIG. 1 as an example. As shown in Figure 2, the method may include:
  • the mobile phone starts the camera function.
  • the camera function of the mobile phone can be activated.
  • the phone can launch the camera app, or launch other apps with a shooting function (such as AR apps such as Douyin or Hetu cyberverse), thereby launching the app's camera function.
  • after the mobile phone detects that the user clicks the camera icon 301 shown in (a) in FIG. 3A, the mobile phone starts the photographing function of the camera application and displays the preview interface shown in (b) in FIG. 3A.
  • when the mobile phone displays a desktop or a non-camera application interface, it starts the camera function after detecting the user's voice command to open the camera application, and displays the preview interface shown in (b) of FIG. 3A.
  • the mobile phone can also activate the camera function in response to other user touch operations, voice commands or quick gestures and other operations, and the embodiment of the present application does not limit the operation of triggering the mobile phone to activate the camera function.
  • the mobile phone can use the shooting method provided by the embodiment of the present application to collect multiple frames of images with a second camera with a smaller field of view, and stitch the multiple frames of images into a target image with a larger field of view.
  • the clarity and resolution of the target image are high, the local details are clear, the main body is prominent, and the shooting effect is better.
  • the shooting method provided by the embodiment of the present application can use the second camera with a smaller field of view to collect multiple frames of images, and then stitch the multiple frames of images into a target image with a larger field of view.
  • the mobile phone can use the shooting method provided by the embodiment of the present application to refer to the image collected by the first camera with a larger field of view, use the second camera with a smaller field of view to shoot images, and stitch them to obtain a target image with a larger field of view; the target image has high clarity and resolution, clear local details, a prominent subject, and a better shooting effect.
  • the target photographing mode may be specifically referred to as a wide-frame mode, a wide-view mode, or a high-definition image mode, and the embodiment of the present application does not limit the specific name of the target photographing mode.
  • the target photographing mode is taken as the wide-view mode as an example for description.
  • the wide view mode is entered as shown in (c) in FIG. 3A .
  • the interface shown in (d) in FIG. 3A is displayed; after the mobile phone detects the operation of the user clicking the control 304, the wide-view mode is entered as shown in (c) in FIG. 3A.
  • when the mobile phone displays a desktop or a non-camera application interface, after detecting the user's voice command to enter the wide-view mode, the mobile phone starts the camera function and enters the wide-view mode as shown in (c) of FIG. 3A.
  • the mobile phone can also activate the camera function and enter the wide-view mode in response to other user touch operations, voice commands, or shortcut gestures.
  • the mobile phone can prompt the user of the function of the shooting mode in the wide-view mode by displaying information or voice broadcasting.
  • the mobile phone displays text prompt information on the preview interface: in the wide-view mode, you can refer to the image captured by the first camera with a larger field of view, capture images with the second camera with a smaller field of view, and stitch them to obtain a target image with a larger field of view.
  • in the wide-view mode, the mobile phone can prompt the user about which specific cameras are used as the first camera and the second camera.
  • the mobile phone displays text information on the preview interface to prompt the user that the first camera is a wide-angle camera and the second camera is a telephoto camera.
  • the first camera and the second camera in the wide-view mode are cameras set by the user, for example, the first camera is an ultra-wide-angle camera set by the user, and the second camera is a telephoto camera set by the user.
  • This embodiment of the present application does not limit the specific manner in which the user sets the first camera and the second camera.
  • the first camera and the second camera in the wide-view mode are default cameras, for example, the first camera is a wide-angle camera by default, and the second camera is a telephoto camera by default.
  • the user can also modify the camera types of the first camera and the second camera.
  • the following description takes the case where the first camera is a wide-angle camera and the second camera is a telephoto camera as an example.
  • the mobile phone displays a wide-angle image on the preview interface.
  • after the phone starts the camera function, it enters the preview state.
  • the mobile phone can collect wide-angle images in real time through the wide-angle camera according to a preset frame rate 1, and display the obtained wide-angle images on the preview interface, so as to present the global image (or panoramic image) within a larger field of view to the user.
  • the wide-angle camera is the first camera.
  • the mobile phone can also display a guide frame for guiding the telephoto camera to move and shoot on the preview interface, so that the telephoto camera can shoot multiple frames of images according to the guide frame during the photographing process.
  • the guide frame is superimposed on the wide-angle image in the form of a transparent floating frame.
  • the field of view of the guide frame is smaller than or equal to the field of view of the wide-angle camera. That is to say, the field of view of the target image generated by the mobile phone according to the instructions of the guide frame is smaller than or equal to the field of view of the wide-angle camera.
  • the maximum number of grids that the guide frame can include is R*R, and R is related to K1 and/or K2.
  • K1 is the ratio of the equivalent focal length of the telephoto camera to the equivalent focal length of the wide-angle camera, rounded up or down.
  • K2 is the ratio of the field of view of the wide-angle camera to the field of view of the telephoto camera, rounded up or down.
  • in some embodiments, R is K1; in other embodiments, R is K2; in still other embodiments, R is the greater of K1 and K2.
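  • As a non-limiting illustration of the relationship between R, K1 and K2 described above, the maximum grid count could be sketched as follows (the function name and example field-of-view values are assumptions used only for illustration):

```python
# Illustrative sketch only: computing the maximum grid count R from K1 and/or K2.
# Function name and example values are assumptions, not part of this embodiment.
import math

def max_grid_count(f_tele_mm, f_wide_mm, fov_wide_deg, fov_tele_deg):
    k1 = f_tele_mm / f_wide_mm          # K1: ratio of equivalent focal lengths
    k2 = fov_wide_deg / fov_tele_deg    # K2: ratio of fields of view
    # K1/K2 may be rounded up or down; here both are rounded up and the
    # larger of the two is taken as R (one of the options described above).
    return max(math.ceil(k1), math.ceil(k2))

# Example: a 25 mm wide-angle camera and a 125 mm telephoto camera.
R = max_grid_count(f_tele_mm=125, f_wide_mm=25, fov_wide_deg=80, fov_tele_deg=16)
# R == 5, i.e. the guide frame includes at most 5*5 grids
```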
  • the guide frame displayed by the mobile phone on the guide interface includes M (rows)*N (columns) grids.
  • M ≤ R, N ≤ R, and at least one of M and N is greater than 1.
  • This M*N can be called the specification of the guide frame. That is to say, the specification of the guide frame includes the number of grids in the guide frame and the arrangement of the grids.
  • the wide-angle image corresponds to the guide frame with the largest number of grids.
  • the size and field of view of the wide-angle image can be slightly larger or equal to the size and field of view of the guide frame with the largest number of grids
  • the field of view of the telephoto camera corresponds to the corresponding field of view of a single grid.
  • the field of view of the telephoto camera may be slightly larger than or equal to the field of view corresponding to a single grid.
  • the equivalent focal length of the wide-angle camera is 25mm
  • the equivalent focal length of the telephoto camera is 125mm
  • the number of grids in each row of the guide frame is less than or equal to 5, and the number of grids in each column is also less than or equal to 5.
  • the guide frame may include at most 5*5 (ie, 5 rows and 5 columns) grids; the guide frame may also include fewer than 5*5 grids, such as 3*3, 3*4 or 4*5 grids.
  • the field of view of the wide-angle camera corresponds to the 5*5 grids
  • a single grid corresponds to the field of view of the telephoto camera.
  • the ratio between the equivalent focal length of the telephoto camera and the equivalent focal length of the wide-angle camera is less than or equal to a preset value 2, where the preset value 2 may be, for example, 8 or 10.
  • the guide frame includes at most 5*5 grids as an example for description.
  • the guide frame is displayed by default in the middle of the image displayed in the preview interface.
  • the specification of the guide frame is the default specification, or the specification that the phone last adopted in the wide view mode.
  • the position or specification of the guide frame displayed on the preview interface by the mobile phone may be set by the user.
  • the mobile phone determines a matching guide frame according to the area selected by the user.
  • the area corresponding to the guide frame determined by the mobile phone can cover the user's selected area, or cover a proportion of the user's selected area that is greater than or equal to a preset ratio (such as 90%), or be slightly larger than the user's selected area.
  • the mobile phone determines that the target image the user wants to shoot is the size corresponding to the area. Then, the mobile phone determines the position or size of the guide frame according to the area.
  • the guide frame includes 3*3 grids, and as shown in (c) of FIG. 5 , the guide frame 501 is displayed on the preview interface.
  • the mobile phone displays the setting interface after detecting that the user clicks the setting control 601 on the preview interface as shown in (a) of FIG. 6 .
  • the setting interface includes the setting control of the target image guide frame.
  • after the user operates the setting control of the target image guide frame, the largest guide frame is displayed.
  • the largest guide frame includes the largest number of grids (eg, the 5*5 grids above).
  • after the mobile phone detects that the user performs a frame selection on the largest guide frame and clicks the confirm control, the position and range corresponding to the user's frame selection operation are determined as the position and size of the guide frame.
  • the mobile phone displays the determined guide frame 602 including 2*3 grids on the preview interface.
  • the mobile phone displays the largest guide frame including the maximum number of grids on the preview interface, and prompts the user: please set the guide frame of the target image.
  • the user drags on the largest guide frame to select multiple grids, and the mobile phone determines that the position and range corresponding to the user's frame selection operation are the position and size of the guide frame.
  • the mobile phone displays a determined guide frame 701 including 3*3 grids on the preview interface, and the preview interface also includes a wide-angle image.
  • the user can indicate the main body on the preview interface, and the mobile phone determines the position and size of the guide frame according to the main body indicated by the user, so that the guide frame can cover the main body indicated by the user.
  • the mobile phone may prompt the user on the preview interface to select the subject to be photographed.
  • after the mobile phone detects the operation of the user clicking on the building, it determines that the entire connected building is the subject to be photographed, and thus determines the guide frame on the preview interface as shown in (c) in FIG. 7B.
  • the guide frame 703 that can cover the entire connected building includes a total of 3*3 grids.
  • the mobile phone displays multiple specification controls on the preview interface, such as 5*5, 4*5, 4*4, 3*4 or 3*3, and determines the number of grids included in the guide frame based on the specification control selected by the user. For example, after the mobile phone detects the operation of clicking the 3*3 specification control, as shown in (b) of FIG. 8, a guide frame 801 including 3*3 grids is displayed on the preview interface, and the preview interface also includes the wide-angle image.
  • the mobile phone may also operate according to the user's instruction to determine the shooting sequence of the grids in the guide frame.
  • the user can move the telephoto camera according to the photographing sequence, so that the telephoto camera is sequentially matched with the grid specified in the sequence.
  • when the user sets the position or specification of the guide frame through the methods described in the above embodiments (eg, the methods shown in FIGS. 5-8), the user may also set the shooting order of the grids in the guide frame.
  • after determining the specification of the guide frame, the mobile phone can display multiple sequence modes corresponding to the guide frame of that specification, and determine one sequence mode according to the user's selection operation.
  • the size of the guide frame is 3*3, see (a) in FIG. 9A
  • the preview interface includes multiple sequential mode controls, such as controls 901 to 903 .
  • the mobile phone determines that the shooting sequence is to first shoot the images of the middle row of grids from left to right, then shoot the images of the upper row of grids from left to right, and then shoot the images of the lower row of grids from left to right.
  • after the mobile phone detects the user's operation of clicking the sequence mode control 902, it determines that the photographing sequence is to first photograph the images of the grids in the middle row from left to right, then photograph the images of the grids in the upper row from right to left, and then photograph the images of the grids in the lower row from left to right.
  • after the mobile phone detects that the user clicks the sequence mode control 903, it determines that the shooting sequence is to shoot according to a "zigzag" track from top to bottom.
  • the guide frame of each specification corresponds to a shooting order by default, and the mobile phone uses the default shooting order to shoot after determining the specification of the guide frame.
  • the mobile phone can also modify the shooting sequence according to the user's instructions.
  • the mobile phone may prompt the user to shoot according to the shooting sequence.
  • the mobile phone can display shooting sequence prompt information on the preview interface.
  • the mobile phone can display a label and a guide line with an arrow on the guide frame, and the label is used to indicate different rows of grids.
  • the arrows of the guide line indicate the direction to indicate the shooting order of each row of grids.
  • the mobile phone can display a guide line with an arrow on the guide frame to indicate the shooting sequence corresponding to different grids.
  • the shooting sequence may also include other sequential modes, for example, the sequential modes shown in (a)-(d) of the corresponding figure, which are not limited in this embodiment of the present application.
  • the mobile phone may briefly display the guide frame on the preview interface, and then display the guide frame on the shooting interface after the user's photographing operation is subsequently detected.
  • the mobile phone does not display the guide frame on the preview interface, but displays the guide frame on the shooting interface after detecting the user's photographing operation.
  • in the preview state, the mobile phone can also collect telephoto images through the telephoto camera according to the preset acquisition frame rate 2.
  • the telephoto camera is the second camera.
  • the mobile phone may also prompt the user with the real-time shooting range of the telephoto camera through the telephoto frame, so as to present the partial picture captured by the telephoto camera in real time to the user.
  • the telephoto frame is superimposed and displayed on the wide-angle image in the form of a transparent floating frame.
  • the position and size of the telephoto frame on the wide-angle image corresponds to the shooting range of the telephoto camera.
  • the field of view of the wide-angle image in the telephoto frame may be equal to or slightly smaller than the shooting range and field of view of the telephoto camera.
  • the picture range of the wide-angle image in the telephoto frame corresponds to that of the telephoto camera.
  • the frame range of the wide-angle image in the telephoto frame may be equal to or slightly smaller than the frame range of the telephoto image captured by the telephoto camera.
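  • As a rough, non-limiting sketch of how the telephoto frame rectangle could be placed on the wide-angle image, the focal-length ratio can be used to scale the frame; the centred placement below ignores the baseline between the two lenses and calibration data, so these are simplifying assumptions:

```python
# Illustrative sketch only: estimating the telephoto frame rectangle on the
# wide-angle preview from the focal-length ratio. Centred placement and the
# simple scaling are assumptions; a real device would use calibration data.
def telephoto_frame_rect(wide_w, wide_h, f_wide_mm, f_tele_mm):
    scale = f_wide_mm / f_tele_mm              # e.g. 25 / 125 = 0.2
    frame_w, frame_h = int(wide_w * scale), int(wide_h * scale)
    x = (wide_w - frame_w) // 2                # top-left corner, centred
    y = (wide_h - frame_h) // 2
    return x, y, frame_w, frame_h

# Example: a 4000x3000 wide-angle preview with 25 mm / 125 mm cameras
print(telephoto_frame_rect(4000, 3000, 25, 125))   # (1600, 1200, 800, 600)
```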
  • the collection frame rate 2 and the collection frame rate 1 may be the same or different, and are not limited.
  • the preview interface includes a wide-angle image 1001 , a telephoto frame 1002 and a guide frame 1003 .
  • the relative position and size of the telephoto frame on the wide-angle image on the preview interface remain basically unchanged.
  • the object distance and the size of the field of view are related. Exemplarily, see (b) in FIG. 10A , the distance between the lens centers of the wide-angle camera and the telephoto camera is fixed, and when the object distance is constant, the size of the field of view is also fixed.
  • the telephoto camera and the wide-angle camera move at the same time, but the relative relationship between the telephoto camera's field of view and the wide-angle camera's field of view remains unchanged, so the relative position and size of the telephoto frame on the wide-angle image are also basically unchanged.
  • the field angle of the telephoto camera is also quite different from the field angle of the wide-angle camera.
  • the field of view is small, and the size of the telephoto frame is small, making it inconvenient for users to view details within the shooting range of the telephoto camera. Therefore, the mobile phone can enlarge the telephoto frame and the wide-angle image in the telephoto frame and display them on the preview interface, so as to facilitate the user to know the shooting range and details of the telephoto camera.
  • the preview interface includes a wide-angle image 1004 , a zoomed-in telephoto frame 1005 , and a guide frame 1006 .
  • the mobile phone displays the target area image corresponding to the guide frame on the wide-angle image on the preview interface, but does not display the complete wide-angle image.
  • the ratio of the size of the target area image to the size of the guide frame is r, and r ≥ 1.
  • the target area image can be obtained by cropping and zooming in on the full wide-angle image.
  • the field of view of the telephoto camera is also quite different from that of the wide-angle camera.
  • the size of the telephoto frame is small, which is not convenient for users to view the details within the shooting range of the telephoto camera.
  • the mobile phone can enlarge the target area image and the guide frame proportionally and display them on the preview interface, so that the user can easily know the shooting range and details of the telephoto camera.
  • the wide-angle image can be cropped according to the guide frame of the default specification to obtain the target area image, and the target area image and the guide frame are proportionally enlarged and displayed on the preview interface.
  • if the specification of the guide frame corresponds to the area/subject selected by the user, after the mobile phone collects the wide-angle image, the wide-angle image can be cropped according to the guide frame to obtain the target area image, and the target area image and the guide frame are proportionally enlarged and then displayed on the preview interface.
  • the complete wide-angle image on the preview interface shown in (b) of FIG. 8 can be replaced with the target area image 1007 in the wide-angle image on the preview interface shown in FIG. 10B .
  • the target area image and the guide frame are enlarged in equal proportions, and the size of the target area image is the same as the guide frame.
  • the ratio r is greater than 1.
  • After detecting the user's photographing operation, the mobile phone displays a wide-angle image and a guide frame superimposed on the wide-angle image on the photographing interface.
  • in the preview state, a photographing operation can be triggered in multiple ways to make the mobile phone enter the photographing process. For example, after the mobile phone detects the user's operation of clicking the photographing control on the preview interface, it determines that the user's photographing operation is detected, thereby entering the photographing process. For another example, after the mobile phone detects the user's voice instruction to start taking a photo, it determines that the user's photographing operation is detected, thereby entering the photographing process. It can be understood that the manner of triggering the mobile phone to enter the photographing process may also include gestures and other manners, which are not limited in the embodiments of the present application.
  • a wide-angle image is displayed on the photographing interface, and the wide-angle image is acquired by using a wide-angle camera (ie, the first camera).
  • the wide-angle image on the photographing interface is used as a background image to provide the user with a panoramic picture, and the user determines the moving path of the telephoto camera according to the range of the wide-angle image, so that multiple telephoto images are captured and stitched into a stitched image.
  • the wide-angle image is the first frame of image collected by the wide-angle camera after detecting the user's photographing operation.
  • the mobile phone always displays this first frame of image, and does not refresh the display with the images subsequently collected by the wide-angle camera.
  • the mobile phone may collect Q (Q is an integer greater than 1) frames of wide-angle images after detecting the user's photographing operation, so as to fuse the Q frames of wide-angle images into an initial wide-angle image with better quality as the background image.
  • the mobile phone always displays the initial wide-angle image, and does not refresh and display the image captured by the wide-angle camera. That is to say, after the mobile phone detects the user's photographing operation, the wide-angle image displayed on the photographing interface and used as a preview remains unchanged.
  • alternatively, the wide-angle image serving as the background image on the photographing interface changes, and is an image acquired by the mobile phone in real time through the wide-angle camera according to the preset acquisition frame rate 3.
  • the acquisition frame rate 3 and the acquisition frame rate 1 may be the same or different.
  • the mobile phone can also display the above-mentioned guide frame on the photographing interface, and the guide frame is superimposed on the background image in the form of a transparent floating frame.
  • the mobile phone can continuously display the entire guide frame on the shooting interface; in other embodiments, the mobile phone can only display the unmatched grids in the guide frame on the shooting interface, instead of continuously displaying the entire guide frame .
  • the mobile phone can collect telephoto images through the telephoto camera according to the preset acquisition frame rate 4 .
  • the acquisition frame rate 4 and the acquisition frame rate 2 may be the same or different.
  • the user can move the telephoto camera by moving the mobile phone, or the user can directly move the telephoto camera, or the mobile phone can automatically control the telephoto camera to move at preset angular intervals.
  • the mobile phone can also display a telephoto frame on the shooting interface. As the telephoto camera moves, the shooting range of the telephoto camera changes, the content of the telephoto image collected by the telephoto camera also changes accordingly, and the position of the telephoto frame also changes accordingly.
  • the telephoto frame can prompt the user in real time the dynamic change process of the shooting range of the telephoto camera during the movement of the telephoto camera.
  • the wide-angle image on the shooting interface can be used as a background image to provide the user with a panoramic image, so as to guide the user to move the telephoto camera to match the telephoto frame with the grids in the guide frame one by one.
  • the mobile phone can control the telephoto camera to automatically move in grid order according to the grid arrangement in the guide frame, so that the telephoto camera is matched with the grids in the guide frame one by one, without the need to display a telephoto frame on the shooting interface.
  • even if the wide-angle image used as the background image on the shooting interface changes, because the user does not deliberately move the mobile phone, the wide-angle image captured in real time is basically unchanged or changes little, and the positions of the guide frame and the grids on the screen are basically unchanged; therefore, the mobile phone controls the telephoto camera to automatically move in grid order, and the telephoto camera can still be matched with the grids in the guide frame one by one without displaying the telephoto frame.
  • the mobile phone generates a stitched image according to the acquired target telephoto image, and displays a thumbnail of the stitched image on the shooting interface.
  • the mobile phone can collect multiple frames of target telephoto images that match the grid in the guide frame through the telephoto camera.
  • the mobile phone can stitch the target telephoto image to generate the target image.
  • when the telephoto image matches the grid, the telephoto image may be called the target telephoto image.
  • the content of the telephoto image matches the content of the wide-angle image in the grid means that the content of the telephoto image is exactly or substantially the same as the content of the wide-angle image in the grid.
  • the overlap ratio between the telephoto image and the wide-angle image content in the grid is greater than or equal to a preset value 3 (eg, 80% or 90%, etc.).
  • the similarity between the histograms of the telephoto image and the wide-angle image in the grid is greater than or equal to a preset value 4.
  • the histogram here may be a histogram of characteristic parameter values such as brightness.
  • the similarity between the telephoto image and the wide-angle image in the grid in the same transform domain (such as fast Fourier transform (FFT), wavelet transform (WT) or discrete cosine transform (DCT)) is greater than or equal to a preset value 5.
  • the measure of similarity can be the sum of the differences of the scale coefficients corresponding to different values.
  • the feature matching degree between the telephoto image and the wide-angle image in the grid is greater than or equal to a preset value 6; for example, the features include corner points, convolutional neural network features, or SIFT features.
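  • The matching checks listed above could be sketched, for example, with a brightness-histogram comparison plus a feature-match count; the thresholds standing in for the preset values 4 and 6, the use of ORB features and the OpenCV calls are illustrative assumptions:

```python
# Illustrative sketch only: deciding whether a telephoto frame matches a grid.
# Thresholds, ORB features and OpenCV usage are assumptions for illustration.
import cv2

def histogram_similarity(tele_bgr, grid_bgr):
    # Correlation of brightness histograms: 1.0 means identical distributions.
    h1 = cv2.calcHist([cv2.cvtColor(tele_bgr, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([cv2.cvtColor(grid_bgr, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

def feature_match_count(tele_bgr, grid_bgr):
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(tele_bgr, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(grid_bgr, cv2.COLOR_BGR2GRAY), None)
    if d1 is None or d2 is None:
        return 0
    return len(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2))

def telephoto_matches_grid(tele_bgr, grid_bgr, hist_thresh=0.9, match_thresh=60):
    # hist_thresh / match_thresh play the role of "preset value 4" / "preset value 6".
    return (histogram_similarity(tele_bgr, grid_bgr) >= hist_thresh and
            feature_match_count(tele_bgr, grid_bgr) >= match_thresh)
```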
  • the ways for the mobile phone to obtain the target telephoto image may include: mode 1, after the mobile phone determines that a certain frame of telephoto image matches the grid, the mobile phone captures and obtains the target telephoto image that matches the grid; mode 2, after the mobile phone determines that a certain frame of telephoto image matches the grid, it determines that this frame of telephoto image is the target telephoto image.
  • the mobile phone can prompt the user of the photographing sequence of the grid in the guide frame, so that the user can move the telephoto camera according to the photographing sequence during the photographing process.
  • the mobile phone can prompt the user for the shooting order during the shooting process, so that the user can move the mobile phone or directly move the telephoto camera according to the shooting order, so that the The telephoto image captured by the telephoto camera is matched to the grid in the guide frame in the order in which it was taken.
  • the mobile phone may prompt the user for the complete shooting sequence.
  • the mobile phone may only prompt a part of the currently required shooting sequence according to the shooting situation, instead of prompting the user for the complete shooting sequence.
  • the user can move the mobile phone or directly move the telephoto camera according to their needs, habits or wishes, so that the telephoto image collected by the telephoto camera and the guide frame grid to match.
  • the telephoto image should be matched with adjacent grids in the same row/column as far as possible, and a grid that has already been matched will not be matched again.
  • the wide-angle image on the shooting interface can provide the user with a panoramic image within a large field of view, so the wide-angle image can provide a reference for the user to move the mobile phone or telephoto camera.
  • the user can accurately control the movement path (such as the moving direction and amplitude) of the mobile phone or the telephoto camera according to the corresponding position of the content of the telephoto image on the wide-angle image, so that the telephoto image can be quickly and accurately matched with the next grid.
  • the telephoto frame can be displayed in real time on the shooting interface, so as to guide the user to move the shooting range of the telephoto camera and the telephoto frame to the position where the first grid to be matched is located.
  • the first grid to be matched is the first grid on the left side of the middle row as shown in (a) of FIG. 11
  • the preview interface includes a wide-angle image and a guide frame.
  • the shooting interface includes a wide-angle image, a guide frame and a telephoto frame 1101 .
  • the photographing interface shown in (b) of FIG. 11 does not display the prompt information of the photographing sequence.
  • the photographing interface shown in (c) of FIG. 11 displays prompt information of the complete photographing sequence, and the photographing interface shown in (d) of FIG. 11 displays prompt information of part of the photographing sequence.
  • the mobile phone may also prompt the user on the shooting interface: please move the mobile phone in the direction of the arrow and match the grid.
  • the photographing process will be described below by taking the photographing sequence shown in (c) of FIG. 11 and the prompt information of a part of the photographing sequence displayed on the photographing interface as an example.
  • the mobile phone may determine, according to the reference wide-angle image, configuration parameters when the telephoto camera collects the target telephoto images corresponding to different grids.
  • the mobile phone performs automatic exposure (AE) configuration, automatic white balance (AWB) adjustment and dynamic range correction (DRC) according to the configuration parameters, so as to obtain the target telephoto image.
  • the mobile phone determines, according to the reference wide-angle image corresponding to the global scope, the configuration parameters used by the telephoto camera when collecting the target telephoto images corresponding to different grids, which can make the overall exposure effect, AWB effect and dynamic range of the stitched image obtained from the target telephoto images as consistent as possible with the reference wide-angle image, so that the overall change of the stitched image is smooth and the transition is natural.
  • the reference wide-angle image is used to determine the configuration parameters for acquiring the target telephoto images during the photographing process, and the reference wide-angle image is the same frame of image throughout the photographing process.
  • the reference wide-angle image may be the first frame of image collected by the wide-angle camera after the user's photographing operation is detected, or it may be the above-mentioned initial wide-angle image obtained by fusing multiple frames of images.
  • the reference wide-angle image is correspondingly divided into a plurality of image patches (patches) according to the grid of the guide frame, and each image patch corresponds to a grid of the guide frame.
  • grid 1 corresponds to image block 1
  • grid 2 corresponds to image block 2
  • grid 3 corresponds to image block 3.
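  • A minimal sketch of dividing the guide-frame region of the reference wide-angle image into image blocks, one per grid, could look as follows (the guide-frame rectangle coordinates are assumed to be known in wide-angle image coordinates):

```python
# Illustrative sketch only: splitting the guide-frame region of the reference
# wide-angle image into M*N image blocks, one block per grid.
def split_into_blocks(ref_wide, gx, gy, gw, gh, rows, cols):
    """(gx, gy, gw, gh) is the guide-frame rectangle on the reference image."""
    blocks = {}
    bh, bw = gh // rows, gw // cols
    for r in range(rows):
        for c in range(cols):
            y0, x0 = gy + r * bh, gx + c * bw
            blocks[(r, c)] = ref_wide[y0:y0 + bh, x0:x0 + bw]
    return blocks   # blocks[(r, c)] corresponds to the grid in row r, column c
```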
  • the mobile phone sets configuration parameters such as AE, AWB or DRC of the target telephoto image of the grid corresponding to the image block according to the parameters such as brightness, color and dynamic range of each image block of the reference wide-angle image.
  • the mobile phone collects the target telephoto image according to the configuration parameters, thereby using the wide-angle image to guide the AE, AWB and DRC configuration of the target telephoto image in the corresponding grid.
  • the first grid to be matched is taken as grid 1, and grid 1 corresponds to target telephoto image 1 as an example for description.
  • the mobile phone can first collect the telephoto image, and after determining that the telephoto image matches grid 1, set the configuration parameters of the telephoto camera according to the image block 1 corresponding to grid 1 on the reference wide-angle image, so that the telephoto camera collects images according to the configuration parameters to obtain the target telephoto image 1 corresponding to grid 1.
  • alternatively, the mobile phone can set the configuration parameters of the telephoto camera according to the image block 1 corresponding to grid 1 on the reference wide-angle image, so that the telephoto camera collects telephoto images according to the configuration parameters, until a certain frame of telephoto image matches grid 1, after which the mobile phone determines that this frame of telephoto image is the target telephoto image 1.
  • the mobile phone can perform light metering on the image block 1 on the reference wide-angle image, so as to obtain brightness parameters such as the brightness value, brightness average value, and maximum value of the image block 1, which can reflect the ambient brightness.
  • the exposure meter in the mobile phone is preset with a corresponding relationship between ambient brightness and exposure parameters.
  • the mobile phone can determine the AE configuration parameters of the telephoto image to be collected corresponding to grid 1 according to the ambient brightness reflected by the brightness parameters of image block 1 on the wide-angle image, combined with the exposure table of the telephoto image, for example, determine exposure parameters such as the exposure time and ISO of the telephoto image.
  • the mobile phone uses the exposure parameter to automatically expose the target telephoto image 1 corresponding to the grid 1, so as to improve the exposure effect of the target telephoto image 1.
  • in this way, the exposure parameters of the target telephoto image can be determined according to the exposure information of the image blocks in the reference wide-angle image, and the exposure effect of the stitched image obtained from different target telephoto images can be made as consistent as possible with the reference wide-angle image from a global perspective, so that the overall exposure of the stitched image is also better.
  • the mobile phone can also adjust the exposure parameters of the target telephoto image 1 corresponding to image block 1 according to the brightness of the image blocks adjacent to image block 1, so that the brightness transition between the target telephoto image 1 and the adjacent target telephoto images is more natural. This avoids the situation where the mobile phone independently auto-exposes the target telephoto image corresponding to each grid, causing the exposure effects of different target telephoto images to differ greatly, the brightness transition of the spliced image to be not smooth and natural, and the splicing traces to be obvious, and can thus improve the overall quality of the spliced image.
  • for example, the mobile phone can increase the exposure parameter of the target telephoto image 1 determined according to image block 1 by one brightness level, so that the brightness transition between the target telephoto image 1 and the target telephoto image 2 corresponding to image block 2 is relatively natural.
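  • A minimal sketch of the AE step described above, assuming a hypothetical exposure table that maps metered block brightness to exposure parameters, could be:

```python
# Illustrative sketch only: deriving AE parameters for the target telephoto
# image of a grid from the brightness of its image block. The exposure table
# and the (exposure_time, ISO) pairs are hypothetical placeholder values.
import numpy as np

# Hypothetical exposure table: brightness upper bound -> (exposure time in s, ISO)
EXPOSURE_TABLE = [(50, (1 / 30, 800)), (120, (1 / 60, 400)),
                  (200, (1 / 125, 200)), (256, (1 / 250, 100))]

def ae_params_for_block(block_gray):
    mean_brightness = float(np.mean(block_gray))   # metering on the image block
    for upper_bound, params in EXPOSURE_TABLE:
        if mean_brightness <= upper_bound:
            return params
    return EXPOSURE_TABLE[-1][1]
```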
  • the mobile phone can determine the AWB configuration parameters of the target telephoto image 1 corresponding to grid 1 according to the color distribution of image block 1 on the reference wide-angle image, for example, determine the WB value of the target telephoto image 1 corresponding to grid 1, that is, the ratio of the three primary colors RGB.
  • the mobile phone correspondingly adjusts the RGB ratio of the target telephoto image 1 corresponding to the grid 1 according to the ratio, so as to improve the color configuration effect of the target telephoto image.
  • in this way, the white balance information of the image blocks of the reference wide-angle image is taken from a global perspective, so that the white balance effect of the target telephoto image is as consistent as possible with the reference wide-angle image, and the color transition between the different target telephoto images included in the spliced image is also more natural. This avoids the situation where the mobile phone performs automatic white balance adjustment independently for the target telephoto image corresponding to each grid, causing the white balance effects of different target telephoto images to differ greatly, the color transition of the stitched image to be not smooth and natural, and the stitching traces to be obvious, and can thus improve the overall quality of the stitched image.
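  • A minimal sketch of taking a white-balance ratio from image block 1 and applying it to the target telephoto image could be as follows (the gray-world heuristic used to derive the per-channel gains is an assumption; the embodiment only states that the RGB ratio is taken from the image block):

```python
# Illustrative sketch only: per-channel AWB gains taken from the image block
# (gray-world assumption) and applied to the target telephoto image.
import numpy as np

def awb_gains_from_block(block_bgr):
    means = block_bgr.reshape(-1, 3).mean(axis=0)      # mean of B, G, R channels
    return means.mean() / np.maximum(means, 1e-6)      # gain per channel

def apply_awb(tele_bgr, gains):
    balanced = tele_bgr.astype(np.float32) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)
```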
  • the mobile phone can determine the DRC configuration parameters of the target telephoto image 1 corresponding to grid 1 according to the dynamic range of image block 1 on the reference wide-angle image, so as to adjust the dynamic range of the target telephoto image 1 accordingly and make the brightness distribution of the target telephoto image 1 relatively consistent with the brightness distribution of the corresponding image block.
  • the dynamic range may include the brightness distribution of different pixels on the image, the brightness difference between different pixels, and the like.
  • the mobile phone can count the dynamic range of image block 1 through the luminance histogram.
  • the image block 1 includes a plurality of pixels whose brightness is lower than 100 and pixels whose brightness is higher than 200.
  • the mobile phone can control the brightness of the pixels of the target telephoto image, so that the target telephoto image includes multiple pixels with a brightness lower than 100 and pixels with a brightness higher than 200, thereby increasing the dynamic range of the target telephoto image.
  • since the dynamic range of the reference wide-angle image is relatively large, adjusting the dynamic range of the target telephoto image 1 corresponding to grid 1 according to the dynamic range of image block 1 on the reference wide-angle image can increase the dynamic range of the target telephoto image 1, so that the target telephoto image 1 has a larger brightness range, richer levels of light and dark, and more image details in bright and dark parts.
  • the mobile phone configures the DRC parameters of the target telephoto images according to the dynamic range information of the image blocks of the reference wide-angle image, which can make the brightness distribution transition between different target telephoto images on the spliced image more natural from a global perspective. This avoids the situation where the mobile phone performs DRC independently for the target telephoto image corresponding to each grid, causing the dynamic ranges of different target telephoto images to differ greatly, the dynamic range of the stitched image to be poor, and the stitching traces to be obvious, and can thus improve the overall quality of the stitched image.
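  • As a minimal stand-in for the DRC step described above, the brightness range of the target telephoto image could be stretched toward that of the corresponding image block, for example with a percentile-based stretch (the percentile choice is an assumption):

```python
# Illustrative sketch only: stretching the brightness range of the target
# telephoto image toward the brightness distribution of the image block.
import numpy as np

def match_dynamic_range(tele_gray, block_gray, low_pct=1, high_pct=99):
    t_lo, t_hi = np.percentile(tele_gray, [low_pct, high_pct])
    b_lo, b_hi = np.percentile(block_gray, [low_pct, high_pct])
    scale = (b_hi - b_lo) / max(t_hi - t_lo, 1e-6)
    stretched = (tele_gray.astype(np.float32) - t_lo) * scale + b_lo
    return np.clip(stretched, 0, 255).astype(np.uint8)
```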
  • the telephoto camera can move and continue to collect the telephoto image to acquire the target telephoto image 2 that matches the next grid 2 to be matched.
  • the next grid 2 to be matched is adjacent to at least one grid (eg grid 1 ) that has already been matched.
  • similarly, the mobile phone can also configure, according to the image blocks of the reference wide-angle image, the parameters used by the telephoto camera to collect the target telephoto images corresponding to other grids (for example, the target telephoto image 2 corresponding to grid 2, the target telephoto image 3 corresponding to grid 3, etc.), which is not repeated here.
  • the mobile phone can also prompt the grid to the user, so that the user can know the location of the currently matched grid and the current shooting progress.
  • the mobile phone can display the matched grids and the other grids in a differentiated manner to prompt the user where the currently matched grid is located, so that the user can know the current shooting progress, the subsequent shooting direction and the movement direction of the telephoto camera.
  • the currently matched grid can be highlighted, bolded, displayed in a color different from other grids, displayed in a specific color, or transformed into a line type different from other grids for display, etc.
  • the user moves the mobile phone or the telephoto camera so that the target telephoto image 1 matches the content of the wide-angle image in the leftmost grid 1 in the middle row of the guide frame.
  • the boundary of grid 1 is changed from a dotted line to a thick solid line, so that it is displayed differently from the other grids, and the user can know that grid 1 is currently matched and that the current shooting progress corresponds to grid 1.
  • the mobile phone can also display grid 2 differently from other grids, so that the user can know that grid 2 is currently matched.
  • the current shooting progress corresponds to grid 2.
  • every time a new target telephoto image is obtained, the mobile phone may splice it with the previously obtained spliced image, thereby generating a new spliced image. That is, even when not all grids in the guide frame have been matched, a stitched image is generated according to the multi-frame target telephoto images corresponding to the matched grids.
  • the mobile phone generates a stitched image according to the target telephoto image after obtaining the target telephoto image that matches all the grids in the guide frame, or after shooting ends. This embodiment of the present application does not limit the stitching timing of the stitched images.
  • the process by which the mobile phone splices different target telephoto images to generate a spliced image may include image registration, image homogenization (uniform light and color), and image fusion.
  • image registration refers to the process of matching and stacking different images.
  • the mobile phone can perform feature extraction on the two frames of images to be registered to obtain feature points, find matching feature point pairs by performing similarity measurement, obtain the image space coordinate transformation parameters through the matching feature point pairs, and finally perform image registration using the coordinate transformation parameters.
  • for example, the mobile phone can calculate the homography matrix of the target telephoto image 2 relative to the target telephoto image 1, so as to register the target telephoto image 2 with the target telephoto image 1 according to the homography matrix.
  • the image registration algorithm may include the SURF feature matching algorithm, the SKB feature matching algorithm, the ORB feature matching algorithm, a grid registration algorithm, an optical flow registration algorithm or a convolutional neural network (artificial intelligence (AI) network) registration algorithm, etc.; the embodiment of the present application does not limit the specific type of the image registration algorithm.
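  • As one concrete, non-limiting example of the registration step, ORB features and a RANSAC-estimated homography (one of the algorithm families listed above) could be used as sketched below; the fixed canvas size and parameter values are simplifying assumptions:

```python
# Illustrative sketch only: registering target telephoto image 2 to image 1
# with ORB features and a RANSAC homography (OpenCV).
import cv2
import numpy as np

def register(tele2_gray, tele1_gray):
    orb = cv2.ORB_create(1000)
    k2, d2 = orb.detectAndCompute(tele2_gray, None)
    k1, d1 = orb.detectAndCompute(tele1_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = tele1_gray.shape
    # Simplification: warp onto a canvas the size of image 1; a real stitcher
    # would expand the canvas to keep the non-overlapping content.
    return cv2.warpPerspective(tele2_gray, H, (w, h)), H
```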
  • the image homogenization refers to the brightness equalization of the registered images after image registration, so that the brightness transition between adjacent target telephoto images is natural.
  • Brightness equalization of image stitching is a mature technology, which is not limited here.
  • image homogenization (uniform light and color) means that, after image registration, the brightness histogram and color histogram of the overlapping part of the registered images are counted, and the cumulative distribution functions of image brightness and image color are obtained by curve fitting (such as a spline curve).
  • the brightness and color distribution of one image can then be used as the criterion, and the brightness and color of the other images can be corrected according to its cumulative distribution functions of brightness and color to achieve uniform light and color; alternatively, the brightness and color parameters of multiple images can be used as a common optimization goal and iteratively optimized, to achieve uniform light and color across all images.
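  • A minimal sketch of this idea, using a plain histogram-matching lookup table over the overlap area instead of a fitted spline curve (that substitution is an assumption), could be:

```python
# Illustrative sketch only: match the brightness of one registered image to a
# reference image over their overlap area via cumulative distributions.
import numpy as np

def match_brightness(src_gray, ref_gray, overlap_mask):
    src_vals = src_gray[overlap_mask]
    ref_vals = ref_gray[overlap_mask]
    src_cdf = np.cumsum(np.bincount(src_vals, minlength=256)) / src_vals.size
    ref_cdf = np.cumsum(np.bincount(ref_vals, minlength=256)) / ref_vals.size
    # Map each source brightness level to the reference level with the same CDF value.
    lut = np.clip(np.searchsorted(ref_cdf, src_cdf), 0, 255).astype(np.uint8)
    return lut[src_gray]
```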
  • Image fusion refers to the process of extracting the relevant information from each image to the maximum extent and synthesizing it into a high-quality image through image processing and computer technology.
  • the image fusion algorithm may include an alpha fusion algorithm, a Poisson fusion algorithm, or a convolutional neural network fusion (AI fusion) algorithm, and the like.
  • the embodiment of the present application does not limit the specific type of the image fusion algorithm.
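  • As one concrete, non-limiting example of the fusion options listed above, a simple alpha (feather) blend across a known horizontal overlap could be sketched as follows (equal image heights and a known overlap width are assumptions):

```python
# Illustrative sketch only: alpha (feather) blending of two registered images
# of equal height that overlap by overlap_w columns.
import numpy as np

def alpha_blend(img_left, img_right, overlap_w):
    alpha = np.linspace(1.0, 0.0, overlap_w)[None, :, None]   # 1 -> 0 across the seam
    seam = (img_left[:, -overlap_w:] * alpha +
            img_right[:, :overlap_w] * (1.0 - alpha)).astype(img_left.dtype)
    return np.concatenate([img_left[:, :-overlap_w], seam, img_right[:, overlap_w:]], axis=1)
```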
  • for example, the target telephoto image 2 and the target telephoto image 1 can be registered, evened in light and color, and fused to form a stitched image with a larger field of view.
  • the target telephoto image 2 and the target telephoto image 1 can be directly registered and fused according to an image registration algorithm, thereby generating a stitched image.
  • for example, if there is a certain overlap ratio (for example, 20%) between the target telephoto image 2 and the target telephoto image 1, the mobile phone can extract features from the target telephoto image 1 and the target telephoto image 2 respectively according to a preset image registration algorithm, and perform image registration according to the feature matching pairs between the two.
  • the target telephoto image 2 and the target telephoto image 1 are directly registered and fused to generate a stitched image.
  • the mobile phone can also correct the registered image or the stitched image according to the reference wide-angle image, so as to avoid registration errors caused by too few feature point pairs between the target telephoto images to be registered. In addition, if some image content is distorted, the mobile phone can correct the coordinates of the distorted image content according to the reference wide-angle image, so as to correct the distortion and improve the quality of the stitched image.
  • the target telephoto image 2 and the target telephoto image 1 are registered and fused according to the reference wide-angle image.
  • the overlap ratio between the target telephoto image 2 to be registered and the target telephoto image 1 may be small (for example, it may be 10% or 5%, etc.);
  • the mobile phone can also accurately register and fuse the target telephoto image 2 and the target telephoto image 1 according to the reference wide-angle image. In this way, the requirements for the target telephoto image to be matched are relatively loose.
  • the mobile phone can quickly and easily obtain the target telephoto images, reducing the user's shooting time, and avoiding registration errors caused by fewer feature point pairs between the target telephoto images to be registered. Moreover, if the overlap ratio between the target telephoto images is small or they do not overlap, the user can complete the photographing process by taking only a few target telephoto images, thereby reducing the number of shooting frames and the shooting time, and improving the shooting efficiency and the user's shooting experience.
  • the target telephoto image 2, the target telephoto image 1 and the reference wide-angle image can be registered and fused together.
  • the target telephoto image 2, the target telephoto image 1 and the reference wide-angle image can extract features respectively and match them in pairs, and more feature matching pairs can be obtained in the overlapping area of the three.
  • in this way, the mobile phone can obtain more and more accurate feature matching pairs, thereby calculating a more accurate homography matrix, and deform the target telephoto image 2 according to the homography matrix, so as to achieve better registration, stitching and fusion effects.
  • the target telephoto image 2 and the target telephoto image 1 can be registered and fused with the reference wide-angle image respectively, and no registration is required between the target telephoto image 2 and the target telephoto image 1 and fusion.
  • for example, features can be extracted from the target telephoto image 2 and the reference wide-angle image respectively and matched, so as to calculate the homography matrix, and the target telephoto image 2 can be deformed according to the homography matrix, so that the target telephoto image 2 blends better with the target telephoto image 1.
  • the overlap ratio between the target telephoto image 2 to be registered and the target telephoto image 1 may be small (for example, it may be 10% or 5%, etc.).
  • the mobile phone can quickly and easily obtain the target telephoto image, reducing the user's shooting time.
  • if the overlap ratio between the target telephoto images is small or they do not overlap, the user can complete the photographing process by taking only a few target telephoto images, thereby reducing the number of shooting frames and the shooting time, and improving the shooting efficiency and the user's shooting experience.
  • the target telephoto image 1 may be registered according to the coordinate system of the reference wide-angle image, that is, the target telephoto image 1 may be fitted to the coordinate position of the same content on the reference wide-angle image.
  • the target telephoto image 2 can also be registered according to the coordinate system of the reference wide-angle image, that is, the target telephoto image 2 can be fitted to the coordinate position of the same content on the reference wide-angle image. If there is no void between the target telephoto image 1 and the target telephoto image 2 that are attached to the reference wide-angle image, the whole formed by the attachment of the target telephoto image 1 and the target telephoto image 2 is the stitched image.
  • the acquisition angle of the telephoto camera may be deflected due to the user's hand shaking or the rotation of the mobile phone, so that the image shift between the target telephoto image 2 and the target telephoto image 1 is large, and a hole easily appears between the target telephoto images.
  • the mobile phone can detect the hole through various methods.
  • the target telephoto image 1 and the target telephoto image 2 can both be attached to the position of the same content on the reference wide-angle image to determine whether there is a gap between the attached target telephoto image 1 and the target telephoto image 2 , thereby detecting the presence of voids.
  • if there is a gap between the attached target telephoto image 1 and target telephoto image 2, the phone can determine that a hole has been detected.
  • the mobile phone can also measure and calculate, through an inertial measurement unit (IMU) (such as a gyroscope) of the mobile phone, the spatial positional relationship between the target telephoto image 1 and the target telephoto image 2 at the time of shooting, and thereby detect the presence of holes.
  • if the mobile phone determines that there is a hole between the target telephoto image 2 and the target telephoto image 1, the mobile phone can fill the content of the same position on the reference wide-angle image into the hole between the target telephoto image 1 and the target telephoto image 2. In this way, the whole formed by the target telephoto image 1, the target telephoto image 2 and the filled portion is the stitched image.
  • in this way, the mobile phone can fill in the holes between the target telephoto images to be spliced according to the reference wide-angle image, which avoids the situation where the target telephoto images cannot be registered and spliced due to the holes, and avoids cropping the target telephoto images according to the holes, which would lose image field angle and image resolution during stitching; the stitched image can thus have a larger field angle and higher image resolution. Otherwise, if cropping is performed at the minimum image height, image resolution and image field angle are lost.
  • the reference wide-angle image can refer to the image 1210 shown in FIG. 12B (a)
  • the target telephoto image 1 can refer to the image 1211 shown in FIG. 12B (a)
  • the target telephoto image 2 can refer to FIG. Image 1212 shown in (b) in 12B.
  • the mobile phone can fill the hole 1213 according to the content of the corresponding position of the reference wide-angle image.
  • the overall image 1214 formed by the target telephoto image 1 , the target telephoto image 2 and the filling portion is a stitched image.
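  • A minimal sketch of the hole detection and filling described above, assuming the target telephoto images have already been warped to the coordinate system of the reference wide-angle image (canvas handling is simplified), could be:

```python
# Illustrative sketch only: detect uncovered pixels between the attached target
# telephoto images and fill them from the reference wide-angle image.
import numpy as np

def fill_holes(canvas_h, canvas_w, placed_images, ref_wide):
    """placed_images: list of (image, mask) pairs already warped to canvas size;
    ref_wide: reference wide-angle image resampled to the same canvas."""
    stitched = np.zeros((canvas_h, canvas_w, 3), dtype=np.uint8)
    covered = np.zeros((canvas_h, canvas_w), dtype=bool)
    for img, mask in placed_images:
        stitched[mask] = img[mask]
        covered |= mask
    hole = ~covered                      # a real implementation would restrict this
    stitched[hole] = ref_wide[hole]      # to the guide-frame region
    return stitched, hole
```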
  • in other embodiments, when a hole exists, the mobile phone may guide the user to take a new frame of the target telephoto image 2 to fill the hole. For example, in a possible implementation manner, when a hole exists, the grid 2 corresponding to the target telephoto image 2 is not displayed differently from the other grids, so as to guide the user to shoot again a frame of target telephoto image 2 that matches grid 2. In another possible implementation manner, the mobile phone prompts the user by displaying information or by voice playback, etc., to collect again a frame of the target telephoto image corresponding to the current grid.
  • the mobile phone can perform image super-resolution processing on the cavity area to reduce the resolution difference between the cavity filled area and the surrounding area, so as to achieve a better user experience.
  • the mobile phone can also use the AI local search method or the AI image restoration method to predict and fill in the hollow area.
  • the embodiments of the present application do not limit the specific methods for filling and image super-resolution.
  • the mobile phone may also perform ghost removal processing on the photographed moving objects during the splicing process.
  • the image of the moving object may appear on N (N is an integer greater than 2) adjacent frames of target telephoto images, resulting in ghosts (also called ghost images, double images, etc.) of the moving object appearing on the stitched image.
  • the mobile phone can also perform ghost removal processing during the photographing process.
  • the following takes the splicing scene of the target telephoto image 2 and the target telephoto image 1 as an example, and describes the method for removing the ghost image of the moving object through two cases.
  • when the moving object moves at a slower speed, the moving object does not appear ghosted on the wide-angle image.
  • Moving objects appear on both the adjacent target telephoto image 2 and target telephoto image 1.
  • the images of the moving object on different target telephoto images are separate from each other and not connected, which easily leads to multiple images of the moving object, that is, ghosts, appearing on the spliced image.
  • the detection of ghost image area is a relatively mature technology at present. For example, it can be estimated and detected by optical flow method or combined with gyroscope data of mobile phone, which will not be described in detail here.
  • the mobile phone can first detect the position of the moving object (using the optical flow method, an object detection and tracking method, a semantic segmentation method, or a method of brightness difference with dilation and erosion). Then, on the spliced image of the target telephoto image 2 and the target telephoto image 1, the mobile phone can retain the complete image of the moving object on the target telephoto image 1, the target telephoto image 2 or the wide-angle image (also called the ghost-free image), delete the images of the moving object in other areas, and fill the deleted areas with the content corresponding to the deleted areas on the reference wide-angle image, so as to avoid ghosting of the moving object on the spliced image.
  • the wide-angle image used for hole filling may be a recently collected frame of wide-angle image, a frame of wide-angle image previously collected by the mobile phone during the photographing process, or a reference wide-angle image, etc., which are not limited in the embodiments of the present application.
  • the mobile phone can retain the complete image of the moving object on the wide-angle image.
  • in order to make the position of the moving object on the final target image closer to what the user last sees, the mobile phone can retain the complete image of the moving object on the last frame of target telephoto image.
  • the positions of the moving objects mentioned here may be different on the three images of the target telephoto image 2, the target telephoto image 1 and the wide-angle image.
  • the mobile phone can determine to retain the complete image of the moving object on a certain image according to the corresponding measurement.
  • when the mobile phone retains the complete image of the moving object on the target telephoto image 1 on the spliced image of the target telephoto image 2 and the target telephoto image 1, the mobile phone can delete the image of the moving object on the target telephoto image 2, and fill the deleted area with the content of the corresponding position on the wide-angle image.
  • for example, the mobile phone can retain the complete image of the moving object on the target telephoto image 1 and delete the image of the moving object on the target telephoto image 2 before the target telephoto image 1 and the target telephoto image 2 are registered. Then, the mobile phone can register and fuse the target telephoto image 1 and the processed target telephoto image 2, and use the content of the corresponding position on the reference wide-angle image to fill the deleted blank area to generate a stitched image.
  • the mobile phone can first register and fuse the target telephoto image 1 and the target telephoto image 2 to generate a stitched image. Then, the mobile phone can delete the images of the moving objects on the stitched image (including the images of the moving objects on the target telephoto image 1 and the target telephoto image 2). The mobile phone fits the complete image of the moving object on the target telephoto image 1 to the corresponding position on the stitched image. The mobile phone uses the content of the corresponding position on the reference wide-angle image to fill the hole in the stitched image, and the hole position is the area where the image of the deleted moving object on the target telephoto image 2 is located.
  • when the mobile phone retains the complete image of the moving object on the target telephoto image 2 on the spliced image of the target telephoto image 2 and the target telephoto image 1, the mobile phone can delete the image of the moving object on the target telephoto image 1, and fill the deleted area with the content of the corresponding position on the wide-angle image.
  • when the mobile phone retains the complete image of the moving object on the wide-angle image on the spliced image of the target telephoto image 2 and the target telephoto image 1, the mobile phone can delete the images of the moving object on the target telephoto image 1 and the target telephoto image 2, and fill the deleted areas with the content of the corresponding positions on the wide-angle image.
  • for example, the reference wide-angle image may be as shown in (a) in FIG. 12C.
  • the target telephoto image 1 can be seen in (b) of FIG. 12C , including an image 1221 of a moving object.
  • the target telephoto image 2 can be seen in (c) of FIG. 12C, including an image 1222 of a moving object.
  • the mobile phone deletes the images of the moving object on the target telephoto image 1 and the target telephoto image 2, and fills the deleted areas with the content of the corresponding positions on the reference wide-angle image, so as to obtain the stitched image shown in (d) in FIG. 12C.
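  • A minimal sketch of this ghost-removal idea, using a simple frame difference as a stand-in for the detection methods listed above (the threshold and morphology parameters are assumptions), could be:

```python
# Illustrative sketch only: delete the moving object from target telephoto
# image 2 and fill the deleted pixels from the reference wide-angle image
# (all inputs assumed registered to the same coordinate system).
import cv2
import numpy as np

def remove_ghost(tele2_bgr, tele1_aligned_bgr, ref_wide_aligned_bgr, diff_thresh=30):
    diff = cv2.absdiff(cv2.cvtColor(tele2_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(tele1_aligned_bgr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)   # grow around the object
    cleaned = tele2_bgr.copy()
    cleaned[mask > 0] = ref_wide_aligned_bgr[mask > 0]   # fill from the reference image
    return cleaned
```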
  • Case a: no ghosts of moving objects appear on the reference wide-angle image.
  • the mobile phone can use the same method as the above-mentioned slow-moving object to perform ghost removal processing, and use the content of the corresponding position on the reference wide-angle image to fill the ghost area deleted on the stitched image.
  • Case b: ghosts of the moving object appear on the reference wide-angle image and on the multi-frame wide-angle images collected in real time by the mobile phone.
  • the mobile phone can perform motion vector estimation or optical flow estimation in combination with the multi-frame wide-angle images collected during the photographing process, so as to determine the motion trajectory of the moving object, and remove the connected area of the moving object on the wide-angle image to obtain a complete Images of moving objects without ghosting. That is to say, the mobile phone can obtain a wide-angle image without ghost images of moving objects directly or after processing.
  • when the moving speed of the moving object is fast, the moving object appears on both the adjacent target telephoto image 2 and target telephoto image 1, and the images of the moving object on different target telephoto images are connected with each other, which easily leads to ghosting of the moving object on the spliced image.
  • the mobile phone can attach the target telephoto image 1 and the target telephoto image 2 to the ghost-free wide-angle image, thereby marking the connected area of the moving object images of the target telephoto image 1 and the target telephoto image 2 on the fitted overall image, that is, the ghost area.
  • the wide-angle image without ghost image may be a wide-angle image without ghost image in a certain frame, or a wide-angle image obtained after performing ghost removal processing.
  • the mobile phone can fit the target telephoto image 1 and the target telephoto image 2 to the ghost-free wide-angle image during the registration process, and mark the connected area.
  • the mobile phone can perform motion vector estimation and other processing in combination with multiple frames of wide-angle images collected during the photographing process, thereby determining the connected area of the moving object image on the target telephoto image 2 and the target telephoto image 1 .
  • The mobile phone can delete the image area of the moving object on the stitched image of the target telephoto image 2 and the target telephoto image 1, and fill the deleted area with the content of the corresponding position on the ghost-free wide-angle image, so as to prevent ghosting of the moving object from appearing on the stitched image.
  • For the reference wide-angle image, refer to (a) in FIG. 12D.
  • For the target telephoto image 1, see (b) in FIG. 12D; it includes a ghost image 1231 of the moving object.
  • For the target telephoto image 2, see (c) in FIG. 12D; it includes a ghost image 1232 of the moving object.
  • The mobile phone deletes the ghost images of the moving object on the target telephoto image 1 and the target telephoto image 2, and fills the deleted areas with the content of the corresponding positions on the reference wide-angle image, so as to obtain the stitched image shown in (d) in FIG. 12D.
  • the mobile phone can eliminate the ghosting of moving objects on the stitched image of the target telephoto image during the photographing process, and present a clear, ghost-free stitched image to the user on the shooting interface, thereby improving the image display effect during the photographing process.
  • Ghost images may also appear across consecutive multiple frames of target telephoto images; for example, the ghost area on consecutive multiple frames of target telephoto images may be a cat whose body appears dragged out (smeared) over a long distance.
  • the mobile phone can use the same stitching method to stitch the subsequent target telephoto image k (k is an integer greater than 2) with the previously generated stitched image, thereby generating a new stitched image.
  • the mobile phone can stitch the target telephoto image 3 with the previously generated stitched image to generate a new stitched image.
  • the splicing of the target telephoto image 3 with the previous spliced image can also be understood as the splicing of the target telephoto image 3 and the target telephoto image 2 adjacent to the target telephoto image 3 in the previous spliced image.
  • the stitching process of the subsequent target telephoto images will not be described in detail here.
  • the non-first frame target telephoto image may be spliced with one or more adjacent target telephoto images.
  • For example, registration and stitching may be performed only with the target telephoto image corresponding to the grid in the first row and second column, or the image may be registered and stitched with all of its adjacent target telephoto images.
  • After obtaining the target telephoto image 1 that matches grid 1 (that is, the first grid to be matched), the mobile phone can match and fuse the thumbnail of the target telephoto image 1 with the wide-angle image serving as the background image, and then display the thumbnail of the target telephoto image 1 on the shooting interface, so that the user can see the real picture of the target telephoto image 1 that matches grid 1.
  • the thumbnail of the target telephoto image 1 is attached to an area on the wide-angle image that has the same image content as the target telephoto image 1 .
  • the position and size of the thumbnail of the target telephoto image 1 may have some deviations from the matched grid 1, or may be exactly the same as the position and size of the matched grid 1.
  • the thumbnail image of the target telephoto image 1 is overlaid on the area of the same image content of the wide-angle image.
  • the thumbnail of the target telephoto image 1 is an image obtained by down-sampling the target telephoto image 1 collected by the telephoto camera.
  • the wide-angle image displayed by the mobile phone on the preview interface and the shooting interface is usually an image obtained by down-sampling the image collected by the wide-angle camera. Since the thumbnail of the target telephoto image and the wide-angle image displayed on the interface come from different cameras and data sources, the display effect of the interface may be different before and after the thumbnail of the target telephoto image is attached.
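  • As a rough sketch of this display step (assuming the matched region on the wide-angle background is already known from registration; names are illustrative), the thumbnail can be obtained by a down-sampling resize and pasted over the matched area:

```python
import cv2

def attach_thumbnail(background, tele_image, region):
    # `region` = (x, y, w, h): area of `background` whose content matches
    # `tele_image`, as determined by the matching/fusion step described above.
    x, y, w, h = region
    thumb = cv2.resize(tele_image, (w, h), interpolation=cv2.INTER_AREA)  # down-sample
    composed = background.copy()
    composed[y:y + h, x:x + w] = thumb    # overlay the thumbnail on the matched area
    return composed
```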
  • the mobile phone may display a thumbnail of the spliced image on the photographing interface, so as to facilitate the user to know the current real-time photographing progress. For example, after matching and fusing the stitched image thumbnail with the wide-angle image as the background image, the mobile phone can fit the stitched image thumbnail on the area where the wide-angle image and the stitched image thumbnail have the same image content.
  • the stitched image thumbnail can be obtained in different ways.
  • The mobile phone registers and stitches the down-sampled adjacent target telephoto images to obtain the stitched image thumbnail, and attaches the stitched image thumbnail to the area on the wide-angle image that has the same image content as the stitched image thumbnail.
  • the processing process of obtaining the stitched image thumbnail by the mobile phone is relatively simple, and the processing time is short, and the stitched image thumbnail can be obtained in real time and displayed on the shooting interface in time to avoid stuttering.
  • the process of obtaining the stitched image thumbnail by the mobile phone is processed in parallel with the process of obtaining the stitched image by the mobile phone.
  • The stitched image can be obtained by stitching in real time during the photographing process, or it can be obtained by stitching all at once after the target telephoto images matching all the grids are obtained or after shooting ends.
  • the mobile phone can obtain and display the stitched image thumbnails in real time according to the target telephoto image obtained each time during the photographing process.
  • the mobile phone downsamples the stitched image to obtain a stitched image thumbnail, and fits the stitched image thumbnail on the wide-angle image on an area having the same image content as the stitched image thumbnail.
  • the position and size of the thumbnails of the spliced images may have some deviations from the positions and sizes of the multiple grids that have been matched, or may be exactly the same as the positions and sizes of the multiple grids that have been matched.
  • the mobile phone fits the stitched image thumbnail on the wide-angle image of the shooting interface, allowing users to feel the real, real-time stitching process and stitching progress.
  • the following describes the display situations of the spliced image thumbnails on the shooting interface, respectively, in the two cases that the background image is fixed and not fixed.
  • the user can move the telephoto camera by moving the mobile phone, or the user can directly move the telephoto camera, or the mobile phone can automatically control the telephoto camera to move.
  • When the background image on the shooting interface is fixed as the reference wide-angle image, the relative position of the guide frame and the background image remains unchanged, the content of the background image corresponding to each grid also remains unchanged, and the position of the telephoto frame relative to the background image can change in real time.
  • The mobile phone first matches the grids in the middle row of the guide frame, then matches the grids in the row above, and then matches the grids in the row below the guide frame.
  • the photographing interface displaying the thumbnails of the spliced images can be referred to (c) in FIG. 12A .
  • the telephoto camera continues to move.
  • the mobile phone displays a real-time telephoto frame corresponding to the shooting range of the telephoto camera on the shooting interface.
  • the photographing interface displaying the thumbnails of the stitched images can be seen in (d) of FIG. 12A .
  • the photographing interface displaying the thumbnails of the spliced images can be referred to (e) in FIG. 12A .
  • The shooting interface showing the stitched image thumbnail can be seen in (f) in FIG. 12A, where the mobile phone generates a stitched image thumbnail corresponding to the 3*3 grids.
  • the schematic diagram of the photographing interface may refer to (a)-(e) in FIG. 14 .
  • When the background image is not fixed and is a wide-angle image collected in real time by the mobile phone, the position of the telephoto frame is determined by the lens centers of the wide-angle camera and the telephoto camera, the object distance, and the sizes of the fields of view. When the object distance is basically unchanged, since the relative position of the lens centers of the wide-angle camera and the telephoto camera remains unchanged, the relative position of the telephoto frame and the background image is basically unchanged. For example, when taking a photo, the telephoto frame is always positioned near the middle of the background image.
  • When the mobile phone moves, the wide-angle camera also moves and the content of the wide-angle image serving as the background image changes, but the content of the background image corresponding to each grid remains unchanged. It can also be understood that, during the photographing process, the correspondence between a grid and the wide-angle image content in that grid remains unchanged; that is, the grid is bound to the content of the wide-angle image serving as the background image.
  • Mobile phones can bind grids to wide-angle image content in a number of ways.
  • For example, the mobile phone can record the content of the wide-angle image corresponding to each grid after the photographing operation is detected; in the subsequent photographing process, the mobile phone can use an image matching method to match the grids with the content of the wide-angle image acquired in real time, thereby binding the grids to the wide-angle image content.
  • For another example, the mobile phone can record the coordinate position of each grid and the corresponding wide-angle image content after the photographing operation is detected; in the subsequent photographing process, the mobile phone can determine the translation amount and/or rotation amount, and calculate the new coordinate position of each grid and the corresponding wide-angle image content according to the translation amount and/or rotation amount, thereby binding the grids to the wide-angle image content.
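  • A possible sketch of the second binding approach (assumed, not taken from the patent): the translation/rotation between the reference wide-angle image and the live wide-angle frame is estimated from feature matches, and the recorded grid coordinates are moved by the same transform. All helper names are illustrative.

```python
import cv2
import numpy as np

def shift_grid(grid_boxes, ref_wide, live_wide):
    # grid_boxes: list of (x, y, w, h) rectangles recorded on the reference frame.
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(ref_wide, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(live_wide, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # 2x3 translation/rotation estimate
    shifted = []
    for x, y, w, h in grid_boxes:
        nx, ny = M @ np.array([x, y, 1.0])         # move the grid's top-left corner
        shifted.append((int(nx), int(ny), w, h))   # grid size kept unchanged for simplicity
    return shifted
```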
  • For example, when the wide-angle camera and the telephoto camera move to the right at the same time, the field of view and image content of the wide-angle image move to the right, the position of the guide frame on the screen shifts to the left together with the wide-angle image content to which it is bound, and the position of the telephoto frame relative to the wide-angle image remains largely unchanged.
  • When the background image is a wide-angle image collected in real time by the mobile phone, a schematic diagram of the shooting interface can be seen in (a) in FIG. 13C, in which a thumbnail of the target telephoto image 1 is displayed on the shooting interface. After the grids in the middle row are matched, the schematic diagram of the shooting interface can be seen in (b) in FIG. 13C, in which the stitched image thumbnail is displayed on the shooting interface. After all the grids are matched, the schematic diagram of the shooting interface can be seen in (c) in FIG. 13C, in which the stitched image thumbnail is displayed on the shooting interface.
  • During this process, the wide-angle image is shifted to the right, the guide frame is shifted to the left, and the telephoto frame remains substantially located in the middle of the wide-angle image.
  • FIG. 12A is described by taking an example that the thumbnail of the target telephoto image is basically aligned with the edge of the matched grid.
  • The edges of the stitched image may be jagged and not substantially aligned with the edges of the matched grids.
  • the telephoto camera and the wide-angle camera may not move synchronously.
  • the background image is not fixed and is a wide-angle image collected in real time by the mobile phone, for example, when the background image changes in real time due to the shaking of the mobile phone, the relative position of the guide frame and the background image will change in real time with the content of the background image.
  • the content of the background image corresponding to each grid remains basically unchanged, and the relative position of the telephoto frame and the background image can change in real time as the telephoto camera moves.
  • The size, position, and content of the stitched image thumbnail are basically the same as those of the area it covers on the wide-angle image.
  • the size of the stitched image thumbnail is small, which is inconvenient for users to view the image details.
  • In response to a preset operation by the user, such as a click or a long press, the mobile phone can zoom in and display the stitched image thumbnail on the shooting interface, so that the user can clearly see, at any time, the specific details of the objects in the stitched target telephoto images captured by the telephoto camera.
  • To prevent the stitched image thumbnail from being too small for the user to view image details (especially when the equivalent focal lengths of the telephoto camera and the wide-angle camera differ greatly, for example, when the ratio of the equivalent focal lengths is greater than a certain preset value), the mobile phone can automatically zoom in and display the stitched image thumbnail on the shooting interface. Similarly, the mobile phone can also enlarge and display the thumbnail of the target telephoto image 1 on the shooting interface.
  • the magnified display multiple may be a default value, or may be related to the ratio of the equivalent focal length of the wide-angle camera to the telephoto camera, or may be a value set by the user, which is not limited.
  • The enlarged thumbnail of the target telephoto image 1 or the enlarged stitched image thumbnail should not block the grids to be matched.
  • For example, when shooting in a grid order from left to right, the right side of the enlarged thumbnail of the target telephoto image 1 or of the stitched image thumbnail on the shooting interface is aligned with the right side of the most recently matched grid, that is, with the left side of the grid to be matched, so as to prevent the thumbnail of the target telephoto image 1 and the stitched image thumbnail from blocking the grid to be matched on the right side.
  • Similarly, the lower side of the enlarged thumbnail of the target telephoto image 1 or of the stitched image thumbnail may be aligned with the lower side of the most recently matched grid, that is, with the upper side of the grid to be matched below, so as to prevent the thumbnail of the target telephoto image 1 and the stitched image thumbnail from blocking the grid to be matched on the lower side.
  • In addition, the grids in the middle row of the guide frame usually correspond to the image content that the user most wants to capture, so the thumbnail of the target telephoto image 1 and the stitched image thumbnail should avoid blocking the unmatched grids in the middle row as much as possible.
  • the mobile phone displays the target area image corresponding to the guide frame on the wide-angle image on the shooting interface, instead of displaying the complete wide-angle image.
  • The ratio of the size of the target area image to the size of the guide frame is r, where r ≥ 1.
  • the target area image can be obtained by cropping and zooming in on the full wide-angle image. In this way, the size of the guide frame and the grid displayed on the shooting interface is larger, which is convenient for the mobile phone to perform mobile shooting and matching according to the larger size grid.
  • When the equivalent focal length of the telephoto camera differs greatly from the equivalent focal length of the wide-angle camera, the guide frame, the grids, and the thumbnail of the target telephoto image 1 / the stitched image thumbnail are small in size and inconvenient for the user to view. The mobile phone can enlarge the target area image, the guide frame, the thumbnail of the target telephoto image 1 / the stitched image thumbnail, and the like, and display them on the shooting interface, so that the user can view the specific content of the larger thumbnail of the target telephoto image 1 / stitched image thumbnail.
  • the complete wide-angle image on the shooting interface shown in (d) of FIG. 12A may be replaced with the target area image 1500 in the wide-angle image on the shooting interface shown in FIG. 15C .
  • Here, the target area image, the guide frame, and the stitched image thumbnail have been enlarged in equal proportions, and the ratio r of the size of the target area image to the size of the guide frame is greater than 1.
  • the mobile phone displays the thumbnail of the target telephoto image 1 and the thumbnail of the spliced image on the photographing interface.
  • In some other embodiments, the mobile phone may not display the thumbnail of the target telephoto image 1 or the stitched image thumbnail, but only display a stitching frame on the wide-angle image, where the stitching frame is the border of the thumbnail of the target telephoto image 1 or of the stitched image thumbnail.
  • the mobile phone can remind the user of the current shooting progress through the splicing box, and does not need to acquire and display the thumbnail of the target telephoto image 1 and the thumbnail of the spliced image, which can reduce the processing load of the mobile phone.
  • In some other embodiments, the mobile phone may not display the thumbnail of the target telephoto image 1 or the stitched image thumbnail, but only highlight (for example, brighten or bold) the borders of the matched grids. In this way, the mobile phone can remind the user of the current shooting progress through the highlighted grids, and does not need to acquire and display the thumbnail of the target telephoto image 1 and the stitched image thumbnail, which can reduce the processing load of the mobile phone.
  • During the photographing process, the mobile phone can also give corresponding prompts to the user according to the current real-time photographing situation. For example, when the mobile phone moves too fast for the telephoto image to match the grid, the mobile phone can prompt the user to move the mobile phone slowly. For another example, when the moving direction of the mobile phone is opposite to the direction indicated by the shooting sequence, or when the mobile phone moves toward grids that have already been matched, the mobile phone may prompt the user to move the mobile phone in the indicated direction, or the mobile phone may directly terminate the shooting process and generate the final target image according to the target telephoto images already acquired. For another example, when the telephoto frame deviates far above the grid to be matched, the mobile phone can prompt the user to move the mobile phone downwards; when the telephoto frame deviates far below the grid to be matched, the mobile phone can prompt the user to move the mobile phone upwards.
  • On the one hand, the wide-angle images collected in real time by the mobile phone can be used for display as the background image when the background image is not fixed; on the other hand, they can also be used for processing related to moving objects, such as motion estimation and ghost removal.
  • In addition, the reference wide-angle image with better quality can be used as a reference to perform AE, AWB, and DRC configuration on the target telephoto images, as a reference to register and fuse the multiple target telephoto images, and as a reference to perform processing such as hole filling.
  • a target image is generated according to the stitched image.
  • the target image can be generated according to the stitched images obtained from the multiple frames of target telephoto images.
  • In some other embodiments, before generating the target image according to the stitched image, the mobile phone can also perform processing such as ghost removal, dynamic range enhancement, or hole filling on the stitched image, so as to improve the quality of the stitched image, generate the target image according to the processed stitched image, and thereby improve the quality of the target image.
  • For ghost removal, the mobile phone can perform ghost removal processing on the stitched image after shooting ends and before generating the target image according to the stitched image, so that the target image is generated according to the stitched image after ghost removal.
  • That is, the mobile phone can process the ghosts on the stitched image obtained after shooting all at once.
  • In order to make the position of the moving object on the final target image closer to what the user last sees, the mobile phone can retain the complete image of the moving object from the last frame of target telephoto image.
  • When the mobile phone generates the stitched image after shooting ends, it can likewise process the ghosts on the stitched image all at once.
  • For example, the mobile phone can delete the images of the moving object on the stitched image, retain the complete image of the moving object from a certain frame of target telephoto image or a certain frame of wide-angle image, and fill the holes with the content of the corresponding positions on the wide-angle image.
  • the mobile phone can perform motion vector estimation based on multiple frames of wide-angle images, so as to remove the connected areas of moving objects on the wide-angle images to obtain a complete image of moving objects without ghosts.
  • the mobile phone deletes the image area of the moving object on the spliced image, and fills the deleted area with the content of the corresponding position on the wide-angle image without ghosting.
  • This embodiment of the present application does not limit whether the ghost removal process is performed during the shooting process or after the shooting ends.
  • The stitched image of the multiple frames of target telephoto images may have a smaller dynamic range, fewer brightness levels, a smaller brightness range, insufficient contrast between bright and dark areas, and insufficient detail in the dark and bright parts.
  • The mobile phone can enhance the dynamic range of the entire stitched image after shooting ends and before generating the target image according to the stitched image. For example, a method of enhancing the dynamic range of a photographed photo (such as HDRnet) can be used to enrich the bright and dark levels of the stitched image and enhance its dynamic range.
  • the mobile phone can process the stitched images through the convolutional neural network (AI network) to directly obtain the effect of high dynamic range, thereby obtaining the target image with high dynamic range.
  • the mobile phone can also adjust the brightness distribution of the spliced image according to the brightness distribution of the reference wide-angle image, so as to increase the brightness range of the spliced image, thereby enhancing the dynamic range of the spliced image.
  • For example, the luminance values in the corresponding region of the reference wide-angle image range from 30 to 250, while the luminance values of the stitched image range from 180 to 230. The mobile phone can adjust the brightness values of some pixels on the stitched image whose brightness values are close to 180 (such as randomly selected pixels or pixels in edge regions) down into the range of 30 to 180, and adjust the brightness values of some pixels whose brightness values are close to 230 up into the range of 230 to 250, thereby increasing the brightness range of the stitched image and enhancing its dynamic range.
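  • A minimal sketch of this kind of brightness-range adjustment, assuming a simple linear remap of the stitched image's luminance toward the reference range (the percentile-based range estimate is an assumption, not from the patent):

```python
import cv2
import numpy as np

def match_luminance_range(stitched, ref_wide):
    # Stretch the stitched image's Y (luminance) range toward the range observed
    # in the reference wide-angle image; a rough approximation of the adjustment
    # of selected pixels described above.
    st_yuv = cv2.cvtColor(stitched, cv2.COLOR_BGR2YUV).astype(np.float32)
    ref_y = cv2.cvtColor(ref_wide, cv2.COLOR_BGR2YUV)[:, :, 0]
    lo_s, hi_s = np.percentile(st_yuv[:, :, 0], (1, 99))   # e.g. ~180..230
    lo_r, hi_r = np.percentile(ref_y, (1, 99))              # e.g. ~30..250
    y = (st_yuv[:, :, 0] - lo_s) / max(hi_s - lo_s, 1e-6)
    st_yuv[:, :, 0] = np.clip(y * (hi_r - lo_r) + lo_r, 0, 255)
    return cv2.cvtColor(st_yuv.astype(np.uint8), cv2.COLOR_YUV2BGR)
```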
  • the mobile phone can also extract the high-frequency component image of the reference wide-angle image, and fuse the high-frequency component image and the stitched image to increase the high-frequency detail of the stitched image (also called high-frequency component fusion).
  • The high-frequency component image includes, for example, pixels in edge regions of the reference wide-angle image and pixels in the transition regions between the edges of different objects.
  • The mobile phone can use a wavelet transform or deep learning to extract the high-frequency component image of the reference wide-angle image, so as to extract high-frequency details with large frequency and texture changes and superimpose them on the stitched image, thereby enhancing the high-frequency details of the stitched image and improving the clarity and detail of the stitched image and the target image.
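  • The following sketch illustrates the idea of high-frequency component fusion using a simple Gaussian high-pass in place of a wavelet transform (an assumed substitute); `sigma` and `gain` are hypothetical tuning parameters.

```python
import cv2
import numpy as np

def fuse_high_frequency(stitched, ref_wide, sigma=3, gain=1.0):
    # Extract the high-frequency component of the reference wide-angle image and
    # superimpose it on the stitched image to enhance fine detail.
    if ref_wide.shape[:2] != stitched.shape[:2]:
        ref_wide = cv2.resize(ref_wide, (stitched.shape[1], stitched.shape[0]))
    ref_f = ref_wide.astype(np.float32)
    high_freq = ref_f - cv2.GaussianBlur(ref_f, (0, 0), sigma)   # high-pass residual
    fused = stitched.astype(np.float32) + gain * high_freq
    return np.clip(fused, 0, 255).astype(np.uint8)
```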
  • The mobile phone can also synthesize multiple frames of wide-angle images into an image with high dynamic range, and apply the synthesized high dynamic range effect to the stitched image through a light and color equalization algorithm; that is, the mobile phone performs light and color equalization processing on the stitched image according to the dynamic range information, thereby enhancing the dynamic range of the stitched image.
  • the mobile phone can also superimpose the details of the stitched image onto the wide-angle image with high dynamic range effect, and finally obtain the target image with high dynamic range and rich texture details.
  • For example, the mobile phone can use multiple frames of wide-angle images with different exposures to synthesize a wide-angle image with high dynamic range, and then perform light and color equalization processing, or style transfer processing, on the stitched image based on the high-dynamic-range wide-angle image, so that the stitched image also has a high dynamic range effect.
  • For another example, the mobile phone may use multiple frames of wide-angle images with different exposures to synthesize a wide-angle image with high dynamic range.
  • The mobile phone can then extract the details and textures of the stitched image, overlay them onto the wide-angle image with high dynamic range, and fuse the details, color, and brightness, finally obtaining a target image with high dynamic range and high-definition detail.
  • the algorithm for enhancing the dynamic range is not limited to the traditional algorithm or the algorithm of the convolutional neural network.
  • the algorithm for synthesizing a wide-angle image with high dynamic range from multiple frames of wide-angle images is not limited to traditional algorithms or convolutional neural network algorithms.
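  • As one concrete example of a traditional multi-frame algorithm (the patent leaves the exact choice open), OpenCV's Mertens exposure fusion can merge differently exposed, aligned wide-angle frames into a single high-dynamic-range-looking image:

```python
import cv2
import numpy as np

def merge_wide_exposures(wide_frames):
    # `wide_frames`: list of aligned uint8 wide-angle frames at different exposures.
    merge = cv2.createMergeMertens()
    fused = merge.process(wide_frames)          # float result, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```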
  • The mobile phone can also perform hole filling on the stitched image, and then generate the target image according to the stitched image after the holes are filled. Since there may be misalignment between the target telephoto images, the edges of the stitched image may not be smooth enough, and there may be empty edges between image parts from different target telephoto images. Therefore, the mobile phone can also fill in the empty edges according to the reference wide-angle image, so as to obtain a neat, regular rectangular or square stitched image.
  • Because the mobile phone fills the empty edges according to the corresponding content of the reference wide-angle image, it does not need to crop the target telephoto images along the empty edges, which would cause loss of field of view and image resolution during stitching. The stitched image can therefore keep a large field of view and image resolution, so that the target image generated from the stitched image also has a large field of view and image resolution.
  • the mobile phone can use the corresponding areas of multiple frames of wide-angle images for high-resolution synthesis, or use the image super-resolution algorithm to process the corresponding areas of a single frame of reference wide-angle images.
  • In this way, the content used to fill the holes and empty edges can have a higher resolution, improving the user experience.
  • the stitched image with empty edges obtained by the mobile phone according to multiple frames of target telephoto images can refer to the image framed by the solid line shown in (a) in FIG. 15D .
  • After filling according to the reference wide-angle image, the obtained stitched image can be seen as the image framed by the solid line shown in (b) in FIG. 15D.
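  • A minimal sketch of this edge-filling step, assuming the stitched content, its coverage mask, and the reference wide-angle crop have already been aligned to the guide frame's rectangular size (all names are illustrative):

```python
import numpy as np

def pad_to_rectangle(stitched, coverage_mask, ref_wide_crop):
    # `coverage_mask`: non-zero where telephoto content exists; zero on the
    # jagged/empty edges. Empty areas are taken from the reference wide-angle
    # crop instead of being cropped away, preserving field of view.
    out = ref_wide_crop.copy()
    out[coverage_mask > 0] = stitched[coverage_mask > 0]
    return out
```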
  • In the embodiments of the present application, the reference wide-angle image with better quality is used as a reference to perform AE and AWB adjustment and DRC configuration on the target telephoto images, as a reference to register and fuse the multiple frames of target telephoto images, as a reference for hole filling, and as a reference to enhance the dynamic range of the entire stitched image after shooting.
  • Each frame of target telephoto image is processed according to the same reference wide-angle image, which keeps the effect of each target telephoto image as consistent as possible with the reference wide-angle image, so that the overall effect of the final stitched image and the target image is more consistent with the image effect of the reference wide-angle image, and the final stitched image and target image as a whole look more natural, with smooth transitions and better quality.
  • the mobile phone may prompt the user that the target image is being generated during the processing, so as to prevent the user from mistakenly thinking that the mobile phone is stuck or other abnormal conditions have occurred.
  • the mobile phone can prompt the user through text prompts, voice prompts, or a rotating circle logo.
  • For example, the mobile phone can prompt the user that the target image is currently being generated through text information such as "Processing, please wait" together with a rotating circle mark.
  • There may be various ways for the mobile phone to determine that shooting has ended, and there may also be various ways for the mobile phone to generate the target image according to the stitched image.
  • the mobile phone determines that the photographing is ended after detecting the user's operation of stopping photographing.
  • For example, after the mobile phone detects that the user clicks the stop-photographing control 1200 shown in (e) in FIG. 12A, it determines that photographing has ended.
  • the operation to stop taking pictures may also be other gesture operations or user voice instruction operations, etc.
  • the embodiment of the present application does not limit the operation of triggering the mobile phone to end the taking pictures process.
  • the shooting ends automatically.
  • the target image is generated by intercepting the part corresponding to the guide frame from the reference wide-angle image.
  • If the mobile phone determines that shooting has ended before all the grids in the guide frame have been shot, and the shooting process is performed row by row, the mobile phone discards the portions of the stitched image corresponding to rows of grids that have not been completely shot, and generates the target image according to the stitched image corresponding to the complete rows of grids that have been shot, so that the target image is an image corresponding to complete rows of grids.
  • For example, the target image generated according to the stitched images corresponding to the grids in the middle row and the upper row of the guide frame is shown in FIG. 17.
  • If the mobile phone determines that shooting has ended before all the grids in the guide frame have been shot, and the shooting process is performed column by column, the mobile phone removes the portions of the stitched image corresponding to columns of grids that have not been completely shot, and generates the target image according to the stitched image corresponding to the complete columns of grids that have been shot, so that the target image is an image corresponding to complete columns of grids.
  • In some other embodiments, if the mobile phone determines that shooting has ended before all the grids in the guide frame have been shot, and the shooting process is performed row by row, the mobile phone supplements the stitched image, according to the reference wide-angle image, with the image corresponding to the most recent row of grids, thereby generating the target image.
  • Similarly, if the mobile phone determines that shooting has ended before all the grids in the guide frame have been shot, and the shooting process is performed column by column, the mobile phone supplements the stitched image, according to the reference wide-angle image, with the image corresponding to the most recent column of grids, thereby generating the target image.
  • In some other embodiments, if the mobile phone determines that shooting has ended before all the grids in the guide frame have been shot, the photographing process is considered abnormal, and the mobile phone does not generate the target image.
  • the mobile phone can also prompt the user that the shooting is stopped or the shooting is abnormal.
  • the phone automatically ends capturing and generates a target image based on the stitched image.
  • The size of the target image may be the same as the size of the guide frame. If the size of the stitched image is inconsistent with the size of the guide frame due to misalignment between the target telephoto images during stitching, the mobile phone can crop the stitched image, or fill it according to the reference wide-angle image, so that the size of the stitched image is consistent with the size of the guide frame. Exemplarily, for the target image generated by the mobile phone, see (b) in FIG. 17.
  • In some other embodiments, the mobile phone can fill the stitched image according to the reference wide-angle image, so as to obtain a target image with a regular shape such as a rectangle or a square; in this case, the position and size of the target image may differ from the position and size of the guide frame.
  • The resolution of the wide-angle image collected by the wide-angle camera and the resolution of the telephoto image collected by the telephoto camera are both relatively large. Since the field of view of the wide-angle camera is much larger than that of the telephoto camera, the number of pixels per unit field of view in the telephoto image is much larger than the number of pixels per unit field of view in the wide-angle image.
  • the resolution of the wide-angle image collected by the wide-angle camera is 4000*3000, that is, 12 million pixels
  • the resolution of the telephoto image collected by the telephoto camera is 3264*2448, that is, 8 million pixels.
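  • An illustrative back-of-the-envelope comparison of pixels per unit field of view for the example resolutions above (the field-of-view figures are assumed, not taken from the patent):

```python
wide_res_w, tele_res_w = 4000, 3264   # horizontal resolutions quoted above
wide_fov, tele_fov = 80.0, 16.0       # assumed horizontal fields of view (degrees)

wide_density = wide_res_w / wide_fov  # ~50 pixels per degree
tele_density = tele_res_w / tele_fov  # ~204 pixels per degree
print(f"telephoto vs. wide pixel density: {tele_density / wide_density:.1f}x")
```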
  • The target image generated by the mobile phone based on the stitched image of the multiple frames of target telephoto images has higher definition and clearer details than the wide-angle image, and the shooting effect is better.
  • the mobile phone can obtain a high-definition target image with a large field of view by stitching multiple frames of target telephoto images.
  • the telephoto image in the foregoing embodiment is a single-frame image collected by the telephoto camera.
  • In some other embodiments, each telephoto image involved in the foregoing embodiments may also be a single frame of better-quality image generated by registering and fusing multiple frames (for example, 2 frames or 3 frames) of telephoto images collected by the telephoto camera.
  • In addition, the target image obtained by stitching target telephoto images and saved in the mobile phone can be specially identified to distinguish it from other images, so that the user can intuitively recognize this type of image.
  • the target image obtained by the mobile phone has a character identification 1801 of "cp" displayed.
  • a specific symbol 1802 is displayed on the target image obtained by the mobile phone.
  • the photographing method described in the above embodiments may be referred to as a solution for displaying a guide frame.
  • Some other embodiments of the present application provide another shooting method. Different from the above-mentioned embodiments, the mobile phone does not display a guide frame on the preview interface and the shooting interface.
  • the shooting method may include:
  • the mobile phone starts the camera function.
  • the mobile phone can take a photograph through the solution provided by the embodiments of the present application without displaying the guide frame.
  • After the mobile phone activates the photographing function and enters the target photographing mode, it can take photographs by using the solution provided by the embodiments of the present application without displaying the guide frame.
  • the target photographing mode is the aforementioned wide-view mode.
  • For other related descriptions of step 1900, reference may be made to the description in the above-mentioned step 200, which will not be repeated here.
  • the mobile phone displays a wide-angle image and a telephoto frame on the preview interface.
  • the mobile phone displays a wide-angle image on the preview interface.
  • The mobile phone displays a telephoto frame on the preview interface to let the user know the real-time shooting range of the telephoto camera, and the mobile phone does not display the guide frame on the preview interface.
  • For example, the preview interface of the mobile phone includes a wide-angle image and a telephoto frame.
  • For other related descriptions of step 1901, reference may be made to the description in the above-mentioned step 201, and details are not repeated here.
  • After detecting the user's photographing operation, the mobile phone displays on the photographing interface a wide-angle image and a telephoto frame superimposed on the wide-angle image.
  • In step 1902, after the mobile phone detects the user's photographing operation, it does not display the guide frame on the shooting interface; the mobile phone displays the wide-angle image and the telephoto frame on the shooting interface.
  • For other related descriptions of step 1902, reference may be made to the descriptions in the foregoing step 202, and details are not repeated here.
  • the mobile phone generates a stitched image according to the acquired target telephoto image, and displays a thumbnail of the stitched image on the shooting interface.
  • When the user wants to shoot the target image, the user can refer to the wide-angle image on the preview interface and move the mobile phone or the telephoto camera to compose the picture, so that the shooting range of the telephoto camera and the telephoto frame are located at the starting position of the area the user wants to capture. Then, the user can trigger a photographing operation. After the mobile phone detects the user's photographing operation, it can determine the configuration parameters of the telephoto camera according to the image block of the reference wide-angle image corresponding to the shooting range of the telephoto camera.
  • the configuration parameters may include configuration parameters such as AE, AWB, or DRC.
  • the mobile phone collects the telephoto image through the telephoto camera according to the configuration parameters such as AE, AWB or DRC, and obtains the first frame of the target telephoto image, that is, the target telephoto image 1.
  • In the solution of displaying the guide frame described above, the target telephoto image needs to match a grid; in this solution, the target telephoto image does not need to match a grid of the guide frame.
  • When a telephoto image collected subsequently matches the target telephoto image 1, that frame of telephoto image may be used as the target telephoto image 2.
  • The matching of the target telephoto image 2 with the target telephoto image 1 includes: the deviation between the abscissa range of the target telephoto image 2 and the abscissa range of the target telephoto image 1 is less than or equal to a preset threshold 1, or the deviation between the ordinate range of the target telephoto image 2 and the ordinate range of the target telephoto image 1 is less than or equal to a preset threshold 2. That is to say, the target telephoto image 2 and the target telephoto image 1 are basically distributed side by side left and right, or basically distributed one above the other.
  • The matching of the target telephoto image 2 with the target telephoto image 1 may also include: the overlapping area between the target telephoto image 2 and the target telephoto image 1 is greater than or equal to a preset value 8, or the gap between the target telephoto image 2 and the target telephoto image 1 is less than or equal to a preset value 9, and the like.
  • For example, when the mobile phone determines that the overlap between the matching content of the collected telephoto image on the reference wide-angle image and the matching content of the target telephoto image 1 on the reference wide-angle image is greater than or equal to the preset value 8, the collected telephoto image is matched with the target telephoto image 1 and serves as the target telephoto image 2.
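  • The matching conditions above could be checked roughly as follows (a sketch; the thresholds stand in for the patent's preset threshold 1/2 and preset values 8/9 and are purely illustrative):

```python
def matches_previous(box1, box2, max_axis_dev=40, min_overlap_ratio=0.15):
    # box1: target telephoto image 1, box2: newly collected telephoto image,
    # both as (x, y, w, h) rectangles in reference wide-angle coordinates.
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2
    side_by_side = abs(y1 - y2) <= max_axis_dev     # roughly the same row
    stacked = abs(x1 - x2) <= max_axis_dev          # roughly the same column
    ox = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))
    oy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))
    overlap_ratio = (ox * oy) / float(w1 * h1)      # overlap relative to image 1
    return (side_by_side or stacked) and overlap_ratio >= min_overlap_ratio
```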
  • Since no guide frame is displayed, the user cannot move the mobile phone or the telephoto camera according to the instructions of a guide frame.
  • the user can move the mobile phone or directly move the telephoto camera according to the position of the obtained image content of the target telephoto image 1 relative to the global wide-angle image, as well as his own needs, habits or wishes.
  • the mobile phone can adjust the configuration parameters of the telephoto camera in real time according to the image blocks of the reference wide-angle image corresponding to the shooting range of the telephoto camera, so as to collect the target telephoto image according to the configuration parameters.
  • That is, the reference wide-angle image is used to guide the AE, AWB, and DRC configuration of the target telephoto images.
  • the mobile phone may perform splicing with the previously obtained spliced image every time a new target telephoto image is obtained, thereby generating a new spliced image.
  • the mobile phone can stitch the target telephoto image 1 and the target telephoto image 2 to obtain a stitched image.
  • the mobile phone obtains a new target telephoto image, and the new target telephoto image can be stitched with the previously obtained stitched image to obtain a new stitched image.
  • the mobile phone generates a stitched image according to the target telephoto image after the shooting is completed. This embodiment of the present application does not limit the stitching timing of the stitched images.
  • For the image stitching process, reference may be made to the description in the solution of displaying the guide frame above, which will not be repeated here.
  • the mobile phone can display the thumbnail of the target telephoto image 1 on the shooting interface.
  • a photographing interface displaying a thumbnail of the target telephoto image 1 see (b) in FIG. 20 .
  • the mobile phone can display the stitched image thumbnails on the photographing interface to prompt the user of the current real-time photographing progress.
  • the photographing interface displaying the thumbnails of the stitched images please refer to (c)-(d) in FIG. 20 .
  • the mobile phone displays a splicing frame on the photographing interface.
  • the mobile phone can automatically enlarge and display the thumbnail of the target telephoto image 1 and the stitched image thumbnail on the shooting interface.
  • the photographing interface for zooming in and displaying the thumbnail of the target telephoto image 1 may refer to (a) in FIG. 21
  • the photographing interface for zooming and displaying the thumbnail of the spliced image may refer to (b)-(c) in FIG. 21 .
  • the mobile phone displays the target area image corresponding to the guide frame on the wide-angle image on the shooting interface, instead of displaying the complete wide-angle image.
  • The ratio of the size of the target area image to the size of the guide frame is r, where r ≥ 1.
  • the target area image can be obtained by cropping and zooming in on the full wide-angle image.
  • a target image is generated according to the stitched image.
  • Similar to the solution of displaying the guide frame, before generating the target image according to the stitched image, the mobile phone can also perform processing such as ghost removal, dynamic range enhancement, or hole filling on the stitched image, so as to improve the quality of the stitched image, generate the target image according to the processed stitched image, and improve the quality of the target image, which will not be repeated here.
  • the mobile phone ends the photographing after detecting the user's operation of stopping the photographing.
  • the mobile phone can preset a maximum number of shooting frames, and the mobile phone automatically ends the shooting after acquiring the target telephoto image with the preset number of frames.
  • the target image is generated according to the stitched image of the target telephoto image.
  • For example, the mobile phone can trim (minimally crop) the edges of the stitched image to obtain a stitched image with a regular shape such as a rectangle or a square, so as to generate the target image according to that stitched image.
  • For a schematic diagram of the stitched image thumbnail obtained by the mobile phone according to the target telephoto images, refer to (a) in FIG. 22; for a schematic diagram of the new stitched image thumbnail obtained by the mobile phone after cropping, refer to (b) in FIG. 22.
  • Alternatively, the mobile phone can fill the vacant parts according to the content of the corresponding positions on the reference wide-angle image, so as to obtain a stitched image with a regular shape such as a rectangle or a square, and generate the target image according to that stitched image.
  • For a schematic diagram of the stitched image thumbnail obtained by the mobile phone according to the target telephoto images, refer to (a) in FIG. 23A; for a schematic diagram of the stitched image thumbnail after filling, refer to (b) in FIG. 23A.
  • In some other embodiments, regardless of whether the edges of the stitched image are aligned, the mobile phone generates the target image according to the stitched image, and the size of the target image is consistent with the size of the stitched image.
  • the size of the target image obtained by the mobile phone may not be the size desired by the user. Therefore, in a possible technical solution, the user can also edit the generated target image in the gallery to obtain the target image of the desired size.
  • the user can also edit the stitched image to obtain a stitched image of an ideal size, thereby generating a target image of an ideal size.
  • For example, after shooting is completed, the mobile phone may prompt the user on the shooting interface to set the range of the target image on the stitched image.
  • After the mobile phone detects the user's operation of setting the dashed frame 2300 and clicking the OK control, the stitched image is cropped according to the dashed frame 2300, thereby generating the target image within the range indicated by the user, as shown in (b) in FIG. 23B.
  • Compared with the wide-angle image, the target image generated by the mobile phone based on the stitched image of the multiple frames of target telephoto images may have a smaller field of view, but its sharpness is higher, its details are clearer, and the shooting effect is better.
  • the mobile phone can obtain a high-definition target image with a large field of view by stitching multiple frames of target telephoto images.
  • In addition, the user can move the mobile phone or the telephoto camera to collect telephoto images according to the user's own needs, so as to obtain target telephoto images at the desired positions and then obtain a stitched image and a target image that match the user's intention.
  • the mobile phone can obtain the target image of the corresponding size or shape according to the stitched image of any specification, which can facilitate the user to shoot and obtain the target image of various sizes in wide format, square format or panoramic format.
  • Some other embodiments of the present application further provide a shooting method, which can obtain a stitched image with a larger field of view and higher clarity by stitching telephoto images with a smaller field of view, and then crop the stitched image to obtain clear target images corresponding to different target zoom ratios.
  • the mobile phone does not need to use digital zoom to enlarge the image, so the high resolution of the telephoto camera and the high definition of the telephoto image can be retained, and the zoom effect of optical zoom can be realized.
  • This scheme combines stitching and cropping of telephoto images, and can also be called a hybrid zoom scheme.
  • the hybrid zoom scheme may include:
  • the mobile phone starts the camera function.
  • the hybrid zoom method provided by the embodiments of the present application can be used for processing.
  • After the mobile phone starts the photographing function and enters the target photographing mode, processing can be performed by using the hybrid zoom method provided by the embodiments of the present application.
  • Here, description is given by taking an example in which the target photographing mode is a hybrid zoom mode.
  • For other related descriptions of step 2400, reference may be made to the description in the foregoing step 200, and details are not repeated here.
  • the mobile phone displays a wide-angle image on the preview interface.
  • The mobile phone may not display the guide frame on the preview interface at first, and then display the guide frame corresponding to the target zoom ratio on the preview interface after subsequently obtaining the target zoom ratio.
  • the phone does not display a telephoto frame on the preview interface. In other embodiments, the mobile phone continuously displays the telephoto frame on the preview interface. In some other embodiments, the mobile phone does not display the telephoto frame on the preview interface first, and displays the telephoto frame on the preview interface after obtaining the target zoom magnification subsequently.
  • the mobile phone obtains the target zoom ratio.
  • the target zoom ratio is greater than the zoom ratio of the wide-angle camera (ie, the first camera), and is smaller than the zoom ratio of the telephoto camera (ie, the second camera).
  • the target zoom ratio is the zoom ratio of the target image obtained based on the stitched image of the target telephoto image, the zoom ratio of the final image the user wants to capture, not the zoom ratio of the wide-angle image as the background image. Before and after the user sets the target zoom magnification, the zoom magnification of the wide-angle image as the background image does not change.
  • the target zoom magnification is the zoom magnification set by the user.
  • The mobile phone may prompt the user to set the target zoom ratio by displaying information or by voice playback. Exemplarily, as shown in (a)-(c) in FIG. 25, the mobile phone prompts the user on the preview interface: in this mode, you can set a zoom ratio to capture a high-definition image corresponding to that zoom ratio.
  • For example, the preview interface of the mobile phone includes multiple optional zoom ratio controls, such as a 1.5X control, a 2X control, a 2.5X control, a 3X control, a 3.5X control, a 4X control, and a 4.5X control, and the mobile phone determines the corresponding target zoom ratio according to the zoom ratio control selected by the user.
  • a setting interface can be displayed, and the user can set the target zoom magnification based on the setting page.
  • the mobile phone after detecting the zooming/zooming operation of the user on the preview interface, the mobile phone obtains the corresponding target zoom ratio after the zooming/zooming operation.
  • the mobile phone after detecting the user's drag operation on the zoom magnification adjustment lever, the mobile phone obtains the corresponding target zoom magnification after the drag operation.
  • the mobile phone after the mobile phone detects the user's voice instruction to set the zoom ratio, the mobile phone obtains the target zoom ratio set by the user's voice.
  • When the target zoom ratio is greater than the zoom ratio of the wide-angle camera and less than the zoom ratio of the telephoto camera, the mobile phone can use the hybrid zoom method provided by the embodiments of the present application for processing. If the target zoom ratio is less than or equal to the zoom ratio of the wide-angle camera, the mobile phone may not use the hybrid zoom scheme, but directly generate the target image corresponding to the target zoom ratio according to the image captured by the wide-angle camera or an ultra-wide-angle camera. If the target zoom ratio is greater than or equal to the zoom ratio of the telephoto camera, the mobile phone may not use the hybrid zoom scheme, but directly generate the target image corresponding to the target zoom ratio according to the image captured by the telephoto camera or an ultra-telephoto camera.
  • the target zoom ratio may also be the default zoom ratio (for example, the default zoom ratio of the wide-angle camera) or the last used zoom ratio.
  • the mobile phone can also modify the target zoom ratio according to the user's operation.
  • the mobile phone superimposes and displays a guide frame corresponding to the target zoom magnification on the wide-angle image of the preview interface.
  • the guide frame in the hybrid zoom scheme corresponds to the target zoom magnification, and is a guide frame of the minimum specification including the image area size corresponding to the field angle of the target zoom magnification.
  • the image area corresponding to the field of view of the target zoom magnification is located in the middle of the wide-angle image by default.
  • For example, the zoom ratio of the wide-angle camera is 1X, the zoom ratio of the telephoto camera is 5X, and the guide frame can include at most 5*5 grids.
  • When the target zoom ratio is 2.5X, the field of view of the target zoom ratio corresponds to the image area 2601; for the guide frame corresponding to the target zoom ratio 2.5X, refer to the dashed guide frame 2602 shown in (a) in FIG. 26A. The guide frame 2602 includes 3*3 grids, the size of a grid corresponds to the field of view of the telephoto camera, and the size of the image area corresponding to the field of view of the target zoom ratio is 2.5 times the grid size.
  • the target zoom magnification is 2.5X
  • FIG. 26A including a guide frame 2604 corresponding to the target zoom magnification.
  • The image area corresponding to the target zoom ratio 3X is the same size as the guide frame. For the guide frame corresponding to the target zoom ratio 3X, refer to the dashed guide frame 2603 shown in (b) in FIG. 26A; the guide frame 2603 includes 3*3 grids, and the image size corresponding to the field of view of the target zoom ratio is 3 times the grid size.
  • In the preview state, the mobile phone can prompt the user, by displaying information, by voice broadcast, or the like, to shoot according to the guide frame during the photographing process, so that the generated stitched image can be cropped to obtain a target image that matches the target zoom ratio.
  • For example, the mobile phone can prompt the user by displaying information on the preview interface: please shoot according to the dashed guide frame during the photographing process to capture a stitched image; the stitched image will be cropped to obtain the target image at the zoom ratio you specified.
  • the mobile phone can also continuously display the target frame corresponding to the target zoom ratio on the wide-angle image of the preview interface.
  • the position and size of the target frame are consistent with the position and size of the image area corresponding to the field angle of the target zoom magnification.
  • The mobile phone can display the target frame corresponding to the target zoom ratio on the preview interface, so as to indicate to the user the position and size of the image area corresponding to the field of view of the current target zoom ratio, so that the user can know the range of the image that will be captured at the current target zoom ratio.
  • the guide frame is the smallest size guide frame including the target frame.
  • the size of the guide frame is greater than or equal to the size of the target frame, that is, greater than or equal to the image size corresponding to the field angle of the target zoom magnification.
  • the target frame is located in the middle of the wide-angle image by default.
  • the zoom ratio of the wide-angle camera is 1X
  • the zoom ratio of the telephoto camera is 5X
  • the guide frame can include up to 5*5 grids.
  • the target zoom magnification is 2.5X
  • the target frame corresponding to the target zoom magnification of 2.5X may refer to the solid-line rectangular frame 2601 shown in (b) of FIG. 26A .
  • the target zoom magnification is 3X
  • the size of the target frame corresponding to the target zoom magnification 3X is consistent with the size of the frame of the guide frame.
  • FIG. 26B for a schematic diagram of the preview interface, including a guide frame 2604 corresponding to the target zoom ratio and a target frame 2605 corresponding to the target zoom ratio.
  • the target frame on the preview interface is located in the middle of the wide-angle image by default.
  • the user can also move the position of the target frame, and the position of the guide frame also changes correspondingly with the change of the position of the target frame.
  • on the preview interface shown in (a) in FIG. 27, if the mobile phone detects that the user drags the target frame to the right, the target frame and the guide frame both move to the right, as shown in (b) in FIG. 27.
  • the user can move the position of the guide frame, and the position of the target frame changes accordingly.
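  • A minimal sketch of this linkage, assuming both frames are axis-aligned rectangles in wide-image pixel coordinates and that the guide frame is re-derived as the smallest grid-aligned frame containing the target frame; the coordinate convention and helper name are illustrative only.

```python
import math


def guide_frame_for_target(target_x, target_y, target_w, target_h, grid_w, grid_h):
    """Smallest guide frame (in whole grids) that contains the target frame.

    (target_x, target_y) is the top-left corner of the target frame in
    wide-image coordinates; grid_w and grid_h give the size of one grid,
    i.e. the image area covered by one telephoto shot. Calling this again
    after the user drags the target frame moves the guide frame with it.
    """
    cols = math.ceil(target_w / grid_w)
    rows = math.ceil(target_h / grid_h)
    # Center the grid block on the target frame so any extra margin is split evenly.
    guide_x = target_x - (cols * grid_w - target_w) / 2
    guide_y = target_y - (rows * grid_h - target_h) / 2
    return guide_x, guide_y, cols, rows


# Dragging the target frame 200 px to the right shifts the guide frame by the same amount.
print(guide_frame_for_target(1000, 600, 1000, 750, 400, 300))  # -> (900.0, 525.0, 3, 3)
print(guide_frame_for_target(1200, 600, 1000, 750, 400, 300))  # -> (1100.0, 525.0, 3, 3)
```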
  • after acquiring the target zoom ratio, the mobile phone briefly displays the target frame on the preview interface to prompt the user with the size of the image area corresponding to the field of view of the current target zoom ratio, so that the user can check whether the current target zoom ratio is appropriate, and then stops displaying the target frame.
  • the mobile phone can record the position and size of the target frame. After the mobile phone determines that the shooting is completed, the stitched image can be cropped according to the recorded position and size of the target frame, thereby obtaining the target image.
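  • As a hedged sketch of the cropping step just described, assuming the stitched image is held as a NumPy array registered to the same pixel coordinate system in which the target frame's position and size were recorded; the variable names are assumptions.

```python
import numpy as np


def crop_to_target_frame(stitched: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Crop the stitched image to the recorded target-frame rectangle.

    (x, y) is the recorded top-left corner of the target frame and (w, h) its
    recorded size. When the guide frame and the target frame have the same
    size, the crop covers the whole stitched image and is effectively a no-op.
    """
    return stitched[y:y + h, x:x + w].copy()


# Example with a dummy stitched image.
stitched = np.zeros((3000, 4000, 3), dtype=np.uint8)
target = crop_to_target_frame(stitched, x=800, y=600, w=2400, h=1800)
print(target.shape)  # -> (1800, 2400, 3)
```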
  • after detecting the user's photographing operation, the mobile phone displays a wide-angle image on the photographing interface, and a guide frame superimposed on the wide-angle image, where the guide frame corresponds to the target zoom magnification.
  • the phone can display a wide-angle image and a guide frame superimposed on the wide-angle image.
  • the mobile phone can also display a telephoto frame on the shooting interface.
  • the mobile phone can also display a target frame on the shooting interface.
  • the mobile phone can temporarily or continuously prompt the user, by displaying information, voice broadcast, or the like, to shoot according to the guide frame during the photographing process, so that the stitched image obtained by photographing can be cropped to get the target image that matches the target zoom ratio.
  • the mobile phone displays the captured complete wide-angle image on the preview interface and the shooting interface.
  • the mobile phone can replace the complete wide-angle image displayed on the preview interface and the shooting interface with the target area image corresponding to the guide frame on the wide-angle image.
  • the mobile phone generates a stitched image according to the acquired target telephoto image, and displays a thumbnail of the stitched image on the shooting interface.
  • for the relevant description of step 2404, reference may be made to the description of step 203, which will not be repeated here.
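  • The embodiment describes its own stitching pipeline in the steps referenced above; purely as a generic illustration of combining the acquired target telephoto images into one stitched image, OpenCV's high-level stitcher could be used as below. The file names and the choice of SCANS mode are assumptions, not part of the embodiment.

```python
import cv2

# Load the target telephoto images matched to the grids (paths are hypothetical placeholders).
telephoto_shots = [cv2.imread(f"tele_{i}.jpg") for i in range(9)]

# High-level stitching; the embodiment's own registration and fusion may differ.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, stitched = stitcher.stitch(telephoto_shots)
if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.jpg", stitched)
else:
    print("stitching failed, status:", status)
```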
  • a thumbnail image of the target telephoto image 1 as well as a wide-angle image, a guide frame, and a telephoto frame are displayed on the shooting interface.
  • the stitched image thumbnails, as well as a wide-angle image, a guide frame, and a telephoto frame are displayed on the shooting interface.
  • the stitched image is cropped to generate a target image.
  • the mobile phone can also perform processing such as ghost removal, dynamic range enhancement or hole filling on the stitched image before generating the target image, so as to improve the quality of the stitched image and then generate the target image from the processed stitched image, thereby improving the quality of the target image; this will not be repeated here.
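  • The embodiment does not fix how the hole filling is performed; as one hedged illustration only, OpenCV inpainting could fill regions of the stitched image that no telephoto shot covered, assuming such regions are available as a binary mask.

```python
import cv2
import numpy as np


def fill_holes(stitched_bgr: np.ndarray, hole_mask: np.ndarray) -> np.ndarray:
    """Fill uncovered regions of the stitched image by inpainting.

    `hole_mask` is a uint8 mask whose non-zero pixels mark areas that no
    telephoto shot covered; this is only one possible realisation of the
    hole filling mentioned above.
    """
    # Positional arguments: source image, mask, inpaint radius, method flag.
    return cv2.inpaint(stitched_bgr, hole_mask, 3, cv2.INPAINT_TELEA)


# Tiny synthetic example.
stitched = np.full((100, 100, 3), 128, dtype=np.uint8)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 255  # pretend this patch was not covered by any telephoto shot
print(fill_holes(stitched, mask).shape)  # -> (100, 100, 3)
```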
  • the mobile phone determines that the shooting ends. For example, in the first case, after the grids in the guide frame are all photographed, the mobile phone automatically ends the photographing, and the stitched image is cropped to the size of the target frame to generate the target image. Wherein, when the size of the guide frame is the same as the size of the target frame, the mobile phone does not need to crop the stitched image, but can directly generate the target image according to the stitched image. Exemplarily, in the case shown in (c) of FIG. 28 , the target image generated by the mobile phone may refer to (d) of FIG. 28 .
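  • A minimal sketch, with hypothetical data structures, of the "all grids photographed, end automatically" condition described in the first case above; how the matches are tracked on the real device is not specified here.

```python
def shooting_finished(matched_grids: set, rows: int, cols: int) -> bool:
    """Return True once every grid of the guide frame has a matched telephoto shot.

    `matched_grids` holds (row, col) indices of grids for which a target
    telephoto image has already been matched; the phone can end the shooting
    automatically as soon as the set covers the whole rows x cols guide frame.
    """
    return len(matched_grids) >= rows * cols


matched = {(r, c) for r in range(3) for c in range(3)}
print(shooting_finished(matched, rows=3, cols=3))  # -> True: shooting ends automatically
```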
  • the mobile phone determines that the photographing is ended after detecting the user's operation of stopping photographing.
  • the mobile phone crops and enlarges the reference wide-angle image or the wide-angle image, thereby generating an image of the target zoom magnification through digital zooming. If the target frame is not displayed on the shooting interface, after the mobile phone determines that the shooting is over, the position and size of the target frame can be determined according to the target zoom magnification, so that the target image can be obtained by cropping the reference wide-angle image or wide-angle image according to the target frame.
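  • A sketch of the digital-zoom fallback described in this item, assuming the wide-angle image is available as a NumPy array and using OpenCV for resizing; the crop is assumed centered, which only holds when the target frame was not moved.

```python
import cv2
import numpy as np


def digital_zoom(wide: np.ndarray, target_zoom: float, wide_zoom: float = 1.0) -> np.ndarray:
    """Crop the center of the wide-angle image and enlarge it back to full size.

    The crop window is the full frame scaled by wide_zoom / target_zoom, i.e.
    the image area matching the target zoom field of view.
    """
    h, w = wide.shape[:2]
    scale = wide_zoom / target_zoom
    cw, ch = int(w * scale), int(h * scale)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    crop = wide[y0:y0 + ch, x0:x0 + cw]
    # Upsample the crop back to the original resolution (digital zoom).
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_CUBIC)


wide = np.zeros((3000, 4000, 3), dtype=np.uint8)
print(digital_zoom(wide, target_zoom=2.5).shape)  # -> (3000, 4000, 3)
```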
  • the mobile phone can determine the position and size of the image area corresponding to the field of view of the target zoom magnification after the shooting is completed, so that the stitched image can be cropped according to the position and size of that image area to obtain the target image.
  • the mobile phone can determine the position and size of the target frame according to the target zoom ratio after the shooting is completed, so as to obtain the target image by cropping the stitched image according to the position and size of the target frame.
  • the mobile phone can determine the position and size of the target frame according to the target zoom ratio, and record the position and size of the target frame, so that after the shooting, according to the The position and size of the target frame are used to crop the stitched image to obtain the target image.
  • the position and size of the image area corresponding to the field of view of the target zoom magnification are the position and size of the target frame.
  • in the solution corresponding to the above-mentioned first case, the target image is obtained by stitching and cropping telephoto images, which have a higher resolution and a clearer, smaller field of view.
  • the resolution and clarity of the target image are also higher, the image quality is better, and the zoom effect of optical zoom can be realized.
  • the solutions corresponding to the above-mentioned first case in the embodiment of the present application can make the definition of the entire target image higher, and achieve the zoom effect of optical zoom.
  • the telephoto image collected in real time is not displayed in the telephoto frame, but is only used to indicate the real-time shooting range corresponding to the telephoto camera.
  • a down-sampling image of the telephoto image collected by the telephoto camera in real time is displayed in the telephoto frame, so as to present the telephoto image collected by the telephoto camera in real time to the user.
  • the position of the telephoto image corresponds to the position of the same content on the wide-angle image.
  • a telephoto image captured in real time by the telephoto camera is displayed in the telephoto frame.
  • the telephoto frame is located at a preset position on the interface, for example, at the lower left corner or the lower right corner of the interface.
  • the telephoto frame 2900 is located at the lower left corner of the interface.
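  • As a hedged sketch of positioning the telephoto frame over the wide-angle preview, assuming the center of the telephoto camera's current shooting range is already known in wide-image coordinates (how it is obtained, e.g. by registration, is not detailed here): the frame's size is the wide image scaled by the quotient of the zoom ratios.

```python
def telephoto_frame_rect(wide_w, wide_h, center_x, center_y, wide_zoom=1.0, tele_zoom=5.0):
    """Rectangle of the telephoto camera's current field of view on the wide-angle image.

    The telephoto field of view spans wide_zoom / tele_zoom of the wide image
    in each dimension (1/5 for a 1X wide-angle and a 5X telephoto camera);
    (center_x, center_y) is where the same content sits on the wide image.
    """
    scale = wide_zoom / tele_zoom
    w, h = wide_w * scale, wide_h * scale
    return center_x - w / 2, center_y - h / 2, w, h


# A 5X telephoto view centered on the middle of a 4000 x 3000 wide-angle image.
print(telephoto_frame_rect(4000, 3000, 2000, 1500))  # -> (1600.0, 1200.0, 800.0, 600.0)
```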
  • the shooting method provided in the above embodiment can still be used to obtain the target image, which is not repeated in this embodiment of the present application.
  • the above description takes the electronic device as a mobile phone as an example.
  • if the electronic device is another device such as a tablet computer or a smart watch, the target image can still be obtained by using the shooting method provided in the above embodiment, which is not repeated in this embodiment of the present application.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • in conjunction with the algorithm steps of each example described in the embodiments disclosed herein, the present application can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the electronic device can be divided into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • Embodiments of the present application further provide an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors and are used for storing computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the above-mentioned related method steps to implement the shooting method in the above-mentioned embodiment.
  • An embodiment of the present application also provides an electronic device, as shown in FIG. 30, including: a display screen (or screen) 3001, one or more processors 3002, multiple cameras 3003, a memory 3004, and one or more computer programs 3005.
  • the above-mentioned devices may be connected through one or more communication buses 3006.
  • the one or more computer programs 3005 are stored in the aforementioned memory 3004 and configured to be executed by the one or more processors 3002; the one or more computer programs 3005 comprise instructions that may be used to perform the steps in the foregoing embodiments.
  • the processor 3002 may be the processor 110 shown in FIG. 1
  • the memory 3004 may be the internal memory 121 shown in FIG. 1
  • the camera 3003 may be the camera 193 shown in FIG. 1
  • the display screen 3001 may specifically be the display screen 194 shown in FIG. 1 .
  • Embodiments of the present application further provide a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium, and when the computer instructions are executed on an electronic device, the electronic device executes the above-mentioned related method steps to realize the shooting method in the above-mentioned embodiments.
  • Embodiments of the present application also provide a computer program product, which, when the computer program product runs on a computer, causes the computer to execute the above-mentioned relevant steps, so as to realize the photographing method executed by the electronic device in the above-mentioned embodiment.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module, and the apparatus may include a processor and a memory that are connected; wherein the memory is used for storing computer-executable instructions, and when the apparatus is running, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the photographing method executed by the electronic device in the foregoing method embodiments.
  • the electronic device, computer-readable storage medium, computer program product or chip provided in this embodiment are all used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be implemented through some interfaces, or may be an indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place, or may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present invention relate to the technical field of electronics, and provide an image capturing method and device. In the method, with reference to an image collected by a first camera having a larger field of view, a second camera having a smaller field of view is used to capture images, which are stitched to obtain a target image having a larger field of view; the target image has high definition, clear details, and a good shooting effect. The solution comprises the following steps: an electronic device starts a photographing function; after a photographing operation of a user is detected, a first image and a guide frame are displayed on a shooting interface, the first image being obtained according to an image collected by the first camera, the guide frame comprising a plurality of grids, and a single grid corresponding to the size of the field of view of the second camera; stitching information is displayed on the shooting interface, the stitching information being used to indicate the shooting progress; a stitched image is generated according to a plurality of acquired frames of target captured images; and a target image is generated according to the stitched image after the shooting is completed. The embodiments of the present invention are used for image capturing.
PCT/CN2021/109922 2020-07-31 2021-07-30 Procédé et dispositif de capture d'image WO2022022726A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010757119.9 2020-07-31
CN202010757119 2020-07-31
CN202011296335.4 2020-11-18
CN202011296335.4A CN114071009B (zh) 2020-07-31 2020-11-18 一种拍摄方法及设备

Publications (1)

Publication Number Publication Date
WO2022022726A1 true WO2022022726A1 (fr) 2022-02-03

Family

ID=80037668

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109922 WO2022022726A1 (fr) 2020-07-31 2021-07-30 Procédé et dispositif de capture d'image

Country Status (1)

Country Link
WO (1) WO2022022726A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040003409A1 (en) * 2002-06-27 2004-01-01 International Business Machines Corporation Rendering system and method for images having differing foveal area and peripheral view area resolutions
CN103685945A (zh) * 2013-11-28 2014-03-26 宇龙计算机通信科技(深圳)有限公司 全景拍照的方法及其移动终端
CN107948394A (zh) * 2016-10-12 2018-04-20 Lg 电子株式会社 移动终端
CN107749944A (zh) * 2017-09-22 2018-03-02 华勤通讯技术有限公司 一种拍摄方法及装置
CN109559280A (zh) * 2018-12-19 2019-04-02 维沃移动通信有限公司 一种图像处理方法及终端
CN111010510A (zh) * 2019-12-10 2020-04-14 维沃移动通信有限公司 一种拍摄控制方法、装置及电子设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449174A (zh) * 2022-02-28 2022-05-06 维沃移动通信有限公司 拍摄方法、装置和电子设备
CN114511595A (zh) * 2022-04-19 2022-05-17 浙江宇视科技有限公司 多模态协同和融合的目标跟踪方法、装置及系统、介质
CN115314636A (zh) * 2022-08-03 2022-11-08 天津华来科技股份有限公司 基于摄像头的多路视频流处理方法和系统
CN115314636B (zh) * 2022-08-03 2024-06-07 天津华来科技股份有限公司 基于摄像头的多路视频流处理方法和系统
CN115171200A (zh) * 2022-09-08 2022-10-11 深圳市维海德技术股份有限公司 基于变倍的目标跟踪特写方法、装置、电子设备及介质
CN115171200B (zh) * 2022-09-08 2023-01-31 深圳市维海德技术股份有限公司 基于变倍的目标跟踪特写方法、装置、电子设备及介质
CN116320761A (zh) * 2023-03-13 2023-06-23 北京城市网邻信息技术有限公司 图像采集方法、装置、设备和存储介质

Similar Documents

Publication Publication Date Title
WO2022022715A1 (fr) Procédé et dispositif photographique
WO2022022726A1 (fr) Procédé et dispositif de capture d'image
EP2779628B1 (fr) Procédé et dispositif de traitement d'images
KR102045957B1 (ko) 휴대단말의 촬영 방법 및 장치
CN114071010B (zh) 一种拍摄方法及设备
WO2021223500A1 (fr) Procédé et dispositif photographique
WO2017088678A1 (fr) Appareil et procédé de prise d'image panoramique à exposition prolongée
US9195880B1 (en) Interactive viewer for image stacks
US20050276596A1 (en) Picture composition guide
US20100134641A1 (en) Image capturing device for high-resolution images and extended field-of-view images
US20130076941A1 (en) Systems And Methods For Editing Digital Photos Using Surrounding Context
US8525913B2 (en) Digital photographing apparatus, method of controlling the same, and computer-readable storage medium
WO2021244104A1 (fr) Procédé de photographie à laps de temps et dispositif
US20150138309A1 (en) Photographing device and stitching method of captured image
CN106791390B (zh) 广角自拍实时预览方法及用户终端
US8947558B2 (en) Digital photographing apparatus for multi-photography data and control method thereof
CN109089045A (zh) 一种基于多个摄像装置的摄像方法及设备及其终端
WO2021185374A1 (fr) Procédé de capture d'image et dispositif électronique
WO2021238317A1 (fr) Procédé et dispositif de capture d'image panoramique
WO2018196854A1 (fr) Procédé de photographie, appareil de photographie et terminal mobile
CN114071009B (zh) 一种拍摄方法及设备
US20230353864A1 (en) Photographing method and apparatus for intelligent framing recommendation
JP2011217275A (ja) 電子機器
CN114979458B (zh) 一种图像的拍摄方法及电子设备
JP2011193066A (ja) 撮像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21848872

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21848872

Country of ref document: EP

Kind code of ref document: A1