CN114466101B - Display method and electronic equipment - Google Patents


Info

Publication number
CN114466101B
Authority
CN
China
Prior art keywords
picture
target image
target
interface
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110753284.1A
Other languages
Chinese (zh)
Other versions
CN114466101A (en)
Inventor
韩笑 (Han Xiao)
暴文莹 (Bao Wenying)
冯文瀚 (Feng Wenhan)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110753284.1A
Publication of CN114466101A
Application granted
Publication of CN114466101B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Abstract

The present application provides a display method and an electronic device. The method includes the following steps: the electronic device selects a target image frame from a plurality of image frames of a video shot in a multi-view mode, and selects a target picture from a plurality of pictures in the target image frame. The electronic device may then generate a target image based on the selected target picture and display the target image in a target interface. Because the target image is generated from a single target picture of the multi-view image, the integrity and attractiveness of the target image displayed in the target interface are ensured.

Description

Display method and electronic equipment
Technical Field
The present application relates to the field of terminal devices, and in particular, to a display method and an electronic device.
Background
As the shooting functions of terminals grow richer, users shoot with their terminals in more and more scenes. Current terminals can shoot in different modes, such as a multi-view mode or a picture-in-picture (PiP) mode. However, when a video shot in the multi-view mode or the PiP mode is displayed in a terminal interface as a thumbnail, a cover, or the like, each picture in the image may be partially cropped, so that the pictures appear fragmented, which cannot meet the user's requirement for an attractive interface display.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a display method and an electronic device. In the method, the electronic device may display, in an interface, an image generated based on a target picture selected from a plurality of pictures in an image frame. This avoids the unattractive display that results when the pictures are partially cropped.
In a first aspect, the present application provides an electronic device. The electronic device includes a memory and a processor, the memory coupled with the processor. The memory stores program instructions that, when executed by the processor, cause the electronic device to perform the following steps. The electronic device starts recording in the multi-view mode in response to a received user operation. The electronic device acquires a video recorded in the multi-view mode, where the video includes a plurality of image frames, and each image frame includes a plurality of pictures collected by a plurality of cameras of the electronic device. The electronic device selects a target image frame from the plurality of image frames, selects a target picture from the plurality of pictures in the target image frame, generates a target image based on the target picture, and displays the target image on a target interface. In this way, when generating the target image from a video shot in the multi-view mode, the electronic device generates it based on one of the plurality of pictures in the image frame, that is, the target picture. This avoids partial cropping of the pictures, improves the completeness of picture display, and makes the target image displayed on the interface more attractive, which meets the user's requirement for an attractive interface display and improves the user experience.
Illustratively, the target picture in the present application may be a preselected picture.
According to the first aspect, the program instructions, when executed by the processor, cause the electronic device to perform the following step: selecting the target picture according to the content of each of the plurality of pictures. In this way, the electronic device can select the picture used to generate the target image based on the content of each picture. That is, the electronic device may select a more appropriate picture as the source of the target image for different shooting scenes and different shooting modes.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: detecting the number of persons included in the plurality of pictures; and if the number of persons included in the plurality of pictures is detected to be zero, selecting the target picture according to the shooting parameters of each of the plurality of pictures. In this way, the electronic device can use the persons appearing in the pictures as the basis for selecting the target picture. For example, when no person is included in any picture, the electronic device may instead select the target picture based on the shooting parameters of each picture.
Illustratively, the shooting parameters may be camera parameters. When the shooting parameters of the pictures are identical, the electronic device may further select the target picture based on a set rule.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: if the number of persons included in the plurality of pictures is M, detecting whether the M persons include the owner of the electronic device, where M is an integer greater than 1; if the M persons are detected to include the owner of the electronic device, determining the picture that includes the owner as the target picture; and if the M persons are detected not to include the owner, selecting the target picture according to the shooting parameters of each of the plurality of pictures. In this way, the electronic device can use the persons appearing in the pictures as the basis for selecting the target picture. For example, when a plurality of persons appear in the pictures, the electronic device may preferentially select the picture corresponding to the owner as the target picture. If the owner is not among the persons, the electronic device may instead select the target picture based on the shooting parameters.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: if the number of persons included in the plurality of pictures is detected to be one, determining the picture that contains the person as the target picture. In this way, the electronic device can use the persons appearing in the pictures as the basis for selecting the target picture. For example, if only one person appears in the pictures, the electronic device selects the picture that includes the person as the picture from which the target image is generated.
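Taken together, the person-based selection rules above can be sketched as a simple decision procedure. The sketch below is a minimal illustration rather than the claimed implementation: the `Picture` structure, the `person_ids` detection results, and the use of a single `sharpness` score standing in for the shooting parameters are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Picture:
    # Hypothetical per-picture metadata: detected person IDs and a
    # quality score standing in for the shooting parameters.
    person_ids: list = field(default_factory=list)
    sharpness: float = 0.0

def select_target_picture(pictures, owner_id):
    """Pick the picture used to generate the target image.

    Rules sketched from the text: zero persons -> best shooting
    parameters; one person -> the picture containing that person;
    M > 1 persons -> the picture containing the device owner if
    present, otherwise best shooting parameters.
    """
    total_persons = sum(len(p.person_ids) for p in pictures)
    if total_persons == 0:
        return max(pictures, key=lambda p: p.sharpness)
    if total_persons == 1:
        return next(p for p in pictures if p.person_ids)
    for p in pictures:
        if owner_id in p.person_ids:
            return p
    return max(pictures, key=lambda p: p.sharpness)

# Example: two pictures, the owner appears only in the second one.
front = Picture(person_ids=["guest"], sharpness=0.9)
rear = Picture(person_ids=["owner"], sharpness=0.5)
print(select_target_picture([front, rear], "owner") is rear)  # True
```

The owner check runs before any quality comparison, matching the text's stated preference for the owner's picture over the sharper one.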
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: cropping the target picture based on a set size to obtain the target image. In this way, by cropping the target picture, a target image matching the set size can be obtained. Illustratively, the set size may be the size of a thumbnail; for example, the ratio of length to height may be 1:1. The set size may also be the size of a cover; for example, the ratio of length to height may be 3:1.
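The cropping step can be illustrated with a centered crop to a set aspect ratio. This is a minimal sketch under the assumption that the set size is expressed as a length-to-height ratio and that the crop is centered; the text does not specify either detail.

```python
def center_crop_rect(width, height, target_ratio):
    """Return (x, y, w, h) of the largest centered crop of a picture
    whose length-to-height ratio equals target_ratio."""
    if width / height > target_ratio:
        # Picture is too wide: keep the full height, trim the sides.
        w = round(height * target_ratio)
        return ((width - w) // 2, 0, w, height)
    # Picture is too tall (or an exact match): keep the full width.
    h = round(width / target_ratio)
    return (0, (height - h) // 2, width, h)

# A 1920x1080 target picture cropped to a 1:1 thumbnail.
print(center_crop_rect(1920, 1080, 1.0))  # (420, 0, 1080, 1080)
```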
According to the first aspect, or any implementation of the first aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface. In this way, when the target image is displayed as a thumbnail in the gallery, each thumbnail generated from a video shot in the multi-view mode is complete and attractive.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: displaying the video in response to a received operation of the user clicking the target image.
Illustratively, when the electronic device receives the click operation of the user, it may display a preview interface of the video, or may directly play the video.
According to the first aspect, or any implementation of the first aspect above, the target interface is an album interface, and the target image is an album cover in the album interface. In this way, when the target image is displayed as a cover, each cover generated from a video shot in the multi-view mode is complete and attractive.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: displaying a multi-view identifier on the target image, where the multi-view identifier indicates that the video was shot in the multi-view mode. In this way, by displaying the multi-view identifier on the target image, the user can determine that the thumbnail or cover was generated from an image frame of a video shot in the multi-view mode.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: in response to a received operation of the user clicking the multi-view identifier, displaying, at the position of the target image, the pictures other than the target picture among the plurality of pictures. In this way, the user can switch the displayed picture by triggering the multi-view identifier, and thus more conveniently view the picture information that is not shown in the cover or thumbnail. This improves the completeness of the displayed information while keeping the interface attractive.
According to the first aspect, or any implementation of the first aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1. Illustratively, N may be 20; that is, the electronic device selects one image frame from the first 20 image frames. For example, the electronic device may select the image frame with the best image quality among the first 20 image frames as the target image frame, thereby avoiding a blurred thumbnail or cover.
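The frame-selection rule above can be sketched as follows. The `quality` scoring function is an assumption for illustration; the text only states that the frame with the best image quality among the first N frames is chosen, without fixing a quality metric.

```python
def select_target_frame(frames, quality, n=20):
    """Pick the target image frame: the highest-quality frame among
    the first n frames of the video (n = 20 in the example above)."""
    if not frames:
        raise ValueError("video has no frames")
    return max(frames[:n], key=quality)

# Example with a stand-in per-frame quality score.
scores = {"f0": 0.31, "f1": 0.87, "f2": 0.55}
print(select_target_frame(list(scores), scores.get))  # f1
```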
In a second aspect, the present application provides an electronic device. The electronic device includes a memory and a processor, the memory coupled with the processor. The memory stores program instructions that, when executed by the processor, cause the electronic device to perform the following steps: in response to a received first user operation, displaying a picture-in-picture mode shooting interface, where the picture-in-picture mode shooting interface includes a first window and a second window, and the first window is displayed within the second window; a picture collected by a first camera of the electronic device is displayed in the first window, and a picture collected by a second camera of the electronic device is displayed in the second window; in response to a received second user operation, starting recording in the picture-in-picture mode; acquiring a video recorded in the picture-in-picture mode, where the video includes a plurality of image frames, and each image frame includes a first picture corresponding to the first window and a second picture corresponding to the second window; selecting a target image frame from the plurality of image frames; generating a target image based on the second picture in the target image frame; and displaying the target image on a target interface. In this way, when generating the target image from a video shot in the picture-in-picture mode, the electronic device generates it based on the large picture in the image frame, that is, the second picture corresponding to the large window.
This avoids partial cropping of the picture corresponding to the small window, improves the completeness of picture display, and makes the target image displayed on the interface more attractive, which meets the user's requirement for an attractive interface display and improves the user experience.
Illustratively, the first user operation may be an operation in which the user clicks a picture-in-picture option.
For example, the second user operation may be an operation in which the user clicks a shooting option.
Illustratively, the first window floats above the second window. The size of the first window is smaller than or equal to the size of the second window.
According to the second aspect, the program instructions, when executed by the processor, cause the electronic device to perform the following step: cropping the second picture based on a set size to obtain the target image. In this way, by cropping the second picture, a target image matching the set size can be obtained. Illustratively, the set size may be the size of a thumbnail; for example, the ratio of length to height may be 1:1. The set size may also be the size of a cover; for example, the ratio of length to height may be 3:1.
According to the second aspect, or any implementation of the second aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface. In this way, when the target image is displayed as a thumbnail in the gallery, each thumbnail generated from a video shot in the picture-in-picture mode is complete and attractive.
According to the second aspect, or any implementation of the second aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: displaying the video in response to a received operation of the user clicking the target image. Illustratively, when the electronic device receives the click operation of the user, it may display a preview interface of the video, or may directly play the video.
According to the second aspect, or any implementation of the second aspect above, the target interface is an album interface, and the target image is an album cover in the album interface. Thus, when the target image is displayed as a cover, each cover generated from a video shot in the picture-in-picture mode is complete and attractive.
According to the second aspect, or any implementation of the second aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: displaying a picture-in-picture identifier on the target image, where the picture-in-picture identifier indicates that the video was shot in the picture-in-picture mode. In this way, by displaying the picture-in-picture identifier on the target image, the user can determine that the thumbnail or cover was generated from an image frame of a video shot in the picture-in-picture mode.
According to the second aspect, or any implementation of the second aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: displaying the first picture at the position of the target image in response to a received operation of the user clicking the picture-in-picture identifier. In this way, the user can switch the displayed picture by triggering the picture-in-picture identifier, and thus more conveniently view the picture information that is not shown in the cover or thumbnail. This improves the completeness of the displayed information while keeping the interface attractive.
According to the second aspect, or any implementation of the second aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1. For example, the electronic device may select the image frame with the best image quality among the first 20 image frames as the target image frame, thereby avoiding a blurred thumbnail or cover.
In a third aspect, the present application provides an electronic device. The electronic device includes a memory and a processor, the memory coupled with the processor. The memory stores program instructions that, when executed by the processor, cause the electronic device to perform the following steps: in response to a received user operation, starting recording in a multi-view mode; acquiring a video recorded in the multi-view mode, where the video includes a plurality of image frames, and each image frame includes a plurality of pictures collected by a plurality of cameras of the electronic device; selecting a target image frame from the plurality of image frames; selecting a target picture from the plurality of pictures in the target image frame; detecting whether the size of the target image frame meets a preset condition; if the size of the target image frame is detected to meet the preset condition, generating a target image based on the target image frame; if the size of the target image frame is detected not to meet the preset condition, generating the target image based on the target picture; and displaying the target image on a target interface. In this way, the electronic device can determine, based on the size of the target image frame, whether to generate the target image from the whole image frame or from the target picture in the image frame. Thus, in different circumstances, the target image can be generated in different ways for display on the interface. Whether the target image is generated from the image frame or from the target picture, it conforms to the set size, so the integrity and attractiveness of the target image displayed on the interface can be ensured.
According to the third aspect, the preset condition includes: the difference between the size of the target image frame and the set size is less than or equal to a set threshold. In this way, based on the difference between the image frame size and the set size, the electronic device can determine whether a part of a picture in the image frame would be cropped if the target image were generated from the image frame. Based on the determination result, the electronic device may choose to generate the target image from the target image frame or from the target picture.
For example, if the difference between the size of the target image frame and the set size is greater than the set threshold, the electronic device may generate the target image based on the target picture.
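The size check of this aspect can be sketched as below. Treating "the difference between the sizes" as the larger of the width and height differences is an assumption made for the example; the text does not fix the metric.

```python
def choose_source(frame_size, set_size, threshold):
    """Return 'frame' when the target image should be generated from
    the whole target image frame, else 'picture' (the target picture)."""
    dw = abs(frame_size[0] - set_size[0])
    dh = abs(frame_size[1] - set_size[1])
    return "frame" if max(dw, dh) <= threshold else "picture"

# Frame already close to the set size: use the whole frame.
print(choose_source((1088, 1080), (1080, 1080), threshold=16))  # frame
# Frame far from the set size: fall back to the target picture.
print(choose_source((1920, 1080), (1080, 1080), threshold=16))  # picture
```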
According to the third aspect, the program instructions, when executed by the processor, cause the electronic device to perform the following step: selecting the target picture according to the content of each of the plurality of pictures. In this way, the electronic device can select the picture used to generate the target image based on the content of each picture. That is, the electronic device may select a more appropriate picture as the source of the target image for different shooting scenes and different shooting modes.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: detecting the number of persons included in the plurality of pictures; and if the number of persons included in the plurality of pictures is detected to be zero, selecting the target picture according to the shooting parameters of each of the plurality of pictures. In this way, the electronic device can use the persons appearing in the pictures as the basis for selecting the target picture. For example, when no person is included in any picture, the electronic device may instead select the target picture based on the shooting parameters of each picture.
Illustratively, the shooting parameters may be camera parameters. When the shooting parameters of the pictures are identical, the electronic device may further select the target picture based on a set rule.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: if the number of persons included in the plurality of pictures is M, detecting whether the M persons include the owner of the electronic device, where M is an integer greater than 1; if the M persons are detected to include the owner of the electronic device, determining the picture that includes the owner as the target picture; and if the M persons are detected not to include the owner, selecting the target picture according to the shooting parameters of each of the plurality of pictures. In this way, the electronic device can use the persons appearing in the pictures as the basis for selecting the target picture. For example, when a plurality of persons appear in the pictures, the electronic device may preferentially select the picture corresponding to the owner as the target picture. If the owner is not among the persons, the electronic device may instead select the target picture based on the shooting parameters.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: if the number of persons included in the plurality of pictures is detected to be one, determining the picture that contains the person as the target picture. In this way, the electronic device can use the persons appearing in the pictures as the basis for selecting the target picture. For example, if only one person appears in the pictures, the electronic device selects the picture that includes the person as the picture from which the target image is generated.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: cropping the target picture based on a set size to obtain the target image. In this way, by cropping the target picture, a target image matching the set size can be obtained. Illustratively, the set size may be the size of a thumbnail; for example, the ratio of length to height may be 1:1. The set size may also be the size of a cover; for example, the ratio of length to height may be 3:1.
According to the third aspect, or any implementation of the third aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface. In this way, when the target image is displayed as a thumbnail in the gallery, each thumbnail generated from a video shot in the multi-view mode is complete and attractive.
According to a third aspect, or any implementation form of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the steps of: and displaying the video in response to the received operation of clicking the target image by the user.
Illustratively, when the electronic device receives the click operation of the user, it may display a preview interface of the video, or may directly play the video.
According to the third aspect, or any implementation of the third aspect above, the target interface is an album interface, and the target image is an album cover in the album interface. In this way, when the target image is displayed as a cover, each cover generated from a video shot in the multi-view mode is complete and attractive.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: displaying a multi-view identifier on the target image, where the multi-view identifier indicates that the video was shot in the multi-view mode. In this way, by displaying the multi-view identifier on the target image, the user can determine that the thumbnail or cover was generated from an image frame of a video shot in the multi-view mode.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: in response to a received operation of the user clicking the multi-view identifier, displaying, at the position of the target image, the pictures other than the target picture among the plurality of pictures. In this way, the user can switch the displayed picture by triggering the multi-view identifier, and thus more conveniently view the picture information that is not shown in the cover or thumbnail. This improves the completeness of the displayed information while keeping the interface attractive.
According to the third aspect, or any implementation of the third aspect above, the program instructions, when executed by the processor, cause the electronic device to perform the following step: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1. Illustratively, N may be 20; that is, the electronic device selects one image frame from the first 20 image frames. For example, the electronic device may select the image frame with the best image quality among the first 20 image frames as the target image frame, thereby avoiding a blurred thumbnail or cover.
In a fourth aspect, the present application provides an electronic device. The electronic device includes a memory and a processor, the memory coupled with the processor. The memory stores program instructions that, when executed by the processor, cause the electronic device to perform the following steps: in response to a received first user operation, displaying a picture-in-picture mode shooting interface, where the picture-in-picture mode shooting interface includes a first window and a second window, and the first window is displayed within the second window; a picture collected by a first camera of the electronic device is displayed in the first window, and a picture collected by a second camera of the electronic device is displayed in the second window; in response to a received second user operation, starting recording in the picture-in-picture mode; acquiring a video recorded in the picture-in-picture mode, where the video includes a plurality of image frames, and each image frame includes a first picture corresponding to the first window and a second picture corresponding to the second window; selecting a target image frame from the plurality of image frames; detecting whether the size of the target image frame meets a preset condition; if the size of the target image frame is detected to meet the preset condition, generating a target image based on the target image frame; if the size of the target image frame is detected not to meet the preset condition, generating the target image based on the second picture in the target image frame; and displaying the target image on a target interface. In this way, the electronic device can determine, based on the size of the target image frame, whether to generate the target image from the whole image frame or from a picture in the image frame.
Thus, in different circumstances, the target image can be generated in different ways for display on the interface. Whether the target image is generated from the image frame or from the target picture, it conforms to the set size, so the integrity and attractiveness of the target image displayed on the interface can be ensured.
According to the fourth aspect, the preset condition includes: the difference between the size of the target image frame and a set size is less than or equal to a set threshold, and the first picture remains complete if the target image is generated based on the target image frame. In this way, the electronic device can further determine, from the difference between the size of the image frame and the set size, whether the small picture (i.e., the first picture) in the image frame would be partially cropped if the target image were generated from the whole image frame. Based on the determination result, the electronic device can choose to generate the target image from the target image frame or from the target picture.
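By way of illustration, the two-part preset-condition check described above can be sketched as follows. This is a hypothetical Python sketch, not code from the application: the set size, the threshold, the center-crop assumption, and all names (`SET_SIZE`, `SIZE_THRESHOLD`, `first_picture_rect`) are invented for the example.

```python
SET_SIZE = (512, 512)   # assumed target-image size (width, height)
SIZE_THRESHOLD = 64     # assumed tolerance in pixels

def first_picture_complete(frame_size, first_picture_rect, set_size):
    """Check whether the small (first) picture would survive a centered
    crop of the frame to set_size without being clipped."""
    fw, fh = frame_size
    tw, th = set_size
    left = (fw - tw) / 2   # centered crop window inside the frame
    top = (fh - th) / 2
    x, y, w, h = first_picture_rect
    return (x >= left and y >= top and
            x + w <= left + tw and y + h <= top + th)

def choose_source(frame_size, first_picture_rect):
    """Return 'frame' to generate the target image from the whole image
    frame, or 'second_picture' to generate it from the second picture."""
    fw, fh = frame_size
    tw, th = SET_SIZE
    close_enough = (abs(fw - tw) <= SIZE_THRESHOLD and
                    abs(fh - th) <= SIZE_THRESHOLD)
    if close_enough and first_picture_complete(frame_size,
                                               first_picture_rect, SET_SIZE):
        return "frame"
    return "second_picture"
```

Only when both sub-conditions hold does the sketch keep the whole frame; otherwise it falls back to the second picture, mirroring the two branches of the fourth aspect.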
In a fifth aspect, the present application provides a display method. The method includes: the electronic device starts recording in a multi-scene mode in response to a received user operation; the electronic device acquires a video recorded in the multi-scene mode, where the video includes a plurality of image frames; each of the plurality of image frames includes a plurality of pictures captured by a plurality of cameras of the electronic device; the electronic device selects a target image frame from the plurality of image frames; the electronic device selects a target picture from the plurality of pictures in the target image frame; the electronic device generates a target image based on the target picture; and the electronic device displays the target image on a target interface.
According to the fifth aspect, the electronic device selecting a target picture from the plurality of pictures in the target image frame includes: the electronic device selects the target picture according to the content of each of the plurality of pictures.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the electronic device selecting the target picture according to the content of each of the plurality of pictures includes: the electronic device detects the number of persons contained in the plurality of pictures; if the number of persons contained in the plurality of pictures is detected to be zero, the electronic device selects the target picture according to the shooting parameters of each of the plurality of pictures.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the electronic device selecting the target picture according to the content of each of the plurality of pictures includes: if the number of persons contained in the plurality of pictures is M, the electronic device detects whether the M persons include the owner of the electronic device, where M is an integer greater than 1; if the M persons include the owner of the electronic device, the electronic device determines that a picture containing the owner of the electronic device is the target picture; if it is detected that the M persons do not include the owner of the electronic device, the electronic device selects the target picture according to the shooting parameters of each of the plurality of pictures.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the electronic device selecting the target picture according to the content of each of the plurality of pictures includes: if the number of persons contained in the plurality of pictures is detected to be one, the electronic device determines that the picture containing that person is the target picture.
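Taken together, the three person-count branches above amount to a simple selection routine. The following Python sketch is purely illustrative; the picture representation (a dict with a `persons` list and a `sharpness` shooting parameter) and the `owner` label are assumptions, not structures from the application.

```python
def select_target_picture(pictures, owner="owner"):
    """Select the target picture from the pictures of one image frame,
    following the person-count rules: zero persons -> best shooting
    parameters; one person -> the picture containing that person;
    several persons -> prefer the picture containing the device owner,
    else fall back to shooting parameters."""
    with_people = [p for p in pictures if p["persons"]]
    total_people = sum(len(p["persons"]) for p in pictures)

    if total_people == 0:
        # No people: fall back to shooting parameters (here: sharpness).
        return max(pictures, key=lambda p: p["sharpness"])
    if total_people == 1:
        # Exactly one person: pick the picture containing that person.
        return with_people[0]
    # Several people: prefer the picture containing the device owner.
    for p in pictures:
        if owner in p["persons"]:
            return p
    # Owner absent: fall back to shooting parameters again.
    return max(pictures, key=lambda p: p["sharpness"])
```

Sharpness stands in here for whatever shooting parameters the device actually compares; any quality metric could be substituted without changing the branch structure.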
According to the fifth aspect, or any implementation manner of the fifth aspect above, the electronic device generating a target image based on the target picture includes: the electronic device crops the target picture based on a set size to obtain the target image.
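One plausible way to realize such size-based cropping is to take a centered crop whose aspect ratio matches the set size and then scale it. The helper below only computes the crop box; it is an illustrative sketch under those assumptions, not the application's actual algorithm.

```python
def crop_box_for(set_size, picture_size):
    """Return a centered (left, top, right, bottom) crop box on the
    picture whose aspect ratio matches set_size; scaling that region
    to set_size then yields the target image."""
    tw, th = set_size
    pw, ph = picture_size
    target_ratio = tw / th
    if pw / ph > target_ratio:
        # Picture is wider than the target: trim the sides.
        cw, ch = round(ph * target_ratio), ph
    else:
        # Picture is taller than (or matches) the target: trim top/bottom.
        cw, ch = pw, round(pw / target_ratio)
    left = (pw - cw) // 2
    top = (ph - ch) // 2
    return (left, top, left + cw, top + ch)
```

For example, a 1080x1920 portrait picture cropped for a square thumbnail keeps the central 1080x1080 region, which matches the intuition that cropping discards the edges farthest from the center.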
According to the fifth aspect, or any implementation manner of the fifth aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the method further includes: in response to a received operation of the user clicking the target image, the electronic device displays the video.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the target interface is an album interface, and the target image is an album cover in the album interface.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the method further includes: the electronic device displays a multi-scene identifier on the target image, where the multi-scene identifier indicates that the video was captured in the multi-scene mode.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the method further includes: in response to a received operation of the user clicking the multi-scene identifier, the electronic device displays, at the position of the target image, the pictures other than the target picture among the plurality of pictures.
According to the fifth aspect, or any implementation manner of the fifth aspect above, the electronic device selecting a target image frame from the plurality of image frames includes: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1.
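The frame-selection step admits the two strategies just listed. A minimal Python sketch follows, assuming that the value of N and the choice among the first N frames are implementation details not fixed by the application:

```python
import random

def select_target_frame(frames, n=1, rng=random):
    """Select the target image frame: the first frame when n == 1,
    otherwise any one of the first n frames of the video."""
    if not frames:
        raise ValueError("video contains no image frames")
    if n <= 1:
        return frames[0]
    return rng.choice(frames[:n])
```

Injecting `rng` keeps the second strategy testable: a deterministic stand-in can replace `random` in tests while production code uses the default.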
The fifth aspect and any implementation manner of the fifth aspect correspond to the first aspect and any implementation manner of the first aspect, respectively. For the technical effects corresponding to the fifth aspect and any implementation manner of the fifth aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not repeated here.
In a sixth aspect, the present application provides a display method. The method includes: the electronic device displays a picture-in-picture mode shooting interface in response to a received first user operation, where the picture-in-picture mode shooting interface includes a first window and a second window, and the first window is inside the second window; a picture captured by a first camera of the electronic device is displayed in the first window, and a picture captured by a second camera of the electronic device is displayed in the second window; the electronic device starts recording in the picture-in-picture mode in response to a received second user operation; the electronic device acquires a video recorded in the picture-in-picture mode, where the video includes a plurality of image frames; each of the plurality of image frames includes a first picture and a second picture, the first picture corresponding to the first window and the second picture corresponding to the second window; the electronic device selects a target image frame from the plurality of image frames; the electronic device generates a target image based on the second picture in the target image frame; and the electronic device displays the target image on a target interface.
According to the sixth aspect, the electronic device generating a target image based on the second picture in the target image frame includes: the electronic device crops the second picture based on a set size to obtain the target image.
According to the sixth aspect, or any implementation manner of the sixth aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface.
According to the sixth aspect, or any implementation manner of the sixth aspect above, the method further includes: in response to a received operation of the user clicking the target image, the electronic device displays the video.
According to the sixth aspect, or any implementation manner of the sixth aspect above, the target interface is an album interface, and the target image is an album cover in the album interface.
According to the sixth aspect, or any implementation manner of the sixth aspect above, the method further includes: the electronic device displays a picture-in-picture identifier on the target image, where the picture-in-picture identifier indicates that the video was captured in the picture-in-picture mode.
According to the sixth aspect, or any implementation manner of the sixth aspect above, the method further includes: in response to a received operation of the user clicking the picture-in-picture identifier, the electronic device displays the first picture at the position of the target image.
According to the sixth aspect, or any implementation manner of the sixth aspect above, the electronic device selecting a target image frame from the plurality of image frames includes: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1.
The sixth aspect and any implementation manner of the sixth aspect correspond to the second aspect and any implementation manner of the second aspect, respectively. For the technical effects corresponding to the sixth aspect and any implementation manner of the sixth aspect, reference may be made to the technical effects corresponding to the second aspect and any implementation manner of the second aspect; details are not repeated here.
In a seventh aspect, the present application provides a display method. The method includes: the electronic device starts recording in a multi-scene mode in response to a received user operation; the electronic device acquires a video recorded in the multi-scene mode, where the video includes a plurality of image frames; each of the plurality of image frames includes a plurality of pictures captured by a plurality of cameras of the electronic device; the electronic device selects a target image frame from the plurality of image frames; the electronic device selects a target picture from the plurality of pictures in the target image frame; the electronic device detects whether the size of the target image frame meets a preset condition; if the size of the target image frame meets the preset condition, the electronic device generates a target image based on the target image frame; if the size of the target image frame does not meet the preset condition, the electronic device generates the target image based on the target picture; and the electronic device displays the target image on a target interface.
According to the seventh aspect, the preset condition includes: the difference between the size of the target image frame and a set size is less than or equal to a set threshold.
According to the seventh aspect, the electronic device selecting a target picture from the plurality of pictures in the target image frame includes: the electronic device selects the target picture according to the content of each of the plurality of pictures.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the electronic device selecting the target picture according to the content of each of the plurality of pictures includes: the electronic device detects the number of persons contained in the plurality of pictures; if the number of persons contained in the plurality of pictures is detected to be zero, the electronic device selects the target picture according to the shooting parameters of each of the plurality of pictures.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the electronic device selecting the target picture according to the content of each of the plurality of pictures includes: if the number of persons contained in the plurality of pictures is M, the electronic device detects whether the M persons include the owner of the electronic device, where M is an integer greater than 1; if the M persons include the owner of the electronic device, the electronic device determines that a picture containing the owner of the electronic device is the target picture; if it is detected that the M persons do not include the owner of the electronic device, the electronic device selects the target picture according to the shooting parameters of each of the plurality of pictures.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the electronic device selecting the target picture according to the content of each of the plurality of pictures includes: if the number of persons contained in the plurality of pictures is detected to be one, the electronic device determines that the picture containing that person is the target picture.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the electronic device generating a target image based on the target picture includes: the electronic device crops the target picture based on the set size to obtain the target image.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the method further includes: in response to a received operation of the user clicking the target image, the electronic device displays the video.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the target interface is an album interface, and the target image is an album cover in the album interface.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the method further includes: the electronic device displays a multi-scene identifier on the target image, where the multi-scene identifier indicates that the video was captured in the multi-scene mode.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the method further includes: in response to a received operation of the user clicking the multi-scene identifier, the electronic device displays, at the position of the target image, the pictures other than the target picture among the plurality of pictures.
According to the seventh aspect, or any implementation manner of the seventh aspect above, the electronic device selecting a target image frame from the plurality of image frames includes: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1.
The seventh aspect and any implementation manner of the seventh aspect correspond to the third aspect and any implementation manner of the third aspect, respectively. For the technical effects corresponding to the seventh aspect and any implementation manner of the seventh aspect, reference may be made to the technical effects corresponding to the third aspect and any implementation manner of the third aspect; details are not repeated here.
In an eighth aspect, the present application provides a display method. The method includes: the electronic device displays a picture-in-picture mode shooting interface in response to a received first user operation, where the picture-in-picture mode shooting interface includes a first window and a second window, and the first window is inside the second window; a picture captured by a first camera of the electronic device is displayed in the first window, and a picture captured by a second camera of the electronic device is displayed in the second window; the electronic device starts recording in the picture-in-picture mode in response to a received second user operation; the electronic device acquires a video recorded in the picture-in-picture mode, where the video includes a plurality of image frames; each of the plurality of image frames includes a first picture and a second picture, the first picture corresponding to the first window and the second picture corresponding to the second window; the electronic device selects a target image frame from the plurality of image frames; the electronic device detects whether the size of the target image frame meets a preset condition; if the size of the target image frame meets the preset condition, the electronic device generates a target image based on the target image frame; if the size of the target image frame does not meet the preset condition, the electronic device generates the target image based on the second picture in the target image frame; and the electronic device displays the target image on a target interface.
According to the eighth aspect, the preset condition includes: the difference between the size of the target image frame and a set size is less than or equal to a set threshold, and the first picture remains complete if the target image is generated based on the target image frame.
According to the eighth aspect, the electronic device generating the target image based on the second picture in the target image frame includes: the electronic device crops the second picture based on the set size to obtain the target image.
According to the eighth aspect, or any implementation manner of the eighth aspect above, the target interface is a gallery interface, and the target image is a thumbnail in the gallery interface.
According to the eighth aspect, or any implementation manner of the eighth aspect above, the method further includes: in response to a received operation of the user clicking the target image, the electronic device displays the video.
According to the eighth aspect, or any implementation manner of the eighth aspect above, the target interface is an album interface, and the target image is an album cover in the album interface.
According to the eighth aspect, or any implementation manner of the eighth aspect above, the method further includes: the electronic device displays a picture-in-picture identifier on the target image, where the picture-in-picture identifier indicates that the video was captured in the picture-in-picture mode.
According to the eighth aspect, or any implementation manner of the eighth aspect above, the method further includes: in response to a received operation of the user clicking the picture-in-picture identifier, the electronic device displays the first picture at the position of the target image.
According to the eighth aspect, or any implementation manner of the eighth aspect above, the electronic device selecting a target image frame from the plurality of image frames includes: selecting the first image frame in the video as the target image frame; or selecting any one of the first N image frames in the video as the target image frame, where N is an integer greater than 1.
The eighth aspect and any implementation manner of the eighth aspect correspond to the fourth aspect and any implementation manner of the fourth aspect, respectively. For the technical effects corresponding to the eighth aspect and any implementation manner of the eighth aspect, reference may be made to the technical effects corresponding to the fourth aspect and any implementation manner of the fourth aspect; details are not repeated here.
In a ninth aspect, the present application provides a chip. The chip includes one or more interface circuits and one or more processors. The interface circuit is configured to receive signals from a memory of an electronic device and send the signals to the processor, where the signals include computer instructions stored in the memory. When the computer instructions are executed by the processor, the electronic device is caused to perform the method of the fifth aspect or any possible implementation of the fifth aspect; or the method of the sixth aspect or any possible implementation of the sixth aspect; or the method of the seventh aspect or any possible implementation of the seventh aspect; or the method of the eighth aspect or any possible implementation of the eighth aspect.
In a tenth aspect, the present application provides a computer-readable medium storing a computer program. The computer program includes instructions for performing the method of the fifth aspect or any possible implementation of the fifth aspect; or the method of the sixth aspect or any possible implementation of the sixth aspect; or the method of the seventh aspect or any possible implementation of the seventh aspect; or the method of the eighth aspect or any possible implementation of the eighth aspect.
In an eleventh aspect, the present application provides a computer program. The computer program includes instructions for performing the method of the fifth aspect or any possible implementation of the fifth aspect; or the method of the sixth aspect or any possible implementation of the sixth aspect; or the method of the seventh aspect or any possible implementation of the seventh aspect; or the method of the eighth aspect or any possible implementation of the eighth aspect.
Drawings
Fig. 1 is a schematic diagram of the hardware configuration of an exemplary electronic device;
Fig. 2 is a schematic diagram of the hardware structure of an exemplary electronic device;
Fig. 3 is a schematic diagram of the software architecture of an exemplary electronic device;
Figs. 4a to 4e are schematic diagrams of exemplary shooting scenes;
Fig. 5 is a schematic diagram of an exemplary gallery display;
Fig. 6 is a schematic diagram of an exemplary album display;
Figs. 7a to 7f are schematic diagrams of exemplary generation of a thumbnail or album cover;
Fig. 8a is a schematic diagram of an exemplary process of selecting a preselected image;
Fig. 8b is a schematic diagram of an exemplary generated thumbnail;
Fig. 9a is a schematic diagram of an exemplary user interface;
Fig. 9b is a schematic diagram of an exemplary user interface;
Fig. 10 is a schematic diagram of an exemplary dual-scene shooting scene;
Fig. 11 is a schematic diagram of exemplary preselected-image selection;
Fig. 12 is a schematic diagram of an exemplary preset scale of a thumbnail or album cover;
Fig. 13a is a schematic diagram of exemplary detection of whether an image frame is close in size to a thumbnail;
Fig. 13b is a schematic diagram of an exemplary generated thumbnail;
Figs. 14a to 14b are schematic diagrams of exemplary generated album covers;
Figs. 15a to 15b are schematic diagrams of exemplary generated album covers;
Fig. 16 is a schematic diagram of an exemplary thumbnail display;
Fig. 17 is a schematic diagram of an exemplary dual-scene shooting scene;
Fig. 18 is a schematic diagram of exemplary preselected-image selection;
Fig. 19 is a schematic diagram of an exemplary dual-scene shooting scene;
Fig. 20 is a schematic diagram of exemplary preselected-image selection;
Fig. 21 is a schematic diagram of an exemplary dual-scene shooting scene;
Fig. 22 is a schematic diagram of exemplary preselected-image selection;
Fig. 23a is a schematic diagram of an exemplary multi-scene shooting scene;
Fig. 23b is a schematic diagram of exemplary preselected-image selection;
Fig. 24a is a schematic diagram of exemplary thumbnail generation;
Fig. 24b is a schematic diagram of an exemplary thumbnail display;
Fig. 25a is a schematic diagram of an exemplary picture-in-picture shooting scene;
Fig. 25b is a schematic diagram of exemplary preselected-image selection;
Fig. 26a is a schematic diagram of exemplary detection of whether an image frame is close in size to a thumbnail;
Fig. 26b is a schematic diagram of an exemplary generated thumbnail;
Figs. 27a to 27b are schematic diagrams of exemplary generated album covers;
Figs. 28a to 28b are schematic diagrams of exemplary generated album covers;
Fig. 29a is a schematic diagram of an exemplary image frame of similar size to an album cover;
Fig. 29b is a schematic diagram of an exemplary generated album cover;
Fig. 30 is a schematic diagram of an exemplary user interface;
Fig. 31 is a schematic diagram of the structure of an exemplary apparatus.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate that A exists alone, that both A and B exist, or that B exists alone.
The terms "first" and "second" and the like in the description and claims of the embodiments of the present application are used to distinguish between different objects, not to describe a particular order of the objects. For example, a first target object and a second target object are used to distinguish different target objects, not to describe a particular order of the target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; a plurality of systems refers to two or more systems.
Fig. 1 shows a schematic structural diagram of an electronic device 100. It should be understood that the electronic device 100 shown in fig. 1 is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits. The electronic device 100 may be a tablet, a mobile phone, a notebook, an in-vehicle device, or the like having a display screen.
The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be independent devices, or may be integrated in one or more processors.
The controller may be the neural center and command center of the electronic device 100. The controller may generate an operation control signal according to an instruction operation code and a timing signal, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated access, reduces the waiting time of the processor 110, and thereby improves system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Exemplarily, fig. 2 is a schematic structural diagram of an exemplary electronic device, taking a mobile phone as an example. Referring to fig. 2 (1), the front side (i.e. the display side) of the mobile phone may include a front camera 201. The front camera 201 may include one or more cameras. Alternatively, when the front camera 201 includes a plurality of front cameras, the pixel counts of the plurality of front cameras may be the same or different. Referring to fig. 2 (2), the back side of the mobile phone (i.e. the side opposite to the display screen) may include a rear camera 202. Illustratively, the rear camera 202 may include one or more rear cameras. Optionally, the rear camera 202 includes a plurality of rear cameras, as shown in fig. 2, whose pixel counts may be the same or different; the present application is not limited thereto. For example, the rear camera 202a may be a 16-megapixel wide-angle camera, the rear camera 202b may be a 50-megapixel ultra-sensitive camera, and the rear camera 202c may be an 8-megapixel telephoto camera.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. For example, in the embodiment of the present application, the processor 110 may execute instructions of the internal memory 121, so that the electronic device 100 can process the video to generate the corresponding thumbnail image or cover page in the manner in the embodiment of the present application. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it is possible to receive voice by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by uttering a voice signal close to the microphone 170C through the mouth of the user. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and perform directional recording.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a hierarchical architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 3 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers from top to bottom: an application layer, an application framework layer, an Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, camera, gallery, and the like. For example, a camera application may be used to process pictures or video captured by a camera. For example, a camera application may be used to perform the flow shown in FIG. 8 a. The gallery application may be used to manage a gallery. For example, a gallery application may be used to execute the flow shown in FIG. 8b to generate and display thumbnails, album covers, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a brief dwell without requiring user interaction, such as notifications of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the top status bar of the system, such as a notification of a background-running application, or in the form of a dialog window on the screen. For example, it may prompt text information in the status bar, sound a prompt tone, vibrate the electronic device, or flash an indicator light.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
It is to be understood that the components contained in the system framework layer, the system library and the runtime layer shown in fig. 3 do not constitute a specific limitation of the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
Fig. 4a to 4e are schematic diagrams of exemplary shooting scenes. In the embodiment of the present application, processing of a video is described as an example. In other embodiments, the method for generating the thumbnail and the album cover in the embodiment of the present application may also be applied to the processing of the still picture, and a description of the method is not repeated in the present application.
Illustratively, as shown in fig. 2, the mobile phone may be provided with a front camera and a rear camera. The shooting scene shown in fig. 4a is a dual-view shooting scene. Referring to fig. 4a (1), in this scenario, the front camera and the rear camera are both turned on. The front camera captures an image in its shooting area; for example, the captured image may be an image of a user who is taking a video with the mobile phone. The images captured by the rear camera are optionally displayed in the display window 402. The rear camera collects images in its shooting area. Optionally, as described above, the rear camera 202 may include a plurality of rear cameras, and the mobile phone may invoke different rear cameras to collect images according to user requirements. For example, as shown in (1) of fig. 4a, when the user selects wide-angle shooting, an image captured by the wide-angle camera is optionally displayed in the display window 401.
Referring to fig. 4a (2), in the dual-view scene, the captured images may both be images captured by the rear camera. For example, an image acquired by the rear camera at 2x zoom is optionally displayed in the display window 403, and the display window 404 optionally displays images captured by the wide-angle camera. It should be noted that the dual-view shooting shown in the embodiments of the present application is only an exemplary example. Images collected by any two lenses may be combined and displayed in the two display windows of dual-view shooting; the present application is not limited in this respect.
Fig. 4b is a schematic diagram of an exemplary picture-in-picture capture scene. Referring to fig. 4b (1), for example, the display window 405 optionally displays the image captured by the rear camera. A small window 406 (which may also be referred to as a small display window or a floating display window) optionally displays images captured by the wide angle camera. Referring to fig. 4b (2), for example, a large window 407 (which may also be referred to as a large display window) optionally displays an image captured by the rear camera. The small window 408 optionally displays images captured by the front facing camera. It should be noted that the scenario in fig. 4b is only a schematic example. The combination of images captured by any two lenses can be displayed in two windows in a picture-in-picture shooting scene, which is not limited in the present application. It should be further noted that the position and size of the small window are exemplary, and the application is not limited thereto.
Fig. 4c is a schematic diagram of an exemplary multi-shot scene. In the embodiment of the present application, a folding screen mobile phone is taken as an example to describe a multi-scene shooting scene. Referring to fig. 4c, the image acquired by the front camera is optionally displayed in the display window 409. The display window 410 optionally displays images captured by the wide-angle camera. The display window 411 optionally displays an image acquired by the rear camera at 2 times of the focal length. It should be noted that the scenario in fig. 4c is only a schematic example. The combination of images collected by any of the multiple lenses can be displayed in multiple windows in a multi-scene shooting scene, which is not limited in the present application. It should be further noted that the positions and sizes of the windows are exemplary, and the application is not limited thereto. For example, fig. 4d and 4e are schematic diagrams of exemplary multi-shot scenes. Referring to fig. 4d, a display window 412 optionally displays images captured by the wide-angle camera, for example. The display window 413 optionally displays images captured by the front camera. The display window 414 optionally displays images acquired by the rear camera at 2 times the focal length. Referring to fig. 4e, for example, the display window 415 optionally displays the image captured by the wide-angle camera. The display window 416 optionally displays images captured by the front facing camera. The display window 417 optionally displays images acquired by the rear camera at 2 times focal length.
For example, after the user finishes shooting a scene shown in any one of fig. 4a to 4e, the user may click a shooting button to finish shooting. The mobile phone responds to the received user operation and saves the acquired image. Illustratively, the user may view the captured images through the gallery of the mobile phone. FIG. 5 is an exemplary illustration of a gallery display. Referring to fig. 5 (1), for example, thumbnails of one or more photos or videos, such as thumbnail 502, may be displayed in the gallery interface 501. In fig. 5 (1), the thumbnail images are illustrated with an aspect ratio (i.e., length:height) of 1:1. In other embodiments, the ratio of the thumbnails in the gallery interface 501 may be other values; for example, the preset ratio (length:height) of the thumbnails may be 3:2. Of course, the shape of the thumbnail may also be other than the rectangle shown in (1) of fig. 5; for example, it may be circular or another polygon, and the application is not limited thereto.
Illustratively, thumbnails of images or videos in the gallery may be displayed not only in the manner shown in (1) of fig. 5, i.e., at a set ratio (e.g., 1:1), but also at the original ratio of each image or video. Referring to fig. 5 (2), the gallery interface 503 may illustratively display thumbnails of one or more images or videos, each displayed at the original ratio of the corresponding image or video. For example, the thumbnail 504 may have a ratio (length:height) of 3:2.
Fig. 6 is an exemplary illustration of an album display. Referring to fig. 6 (1), for example, when the user clicks the album option, the mobile phone may display an album interface 601 in response to the received user operation. The album interface 601 may include one or more album covers, and each album cover may correspond to an album. Taking the camera album cover 602 as an example, the camera album cover may be displayed according to a preset ratio (for example, 1:1). In the embodiment of the present application, the ratio at which each album cover is displayed in the album is the same, for example, 1:1. In other embodiments, the ratio of the album cover may also be set according to implementation requirements, and the present application is not limited.
Referring to fig. 6 (2), for example, each album cover displayed in the album interface 601 may be displayed at a preset ratio. For example, the preset ratio (length:height) of the camera album cover 603 may be 4:3. The above ratios are merely illustrative examples and are not intended to limit this application.
Fig. 7a to 7f are schematic diagrams illustrating generation of thumbnails or album covers. Referring to (1) of fig. 7a, for example, in the embodiment of the present application, the album cover of each album may be generated based on the most recently saved picture or video in the album. For example, referring to (2) of fig. 7a, take the cover of all photo albums as an example. Illustratively, the most recently saved item in all photo albums is a video file. The video comprises a plurality of image frames (for example, 180 image frames). In one example, the mobile phone may generate the cover of all photo albums based on the first image frame of the video. In another example, the mobile phone may select the image frame with the best image quality from the first 20 image frames (the number may be set according to actual requirements, and the selection method may refer to the prior art; neither is limited in this application), or, of course, select any one of the first 20 image frames. With reference to (1) of fig. 7a, in this embodiment, the mobile phone selects the image frame 701 with the best image quality from the first 20 image frames of the video. After selecting the image frame 701 for generating the cover, the mobile phone can crop the image frame 701 based on the preset ratio of all photo album covers. Referring to fig. 7a (2), the preset ratio (length:height) of all photo album covers is 3:2 in this example. Accordingly, the cell phone may crop the image frame 701 based on a crop box with a ratio of 3:2. Still referring to fig. 7a (1), for example, the cell phone may crop the image frame based on the cropping box 703.
Illustratively, the geometric center of the cropping frame 703 coincides with the geometric center of the image frame 701, and none of the four borders of the cropping frame 703 exceeds the four borders of the image frame 701, for example, the left and right borders of the cropping frame 703 overlap the left and right borders of the image frame 701, and the top and bottom borders of the cropping frame 703 are within the image frame 701. Referring to fig. 7a (2), for example, after the mobile phone scales down the cropped image (for example, down by 50%), an all photo album cover 702 is displayed at a designated position of the album interface.
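The centered-cropping rule described above (crop-box center coincides with the frame center, no border exceeds the frame) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function name and the integer rounding are assumptions.

```python
def centered_crop(frame_w, frame_h, ratio_w, ratio_h):
    """Largest crop box of aspect ratio ratio_w:ratio_h (length:height),
    centered on the frame and never exceeding its borders.
    Returns (left, top, crop_w, crop_h)."""
    # Try using the full frame width first.
    crop_w = frame_w
    crop_h = crop_w * ratio_h // ratio_w
    if crop_h > frame_h:
        # Crop would be taller than the frame: constrain by height instead.
        crop_h = frame_h
        crop_w = crop_h * ratio_w // ratio_h
    left = (frame_w - crop_w) // 2
    top = (frame_h - crop_h) // 2
    return left, top, crop_w, crop_h

# A portrait 1080x1920 frame cropped to 3:2 keeps the full width, matching the
# description above: left/right borders overlap the frame, top/bottom lie inside.
print(centered_crop(1080, 1920, 3, 2))  # → (0, 600, 1080, 720)
```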
Fig. 7b is a schematic diagram of exemplary generation of a thumbnail with a ratio of 1:1. Referring to (1) of fig. 7b, a video file is again taken as an example. When the mobile phone generates the thumbnail of the video file in the gallery, the mobile phone can select the image frame for generating the thumbnail from the video. As above, the first image frame may be selected, or the image frame with the best image quality among the first 20 image frames may be selected; this application is not limited. Illustratively, the image frame 701 is again selected by the mobile phone. The mobile phone crops the image frame according to a preset ratio (for example, 1:1). The cropping method is as described above and is not repeated here. Referring to fig. 7b (2), the mobile phone scales down the cropped image to generate a thumbnail 705.
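The crop-then-shrink step can be sketched as below. The target size and the rounding behaviour are assumptions for illustration only; the embodiment does not specify them.

```python
def shrink_to(crop_w, crop_h, target_long_side):
    """Scale a cropped region down so its longer side equals target_long_side,
    preserving the aspect ratio."""
    scale = target_long_side / max(crop_w, crop_h)
    return round(crop_w * scale), round(crop_h * scale)

# A 1:1 crop of 1080x1080 shrunk to a 256-pixel thumbnail:
print(shrink_to(1080, 1080, 256))  # → (256, 256)
```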
Fig. 7c is a schematic diagram illustrating an example of generating an album cover based on a double shot video. Referring to fig. 7c (1), the mobile phone can select the image frame 706 of the video for generating all photo album covers. The selection method can be referred to the above, and is not described herein.
The cell phone crops the image frame 706 based on a crop box 707 whose ratio is the same as the preset ratio 3:2 (length:height) of all photo album covers. Referring to fig. 7c (2), the mobile phone generates an all photo album cover 709 based on the cropped image. Referring to fig. 7c (3), the mobile phone selects the image frame 706 for generating a camera album cover. The cell phone may crop the image frame 706 based on a crop box 708 whose ratio is the same as the preset ratio 4:3 (length:height) of the camera album cover. Referring to (2) of fig. 7c, the mobile phone generates a camera album cover 710 based on the cropped image. For parts not described, refer to the related description of fig. 7a, which is not repeated here.
Fig. 7d is a schematic diagram illustrating exemplary generation of a thumbnail based on a dual-view video. Referring to fig. 7d (1), the phone can select the image frame 706 of the video for generating the thumbnail. The selection method can be referred to above and is not described here. The cell phone crops the image frame 706 based on a cropping frame 711 whose ratio is the same as the preset 1:1 (length:height) ratio of the thumbnail. Referring to fig. 7d (2), the mobile phone generates a thumbnail 712 based on the cropped image, as an example.
Fig. 7e is a diagram illustrating exemplary generation of an album cover based on a picture-in-picture captured video. Referring to fig. 7e (1), the mobile phone can select the image frame 713 of the video for generating all photo album covers. The selection method can be referred to above and is not described here. The phone crops the image frame 713 based on a crop box 714 whose ratio is the same as the preset ratio 3:2 (length:height) of all photo album covers. Referring to fig. 7e (2), the mobile phone generates an all photo album cover 715 based on the cropped image. Referring to fig. 7e (3), the mobile phone selects the image frame 713 for creating a camera album cover. The phone may crop the image frame 713 based on a crop box 716 whose ratio is the same as the preset ratio 4:3 (length:height) of the camera album cover. Referring to fig. 7e (2), the mobile phone generates a camera album cover 717 based on the cropped image. For parts not described, refer to the description related to fig. 7a, which is omitted here.
Fig. 7f is a schematic diagram illustrating exemplary generation of a thumbnail based on a picture-in-picture captured video. Referring to fig. 7f (1), the phone can select the image frame 713 of the video for creating the thumbnail. The selection method can be referred to above and is not described here. The cell phone crops the image frame 713 based on a crop box 718 whose ratio is the same as the preset 1:1 (length:height) ratio of the thumbnail. Referring to fig. 7f (2), the mobile phone generates a thumbnail 719, illustratively, based on the cropped image.
Obviously, in fig. 7c to 7f, because the thumbnails and album covers generated from dual-view and picture-in-picture images must be cropped to the preset ratios, for a dual-view image the upper and lower pictures may each be truncated, leaving only part of each, as in the all-photos album cover 709, the camera album cover 710, and the thumbnail 712. For a picture-in-picture image, the small picture (i.e., the image in the small window) may be partially or entirely lost, as in the all-photos album cover 715, the camera album cover 717, and the thumbnail 719. As a result, the generated thumbnail or cover loses information and its appearance suffers.
The embodiment of the present application provides a method of generating a thumbnail or cover image that can improve the appearance of thumbnails and album covers. Fig. 8a is a schematic flowchart illustrating exemplary selection of a preselected image. Illustratively, the preselected image is optionally the image used to generate the thumbnail or album cover. Referring to fig. 8a, the method specifically includes:
S101, start recording.
Illustratively, the mobile phone starts recording in response to a received user instruction, and the cameras of the mobile phone (the front camera and/or the rear camera) start to acquire images. Referring to the software structure diagram in fig. 3, the camera driver may obtain the images acquired by the cameras, and the camera application may process the images obtained by the camera driver. Exemplary processing by the camera application optionally includes, but is not limited to, rendering the image (e.g., adding filters) and displaying the acquired images in the display interface of the mobile phone. In addition, in the embodiment of the present application, the camera application is also used to select the preselected image, i.e., to execute S102 to S110.
For example, referring to fig. 9a, the camera application interface 901 illustratively includes one or more controls, such as but not limited to a portrait control, a photo control, a video control, and a more control 902. The user may click the more control 902. Referring to fig. 9b, the camera application displays a more settings box 903 in response to the received user operation. Illustratively, the more settings box 903 includes one or more options corresponding to shooting modes, including but not limited to: a slow motion option, a black-and-white art option, a time-lapse photography option, a panorama option, a dual-view recording option 904, a picture-in-picture recording option 905, and the like. It should be noted that the embodiment of the present application is described by taking a mobile phone supporting dual-view recording as an example. In other embodiments, if the electronic device supports recording more scenes, for example three or four scenes, the more settings box 903 may include corresponding options, such as a dual-view option, a triple-view option, and the like.
Still referring to fig. 9b, the user may click the corresponding option as required, so that the mobile phone shoots in the designated mode. For example, the user may click the dual-view recording option 904. The camera application starts the dual-view recording mode in response to the received user operation, for example, invoking the front camera and the rear camera to capture images simultaneously, and starts recording the dual-view video after the user clicks the record option 906. For another example, the user may click the picture-in-picture recording option 905, and the mobile phone starts the picture-in-picture recording mode in response to the received user operation, for example, invoking the front camera and the rear camera to capture images simultaneously, and starts recording the picture-in-picture video after the user clicks the record option 906.
S102, detect the recording mode.
For example, the camera application may detect the current recording mode. Optionally, in the embodiment of the present application, the recording modes include but are not limited to: the single-view mode, the multi-view mode, the picture-in-picture mode, and the like.
In one example, if the camera application detects that the current shooting mode is the single-view shooting mode, processing may be performed according to the prior art, which is not repeated in this application.
In another example, if the camera application detects that the current shooting mode is a multi-view shooting mode, for example the dual-view shooting mode, S103 is executed.
In yet another example, if the camera application detects that the current photographing mode is a picture-in-picture photographing mode, S110 is performed.
For example, in the embodiment of the present application, the camera application may determine the current recording mode based on the received user operation. For example, as described above in connection with fig. 9b, the user may select the dual-view shooting mode (i.e., click the dual-view recording option 904), and accordingly the camera application determines, in response to the received click operation, that the current shooting mode is the dual-view shooting mode. For another example, if the user selects the picture-in-picture mode (i.e., clicks the picture-in-picture recording option 905), the camera application determines, in response to the received click operation, that the current shooting mode is the picture-in-picture mode.
S103, detect the number of people in the pictures.
For example, the camera application may detect the plurality of pictures captured in the multi-view shooting mode. For example, in the dual-view shooting mode, the preview interface of the mobile phone displays two pictures: one optionally displays the image acquired by the front camera, and the other optionally displays the image acquired by the rear camera. The camera application may detect both pictures to determine the number of people in each picture. In the embodiment of the present application, a "person" may be the front face of a portrait, the side face of a portrait, the back view of a portrait, or the like, and the present application is not limited thereto. In other embodiments, only a frontal portrait may be counted as a "person", and other non-frontal portraits, as well as landscapes or backgrounds, may be classified as "non-person". The specific division may be set according to actual needs, and is not limited in this application.
In one example, if the camera application detects that one person is included in the plurality of pictures, a picture including the person is selected from the plurality of pictures as a preselected image, i.e., S104 is performed.
In another example, if the camera application detects that a plurality of persons are included in the plurality of pictures, S105 is performed.
In yet another example, if the camera application detects that no person is included in the plurality of pictures, S107 is performed.
In one possible implementation, if the people are concentrated in one picture, for example, one picture contains multiple people and another picture contains none (for example, a landscape), the picture containing the people may be selected as the preselected image.
S104, select the picture containing the person.
S105, detect whether the owner is included.
For example, after the camera application detects that the plurality of pictures include a plurality of people, it may further detect whether the owner is among them. Optionally, in the embodiment of the present application, a facial image of the owner may be stored in the mobile phone; the camera application may call the stored facial image and match the features of the detected people against the owner's facial features to identify whether the owner is included. The specific image recognition process may refer to existing image recognition technology and is not repeated here. Optionally, the facial image of the owner may be the one saved by the user when enrolling a face for face unlock, or may be a facial image of the owner entered through the camera application after the user starts it.
In one example, if the owner is included in the plurality of persons, the camera application selects a screen including the owner as the preselected image, i.e., S106 is performed.
In another example, if the owner is not included among the plurality of people, S107 is performed.
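The owner check in S105 can be sketched as follows. This is an illustrative sketch only: `match_score` is a placeholder for a real face-recognition comparison (the patent defers to existing image recognition technology), and the feature values and threshold are assumptions.

```python
# Hedged sketch of S105: check whether the device owner appears among the
# detected faces. Face "features" are simplified to single floats here;
# a real implementation would compare embedding vectors.

def contains_owner(face_features, owner_feature, threshold=0.8):
    """Return True if any detected face matches the stored owner face."""
    def match_score(a, b):
        # Placeholder similarity: 1 minus clamped absolute difference.
        return 1.0 - min(1.0, abs(a - b))
    return any(match_score(f, owner_feature) >= threshold for f in face_features)
```

If `contains_owner` returns True, the flow proceeds to S106 (select the owner's picture); otherwise it falls through to the lens-parameter comparison of S107.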
S106, select the picture containing the owner.
S107, detect whether the parameters of the different lenses are the same.
In an exemplary multi-view shooting scenario, the mobile phone invokes two or more cameras to shoot and displays the pictures captured by the different cameras. As noted above, the parameters of the mobile phone's cameras are optionally different; for example, the mobile phone may have a 16-megapixel wide-angle camera, a 50-megapixel ultra-sensitive camera, an 8-megapixel telephoto camera, and an 8-megapixel front camera. Since different pictures correspond to different cameras, the camera application may detect whether the camera parameters corresponding to each picture are the same. In the embodiment of the present application, the camera parameters corresponding to a picture may also be understood as the shooting parameters corresponding to that picture.
In one example, if the parameters of the lenses corresponding to the pictures are different, the camera application may select the picture of a designated lens as the preselected image, i.e., execute S108. For example, in the dual-view shooting mode, the mobile phone invokes the 8-megapixel front camera and the 16-megapixel wide-angle camera. When the camera application detects that the camera parameters corresponding to the two pictures are different, it may select the designated camera, for example, the picture corresponding to the front camera, as the preselected image. Optionally, the camera application may preset a lens-selection priority, for example: front camera > 50-megapixel ultra-sensitive camera > 16-megapixel wide-angle camera > 8-megapixel telephoto camera, and select the picture of the corresponding lens as the preselected image based on this priority. The above selection is only an illustrative example, and the present application is not limited thereto.
In another example, if the parameters of the lenses corresponding to the pictures are the same, the camera application may select a picture as the preselected image according to a preset rule, i.e., execute S109. For example, in the dual-view shooting mode, the mobile phone invokes an 8-megapixel front camera and an 8-megapixel rear camera, and the camera application detects that the camera parameters corresponding to the two pictures are the same. The camera application may then select a picture as the preselected image according to a preset rule. Optionally, the preset rule may designate a picture, for example, in the dual-view shooting mode the designated picture (e.g., the lower picture) is selected as the preselected image. Alternatively, the preset rule may be to select the picture with the higher brightness as the preselected image. The above selection is merely an illustrative example, and the present application is not limited thereto.
That is, the picture corresponding to the designated lens is selected as the preselected image; for example, the picture captured by the front camera may be selected. This may be set according to actual requirements and is not limited in this application.
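The designated-lens selection of S107/S108 can be sketched as a priority lookup. A minimal sketch, assuming the priority order from the example above (front > 50 MP ultra-sensitive > 16 MP wide-angle > 8 MP telephoto); the lens names and data shapes are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical S107/S108 sketch: when lens parameters differ, pick the
# picture of the highest-priority lens. The priority list is configurable.

LENS_PRIORITY = ["front", "ultra_sensitive", "wide_angle", "telephoto"]

def pick_by_lens_priority(pictures):
    """pictures: list of (lens_name, picture) pairs.
    Return the picture captured by the highest-priority lens."""
    return min(pictures, key=lambda item: LENS_PRIORITY.index(item[0]))[1]
```
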
S108, select the picture of the designated lens.
S109, select a picture according to the preset rule.
S110, select the large picture.
For example, in the picture-in-picture shooting mode, the camera application may select the large picture in the picture-in-picture as the preselected image. Of course, in other embodiments, the small picture in the picture-in-picture may be selected as the preselected image instead, and the application is not limited thereto.
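The whole selection flow of fig. 8a (S102 to S110) can be condensed into one function. This is a sketch under stated assumptions: the picture records (person counts, owner flag, lens parameters, brightness, large/small flag) are invented stand-ins for whatever data the camera application actually tracks, and the preset rule of S109 is taken to be "pick the brightest picture" per the example above.

```python
# Illustrative sketch of the Fig. 8a preselected-image selection flow.
# mode: "single", "multi", or "pip"; pictures: list of dicts with keys
# "persons", "has_owner", "lens_params", "lens", "brightness", "is_large".

def select_preselected_picture(mode, pictures):
    if mode == "pip":
        return next(p for p in pictures if p["is_large"])        # S110
    if mode == "single":
        return pictures[0]                                       # prior-art path

    # S103: detect the number of people across the pictures
    with_persons = [p for p in pictures if p["persons"] > 0]
    total = sum(p["persons"] for p in pictures)
    if total == 1 or len(with_persons) == 1:
        return with_persons[0]                                   # S104
    if total > 1:
        owners = [p for p in with_persons if p["has_owner"]]     # S105
        if owners:
            return owners[0]                                     # S106

    # S107: compare lens parameters of the pictures
    if len({p["lens_params"] for p in pictures}) > 1:
        front = [p for p in pictures if p["lens"] == "front"]    # S108
        if front:
            return front[0]
    return max(pictures, key=lambda p: p["brightness"])          # S109
```

The branch order mirrors the flowchart: people take precedence over lens parameters, and the owner takes precedence among people.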
As described above, when selecting an image frame for a thumbnail or album cover, the first image frame of the video may be selected, or the image frame with the best picture quality among the first 20 image frames may be selected. In conjunction with the flow shown in fig. 8a, if the image frame serving as the preselected image is one of the first 20 image frames, in one example the camera application may first determine the target picture based on the first frame of the video. For example, taking a picture-in-picture scene as an example, the camera application may perform S102 and S110 based on the first image frame. After determining that the large picture should be selected as the preselected image, the camera application may select the image frame with the best picture quality from the first 20 image frames and use its large picture as the preselected image. In another example, the camera application may first select the image frame with the best picture quality from the first 20 image frames and then perform the picture selection flow of fig. 8a on that frame. The order of selecting the best-quality frame and selecting the picture is not limited.
For example, after obtaining the preselected image, the camera application may output the selected image frame (which may be the first image frame, or the image frame with the best picture quality among the first 20 image frames) and the preselected image to the gallery application. The gallery application may then perform the album cover or thumbnail generation flow based on the selection results (including the image frame and the preselected image) input by the camera application. The embodiment of the present application describes only the gallery application generating the thumbnail and album cover. In other embodiments, the gallery application may also be used for storing videos or images to designated areas, and the like, which is not repeated in this application.
Fig. 8b is a schematic flowchart of exemplary thumbnail generation. Note that fig. 8b illustrates only the flow of generating a thumbnail; the mobile phone may also generate an album cover based on the flow in fig. 8b. Referring to fig. 8b, the method specifically includes:
S201, start thumbnail generation.
For example, after acquiring the image frame and the preselected image selected by the camera application, the gallery application starts the flow of generating the cover and/or the thumbnail, i.e., executes S202 to S207.
S202, detect whether the sizes of the image frame and the thumbnail are similar.
Illustratively, as described above, the album cover or thumbnail generated by the gallery application has a preset ratio; for example, the preset ratio (length:height) of the thumbnail is 1:1, that of the all-photos album cover is 3:4, and that of the camera album cover is 4:3. The embodiment of the present application takes only these three preset ratios as examples; in other embodiments, the preset ratios of the thumbnail and the album covers may be set according to actual requirements, and the present application is not limited.
For example, the description below takes the preset ratio of the thumbnail, 1:1, as an example. In order to distinguish the image frame selected by the camera application from other image frames, the selected image frame is referred to as the target image frame in the following embodiments.
For example, the gallery application may compare the target image frame with the thumbnail to determine whether they are similar in size. The specific comparison manner is illustrated in the following examples.
In one example, if the gallery application detects that the target image frame is similar in size to the thumbnail, S207 is performed.
In another example, if the gallery application detects that the target image frame and the thumbnail are not similar in size, S203 is performed.
S203, detect the shooting mode.
For example, the gallery application may detect a current shooting mode. S102 may be referred to specifically, and details thereof are not repeated herein.
In one example, if the gallery application detects that the shooting mode is the multi-view mode, S204 is performed.
In another example, if the gallery application detects that the shooting mode is the picture-in-picture mode, S205 is performed.
S204, use the preselected image as the thumbnail and display the multi-view identifier.
For example, when the gallery application detects that the target image frame is not similar in size to the thumbnail and the current shooting mode is the multi-view shooting mode, the gallery application may generate the thumbnail based on the preselected image, and the generated thumbnail includes the multi-view identifier. Illustratively, the multi-view identifier identifies the video corresponding to the thumbnail as a video shot in a multi-view shooting mode.
S205, detect whether the small picture would be cropped.
For example, as shown in fig. 4b, a video shot in the picture-in-picture shooting mode may include a large window and a small window. In the embodiment of the present application, the picture displayed in the large window is referred to as the large picture or large image, and the picture displayed in the small window is referred to as the small picture, small image, or picture-in-picture image. When the gallery application detects that the target image frame is not similar in size to the thumbnail and the shooting mode is the picture-in-picture mode, the gallery application may detect, based on the preset ratio of the thumbnail, whether the small picture would be cropped. Here, "the small picture is cropped" optionally means that when a thumbnail or album cover is generated based on the image frame, part of the small picture is cut off, so that the small picture is incomplete when displayed in the thumbnail or album cover, as shown for example in fig. 7e or 7f.
In one example, if the gallery application detects that the small picture will not be cropped, S207 is performed.
In another example, if the gallery application detects that the small picture would be cropped, S206 is performed.
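The S205 check reduces to a rectangle-containment test: the small window survives only if it lies entirely inside the crop box implied by the thumbnail's preset ratio. A minimal sketch, assuming rectangles are given as `(x, y, w, h)` in frame coordinates (an assumption; the patent does not specify the geometry representation):

```python
# Hypothetical S205 sketch: would the picture-in-picture small window be
# (partly) cut off by the crop rectangle?

def small_picture_cropped(small_rect, crop_rect):
    """Return True if small_rect is not fully contained in crop_rect.
    Rectangles are (x, y, w, h) tuples."""
    sx, sy, sw, sh = small_rect
    cx, cy, cw, ch = crop_rect
    return not (sx >= cx and sy >= cy and
                sx + sw <= cx + cw and sy + sh <= cy + ch)
```
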
S206, use the preselected image as the thumbnail and display the picture-in-picture identifier.
For example, when the gallery application detects that the target image frame is not similar in size to the thumbnail, the shooting mode is the picture-in-picture shooting mode, and the small picture would be cropped, the gallery application may generate the thumbnail based on the preselected image and display the picture-in-picture identifier on the generated thumbnail. Illustratively, the picture-in-picture identifier identifies the video corresponding to the thumbnail as a video shot in the picture-in-picture shooting mode.
S207, use the target image frame as the thumbnail.
For example, if the gallery application detects that the target image frame is similar in size to the thumbnail, or detects that the small picture will not be cropped, the gallery application may generate the thumbnail based on the target image frame. Illustratively, if the user clicks the thumbnail, the mobile phone displays the video corresponding to the thumbnail in response to the received user operation; for example, the displayed interface may be a preview interface of the video, or the video may be played directly.
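The decision flow of fig. 8b (S202 to S207) can be condensed as follows. This is an illustrative sketch: `similar_size` and `small_cropped` stand in for the S202 and S205 checks described above, and the returned badge names are invented placeholders for the multi-view and picture-in-picture identifiers.

```python
# Condensed sketch of the Fig. 8b flow. Returns which image to generate
# the thumbnail from, plus which identifier (if any) to overlay.

def generate_thumbnail_source(target_frame, preselected, mode,
                              similar_size, small_cropped):
    """mode: "multi" or "pip".
    similar_size: result of the S202 size comparison.
    small_cropped: result of the S205 check (picture-in-picture only)."""
    if similar_size:                      # S207: use the whole target frame
        return target_frame, None
    if mode == "multi":                   # S204: preselected + multi-view badge
        return preselected, "multi_view"
    if mode == "pip":
        if small_cropped:                 # S206: preselected + PiP badge
            return preselected, "picture_in_picture"
        return target_frame, None         # S207
    return target_frame, None
```
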
In the embodiment of the present application, a gallery application and a camera application are described as an example. In other embodiments, the flows in fig. 8a and 8b may also be executed by one or more other modules of the application layer, which is not limited in this application.
The flow in fig. 8a and 8b is described in detail below with several embodiments. Fig. 10 is a schematic diagram of an exemplary dual-view shooting scene. Referring to fig. 10, a preview interface 1001 illustratively includes, but is not limited to, a capture control 1002, a display window 1003, and a display window 1004. In this embodiment, an example will be described in which the display window 1003 displays an image captured by a rear camera and the display window 1004 displays an image captured by a front camera. It should be noted that the cameras called in each scene in the embodiment of the present application are only schematic examples, and the present application is not limited thereto.
Continuing with fig. 10, the user can click on a capture control 1002, as an example. And the mobile phone starts recording in response to the received user operation.
Illustratively, as described in S102 of fig. 8a, after the mobile phone starts recording, the camera application detects the shooting mode. For example, the camera application detects that the current shooting mode is the dual-view shooting mode. The camera application may then select the image frame 1101 with the best picture quality from the first 20 image frames of the video as the target image frame described above. Next, the camera application may detect the number of people in the image frame 1101, i.e., perform S103 in fig. 8a.
Referring to fig. 11, the image frame 1101 illustratively includes an image 1102 and an image 1103. The image 1102 is acquired by the rear camera, and the image 1103 is acquired by the front camera. The mobile phone detects the image frame 1101 and determines that it includes two people, namely the person in the image 1102 and the person in the image 1103. As described in S103 of fig. 8a, after the camera application detects that two portraits are included in the image frame 1101, it may further detect whether the owner is among them.
Continuing to refer to fig. 11, an example is shown in which the person in the image 1103 is the owner. After the camera application determines that the person in the image 1103 is the owner, the camera application selects an image including the owner as the preselected image, i.e., selects the image 1103 as the preselected image 1104.
Illustratively, the camera application may output the selected results, i.e., the image frame 1101 and the preselected image 1104, to the gallery application. The gallery application may generate a thumbnail or album cover based on the image frame 1101 or the preselected image 1104 in accordance with the flow shown in FIG. 8 b.
As described above, the thumbnail and the album covers each have a preset ratio, and fig. 12 is a schematic diagram exemplarily showing the preset ratios of the thumbnail and album covers. Referring to fig. 12, exemplary preset ratios (length:height) of the thumbnail or album covers may include, but are not limited to, 1:1, 3:4, and 4:3. The embodiment of the present application takes as an example that the preset ratio of the thumbnail is 1:1 and the preset ratios of the album covers are 3:4 and 4:3.
Illustratively, the gallery application performs S202 in fig. 8b, i.e., determines whether the image frame 1101 is similar in size to the thumbnail or album cover. For example, fig. 13a is a schematic diagram exemplarily illustrating the detection of whether an image frame is similar in size to a thumbnail. Referring to fig. 13a, the gallery application compares the crop box 1301 corresponding to the thumbnail with the image frame 1101. Illustratively, the crop box 1301 has the same preset ratio as the thumbnail, namely 1:1. In the embodiment of the present application, the gallery application aligns the geometric center of the crop box 1301 with the geometric center of the image frame 1101, and scales the crop box 1301 up or down while keeping its preset ratio (i.e., 1:1). For example, as shown in fig. 13a, the left and right borders of the crop box 1301 overlap the left and right borders of the image frame 1101, while the top and bottom borders of the crop box 1301 lie within the image frame 1101. It should be noted that the detection manner shown in fig. 13a is only an illustrative example. In other embodiments, the geometric center of the crop box 1301 need not be aligned with the center of the image frame 1101; for example, the gallery application may align the upper border of the crop box 1301 with the upper border of the image frame 1101 and scale the crop box 1301 up or down so that its four borders do not exceed the four borders of the image frame 1101. The manner of comparing the image frame with the crop box in the embodiment of the present application is only an illustrative example, and the present application is not limited thereto. The same applies hereinafter and is not repeated.
The gallery application may determine whether the image frame is similar in size to the thumbnail based on the overlapping area of the crop box 1301 and the image frame 1101. Illustratively, in the embodiment of the present application, the gallery application may set a similarity threshold, for example, the similarity threshold may be set to 80%. For example, if the ratio of the overlapping area of the crop box 1301 and the image frame 1101 to the area of the image frame 1101 is greater than or equal to 80%, it may be determined that the image frame 1101 and the thumbnail are similar in size. If the ratio of the overlapping area of the crop frame 1301 and the image frame 1101 to the area of the image frame 1101 is less than 80%, it may be determined that the image frame 1101 and the thumbnail are not similar in size.
Referring to fig. 13a, for example, in the embodiment of the present application, a ratio of an overlapping area of the image frame 1101 and the crop box 1301 to an area of the image frame 1101 is less than 80%, that is, sizes of the image frame 1101 and the thumbnail are not similar to each other.
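The size comparison of fig. 13a can be sketched numerically. A minimal sketch, assuming the crop box is centered and scaled to just fit inside the frame (so the overlap region is the crop box itself), with the 80% threshold from the example above; frame dimensions are illustrative:

```python
# Sketch of the S202 size check: scale a crop box of the thumbnail's aspect
# ratio to the largest size that fits inside the frame with centers aligned,
# then compare the overlap (the crop-box area) against the frame area.

def sizes_similar(frame_w, frame_h, ratio_w, ratio_h, threshold=0.8):
    """True if the largest centered ratio_w:ratio_h crop box covers at
    least `threshold` of the frame area."""
    # Largest crop box of the given aspect ratio that fits in the frame.
    scale = min(frame_w / ratio_w, frame_h / ratio_h)
    crop_w, crop_h = ratio_w * scale, ratio_h * scale
    # Centered inside the frame, the overlap equals the crop-box area.
    return (crop_w * crop_h) / (frame_w * frame_h) >= threshold
```

For a tall dual-view frame (e.g., 1080x2340) compared against the 1:1 thumbnail ratio, the covered fraction is well below 80%, matching the "not similar" outcome described for the image frame 1101.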
For example, after the gallery application determines that the image frame 1101 is not similar in size to the thumbnail, S203 in fig. 8b may be executed. For example, the gallery application may detect that the current shooting scene is in the dual-view shooting mode, and may therefore generate the thumbnail based on the preselected image 1104. Fig. 13b is a schematic diagram of exemplary thumbnail generation. Referring to fig. 13b, the gallery application may crop the preselected image 1104 based on the thumbnail's corresponding crop box 1301. For example, the gallery application may align the geometric center of the crop box 1301 with the geometric center of the preselected image 1104 and scale the crop box 1301 up or down so that its four borders do not exceed the four borders of the preselected image 1104.
Continuing with fig. 13b, the gallery application crops the preselected image 1104 based on the crop box 1301 to obtain the thumbnail 1302. The gallery application displays a multi-view identifier 1303 on the thumbnail 1302, indicating that the video corresponding to the thumbnail 1302 was shot in a multi-view shooting mode. It should be noted that the pattern, size, and position of the multi-view identifier 1303 in the embodiment of the present application are only schematic examples. In other embodiments, they may be set according to actual requirements, and the present application is not limited.
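The centered cropping used in figs. 13b, 14b, and 15b can be sketched as computing the largest crop rectangle of the target aspect ratio whose center coincides with the image center. This is a geometry sketch only; the coordinate convention `(x, y, w, h)` is an assumption:

```python
# Sketch of the crop-box fitting step: the largest crop box of the target
# aspect ratio, geometric centers aligned, with no border exceeding the
# image. Returns the crop rectangle as (x, y, w, h).

def centered_crop(img_w, img_h, ratio_w, ratio_h):
    scale = min(img_w / ratio_w, img_h / ratio_h)
    crop_w, crop_h = ratio_w * scale, ratio_h * scale
    return ((img_w - crop_w) / 2, (img_h - crop_h) / 2, crop_w, crop_h)
```

When the image is taller than the target ratio, the crop box spans the full width and the top and bottom are trimmed; when it is wider, the crop box spans the full height, matching the border behavior described for the crop boxes 1301, 1401, and 1501.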
Continuing with fig. 13b, for example, the mobile phone launches the gallery application in response to a received user operation. The thumbnail 1305 may be displayed in the gallery interface 1304, with the multi-view identifier displayed on it. Optionally, the thumbnail 1305 may be obtained by the gallery application scaling the thumbnail 1302 down by a predetermined ratio, for example 50% (which may be set according to actual requirements and is not limited in this application).
Fig. 14a to 14b are schematic diagrams of exemplary album cover generation. Referring to fig. 14a, the gallery application may compare the crop box 1401 corresponding to the album cover with the image frame 1101 to determine whether the image frame is similar in size to the album cover. Optionally, the ratio (length:height) of the crop box 1401 is 3:4. The specific details are described with reference to fig. 13a and are not repeated here.
Illustratively, the image frame 1101 is not similar to the size of the album cover. Then the gallery application generates an album cover based on the preselected image 1104.
Referring to fig. 14b, the gallery application may crop the preselected image 1104 based on the crop box 1401. Illustratively, the gallery application aligns the geometric center of the crop box 1401 with the geometric center of the preselected image 1104 and scales the crop box 1401 up or down so that its four borders do not exceed the four borders of the preselected image 1104. For example, as shown in fig. 14b, the top and bottom borders of the crop box 1401 overlap the top and bottom borders of the preselected image 1104, while the left and right borders of the crop box 1401 lie within the preselected image 1104. The gallery application obtains the cropped album cover 1402 and displays a multi-view identifier 1403 on it.
Continuing with fig. 14b, for example, the mobile phone launches the gallery application in response to a received user operation. The album cover 1402 may be displayed in the album interface 1404, with the multi-view identifier displayed on it. Optionally, the album cover 1402 may be scaled down by a predetermined ratio, for example 50% (which may be set according to actual requirements and is not limited in this application), and then displayed at the designated position of the album interface 1404.
Fig. 15a to 15b are schematic diagrams of exemplary album cover generation. Referring to fig. 15a, the gallery application may compare the crop box 1501 corresponding to the album cover with the image frame 1101 to determine whether the image frame is similar in size to the album cover. Optionally, the ratio (length:height) of the crop box 1501 is 4:3. The specific details are described with reference to fig. 13a and are not repeated here.
Illustratively, the image frame 1101 is not similar to the size of the album cover. Then the gallery application generates an album cover based on the preselected image 1104.
Referring to fig. 15b, the gallery application may crop the preselected image 1104 based on the crop box 1501. Illustratively, the gallery application aligns the geometric center of the crop box 1501 with the geometric center of the preselected image 1104 and scales the crop box 1501 up or down so that its four borders do not exceed the four borders of the preselected image 1104. For example, as shown in fig. 15b, the left and right borders of the crop box 1501 overlap the left and right borders of the preselected image 1104, while the top and bottom borders of the crop box 1501 lie within the preselected image 1104. The gallery application obtains the cropped album cover 1502 and displays a multi-view identifier 1503 on it.
Continuing with fig. 15b, for example, if the handset is responding to the received user action, the gallery application is launched. An album cover 1502 may be displayed in the album interface 1504 and a multi-view logo may be displayed on the album cover 1502. Alternatively, the album cover 1502 may be scaled down by a predetermined ratio, for example, 50% (which may be set according to actual requirements, but is not limited in this application), and then displayed at a designated position of the album interface 1504.
In one possible implementation, as described above, the thumbnail of each image or video displayed on the gallery interface may be displayed in the original aspect ratio of that image or video. As shown in FIG. 16, the mobile phone displays a gallery interface 1601 in response to a received user operation. A thumbnail 1602 may be displayed on the gallery interface 1601, where the thumbnail 1602 is generated by the gallery application based on the image frame 1101. For example, the gallery application reduces the image frame 1101 by a preset ratio (e.g., 50%) and displays the reduced image frame at a designated position on the gallery interface 1601.
FIG. 17 is a schematic diagram of an exemplary dual-view shooting scene. Referring to FIG. 17, a preview interface 1701 illustratively includes, but is not limited to, a shooting control 1702, a display window 1703, and a display window 1704. In this embodiment, an example is described in which the display window 1703 shows the image captured by the rear camera and the display window 1704 shows the image captured by the front camera. It should be noted that the cameras invoked in each scene in the embodiments of this application are only illustrative examples, and this application is not limited thereto.
Continuing with FIG. 17, the user may click the shooting control 1702, and the mobile phone starts recording in response to the received user operation.
Referring to FIG. 18, after the mobile phone starts recording, the camera application detects the shooting mode, as described in S102 of FIG. 8a. For example, the camera application detects that the current shooting mode is the dual-view shooting mode. After determining that the shooting mode is the dual-view shooting mode, the camera application may select the image frame 1801 with the best picture quality from the first 20 image frames of the video as the target image frame described above, and may then detect the number of people in the image frame 1801, that is, perform S103 in FIG. 8a.
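The patent does not name the picture-quality metric used to rank the first 20 frames; a common stand-in for "best picture quality" is a sharpness score such as gradient energy. A minimal sketch under that assumption (frames modeled as grayscale pixel grids; names are ours):

```python
def sharpness(frame):
    """Toy picture-quality score: mean squared horizontal/vertical
    gradient of a grayscale frame (a list of rows of pixel values).
    The patent does not specify its metric; this is a stand-in."""
    h, w = len(frame), len(frame[0])
    total = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = frame[y][x + 1] - frame[y][x]  # horizontal gradient
            gy = frame[y + 1][x] - frame[y][x]  # vertical gradient
            total += gx * gx + gy * gy
    return total / ((h - 1) * (w - 1))

def pick_target_frame(frames, window=20):
    """Select the best-scoring frame among the first `window` frames,
    as in the target-image-frame selection of FIGS. 18/20/22."""
    return max(frames[:window], key=sharpness)
```

In a real implementation the score would likely be computed on downscaled frames, or by a dedicated quality model, rather than a pure-Python loop.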
Continuing with FIG. 18, the image frame 1801 illustratively includes an image 1802 and an image 1803. The image 1802 is captured by the rear camera, and the image 1803 is captured by the front camera. The mobile phone detects the image frame 1801 and determines that a person is included in it, namely in the image 1803. After the camera application detects that the image frame 1801 includes an image of a person, as described in S103 of FIG. 8a, the camera application may select the image containing the person, that is, the image 1803, as the preselected image 1804.
Illustratively, the camera application may output the selection results, that is, the preselected image 1804 and the target image frame 1801, to the gallery application. The gallery application may perform the flow in FIG. 8b based on the preselected image 1804 and the target image frame 1801. For details, refer to the descriptions of FIGS. 10 to 15b, which are not repeated here.
FIG. 19 is a schematic view of another exemplary dual-view shooting scene. Referring to FIG. 19, the preview interface 1901 illustratively includes, but is not limited to, a shooting control 1902, a display window 1903, and a display window 1904. In this embodiment, an example is described in which both the display window 1903 and the display window 1904 show images captured by rear cameras: the display window 1903 shows the image captured by the wide-angle camera, and the display window 1904 shows the image captured at 2x zoom. It should be noted that the cameras invoked in each scene in the embodiments of this application are only illustrative examples, and this application is not limited thereto.
Continuing with FIG. 19, the user may click the shooting control 1902, and the mobile phone starts recording in response to the received user operation.
Referring to FIG. 20, after the mobile phone starts recording, the camera application may select the image frame 2001 with the best picture quality from the first 20 image frames of the video as the target image frame described above. As described in S102 in FIG. 8a, the camera application detects the shooting mode; for example, it detects that the current shooting mode is the dual-view shooting mode. Next, after determining that the shooting mode is the dual-view shooting mode, the camera application may detect the number of people in the image frame 2001, that is, perform S103 in FIG. 8a.
Continuing with FIG. 20, the image frame 2001 illustratively includes an image 2002 and an image 2003. Illustratively, the image 2002 is captured at 2x zoom by an 8-megapixel rear camera, and the image 2003 is captured by a 16-megapixel wide-angle camera. The mobile phone detects the image frame 2001 and determines that two people are included, namely the person in the image 2002 and the person in the image 2003. After the camera application detects the two person images in the image frame 2001, as described in S103 of FIG. 8a, it further detects whether the owner is included. Illustratively, the camera application detects that neither of the two people is the owner. The camera application then executes S107 in FIG. 8a, that is, it detects whether the parameters of the different lenses are the same. In this embodiment, the camera application detects that the lens parameters differ (one is an 8-megapixel rear camera and the other a 16-megapixel wide-angle camera). Accordingly, the camera application performs S108 in FIG. 8a, that is, it selects the picture of the designated lens as the preselected image. For example, in this embodiment, the camera application may select the image captured by the wide-angle camera, that is, it selects the image 2003 as the preselected image 2004.
Illustratively, the camera application may output the selection results, that is, the preselected image 2004 and the target image frame 2001, to the gallery application. The gallery application may perform the flow in FIG. 8b based on the preselected image 2004 and the target image frame 2001. For details, refer to the descriptions of FIGS. 10 to 15b, which are not repeated here.
FIG. 21 is a schematic view of another exemplary dual-view shooting scene. Referring to FIG. 21, the preview interface 2101 illustratively includes, but is not limited to, a shooting control 2102, a display window 2103, and a display window 2104. In this embodiment, an example is described in which both the display window 2103 and the display window 2104 show images captured by the rear camera. Illustratively, the images displayed in the display windows 2103 and 2104 are captured at 1x zoom by an 8-megapixel rear camera. It should be noted that the cameras invoked in each scene in the embodiments of this application are only illustrative examples, and this application is not limited thereto.
Continuing with FIG. 21, the user may click the shooting control 2102, and the mobile phone starts recording in response to the received user operation.
Illustratively, as described in S102 of FIG. 8a, after the mobile phone starts recording, the camera application detects the shooting mode. For example, the camera application detects that the current shooting mode is the dual-view shooting mode. Next, referring to FIG. 22, after determining that the shooting mode is the dual-view shooting mode, the camera application may select the image frame 2201 with the best picture quality from the first 20 image frames of the video as the target image frame. The camera application detects the number of people in the image frame 2201, that is, performs S103 in FIG. 8a.
Continuing with FIG. 22, the image frame 2201 illustratively includes an image 2202 and an image 2203. The camera application detects the image frame 2201 and determines that no person is included in it. After the camera application detects that the image frame 2201 includes no image of a person, as described in S103 in FIG. 8a, it executes S107 in FIG. 8a, that is, it detects whether the parameters of the different lenses are the same. In this embodiment, the camera application detects that the lens parameters are the same, that is, both pictures are captured by the same 8-megapixel rear camera. Accordingly, the camera application performs S109 in FIG. 8a, that is, it selects the preselected image according to a preset rule. In this embodiment, the preset rule of selecting the lower picture is taken as an example: the camera application selects the lower image, the image 2203, as the preselected image 2204.
Illustratively, the camera application may output the selection results, that is, the preselected image 2204 and the target image frame 2201, to the gallery application. The gallery application may perform the flow in FIG. 8b based on the preselected image 2204 and the target image frame 2201. For details, refer to the descriptions of FIGS. 10 to 15b, which are not repeated here.
FIG. 23a is a schematic diagram of an exemplary multi-view shooting scene. Referring to FIG. 23a, the preview interface 2301 includes, but is not limited to, a display window 2302, a display window 2303, a display window 2304, and a shooting control 2305. In this embodiment, the display window 2302 displays the image captured by a 16-megapixel wide-angle camera, the display window 2303 displays the image captured by the front camera, and the display window 2304 displays the image captured at 2x zoom by an 8-megapixel rear camera. It should be noted that the cameras invoked in each scene in the embodiments of this application are only illustrative examples, and this application is not limited thereto.
Continuing with FIG. 23a, the user may click the shooting control 2305, and the mobile phone starts recording in response to the received user operation.
Illustratively, after the mobile phone starts recording, the camera application detects the shooting mode, as described in S102 in FIG. 8a. For example, the camera application detects that the current shooting mode is the multi-view shooting mode. Referring to FIG. 23b, after determining the shooting mode, the camera application may select the image frame 2306 with the best picture quality from the first 20 image frames of the video as the target image frame described above, and may then detect the number of people in the image frame 2306, that is, perform S103 in FIG. 8a.
Continuing with FIG. 23b, the image frame 2306 illustratively includes an image 2307, an image 2308, and an image 2309. The camera application detects the image frame 2306 and determines that three people are included, namely the person in each of the images 2307, 2308, and 2309. After detecting the three person images in the image frame 2306, the camera application further detects whether the owner is included, as described in S103 in FIG. 8a. Illustratively, the camera application detects that the person in the image 2308 is the owner, so it performs S106 in FIG. 8a, that is, it selects the image 2308 containing the owner as the preselected image 2310.
Referring to FIG. 24a, the gallery application illustratively compares the crop box 2401 corresponding to the thumbnail with the image frame 2306. Illustratively, the crop box 2401 has the same preset aspect ratio as the thumbnail, namely 1:1. In this embodiment, the gallery application aligns the geometric center of the crop box 2401 with that of the image frame 2306 and scales the crop box 2401 proportionally (keeping the preset 1:1 ratio) until its borders do not exceed those of the image frame 2306. For example, as shown in FIG. 24a, the four borders of the crop box 2401 overlap the four borders of the image frame 2306.
With continued reference to FIG. 24a, in this embodiment, the ratio of the overlapping area of the image frame 2306 and the crop box 2401 to the area of the image frame 2306 is 100%, which exceeds the similarity threshold (80%); that is, the image frame 2306 is similar in size to the thumbnail.
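The similarity test of S202 (FIGS. 24a and 26a) reduces to fitting the crop box centered inside the target image frame and comparing the overlap area with the frame area against the 80% threshold. A minimal sketch under the embodiment's assumptions (the 80% threshold is from the text; the function names are ours):

```python
def size_similar(img_w, img_h, ratio_w, ratio_h, threshold=0.8):
    """S202 sketch: is the image frame similar in size to a crop box of
    aspect ratio ratio_w:ratio_h?  The box is scaled, centered, to fit
    inside the frame, so the overlap area equals the box area."""
    scale = min(img_w / ratio_w, img_h / ratio_h)
    crop_area = (ratio_w * scale) * (ratio_h * scale)
    overlap = crop_area / (img_w * img_h)
    return overlap >= threshold

# FIG. 24a: square frame vs. square 1:1 thumbnail box -> 100% overlap.
assert size_similar(1000, 1000, 1, 1)
# FIG. 26a-style wide frame vs. 1:1 box -> overlap well under 80%.
assert not size_similar(1920, 1080, 1, 1)
```

When the test fails, the flow falls through to S203 and, for picture-in-picture captures, to the small-image check of S205.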
Illustratively, after the gallery application determines that the image frame 2306 is similar in size to the thumbnail, S207 in FIG. 8b may be performed, that is, a thumbnail is generated based on the target image frame. Referring to FIG. 24b, the mobile phone may launch the gallery application in response to a received user operation, and a thumbnail 2403 may be displayed in the gallery interface 2402. Illustratively, the thumbnail 2403 is generated by the gallery application based on the image frame 2306; for example, the gallery application scales down the image frame 2306 by a preset ratio, for example 50%, and displays the result at a designated position of the gallery interface 2402.
It should be noted that, for the image captured in the multi-view shooting mode shown in FIG. 23a, the step of generating an album cover with the preset ratio of 3 is similar to the process described above and is not repeated here.
FIG. 25a is a diagram of an exemplary picture-in-picture shooting scene. Referring to FIG. 25a, the preview interface 2501 illustratively includes, but is not limited to, a large display window 2502 (which may also be called a large window), a small display window 2503 (which may also be called a small window, a picture-in-picture window, or a floating window), and a shooting control 2504. It should be noted that the mobile phone may change the position and size of the small display window 2503 in response to a received user operation (e.g., dragging the small display window 2503). Optionally, the small display window 2503 floats over the large display window 2502; it can be understood that the small display window 2503 lies within the large display window 2502, and the size of the small display window 2503 is smaller than or equal to the size of the large display window 2502. Accordingly, in the generated image frame, the small picture (or small image) from the small display window 2503 lies within the large picture (or large image) from the large display window 2502. Optionally, the maximum adjustable size of the small display window 2503 is smaller than the size of the large display window 2502. It should be noted that this embodiment is described using an example in which the picture-in-picture mode includes one small display window. In other embodiments, the picture-in-picture mode may include two or more small display windows, all of which float over the large display window during shooting; accordingly, multiple small images are generated on top of the large image in the image frame.
Illustratively, the large display window 2502 displays images captured by a rear camera, and the small display window 2503 displays images captured by a front camera. It should be noted that the cameras called in each scene in the embodiment of the present application are only illustrative examples, and the present application is not limited thereto.
Continuing with fig. 25a, for example, the user can click on a capture control 2504. And the mobile phone starts recording in response to the received user operation.
After the mobile phone starts recording, as described in S102 in fig. 8a, the camera application detects the shooting mode. For example, the camera application detects that the current photography mode is a picture-in-picture photography mode. Referring to fig. 25b, after the camera application determines that the shooting mode is the picture-in-picture shooting mode, the camera application can select an image frame 2505 with the best picture quality from the first 20 image frames of the video as the target image frame. Illustratively, the image frame 2505 includes an image 2506 and an image 2507. The image 2506 may be referred to as a large image, i.e., an image in the large display window 2502. The image 2507 may be referred to as a small image or a picture-in-picture image, i.e., an image in the small display window 2503. Accordingly, the camera application performs S110 in fig. 8a, i.e., the camera application selects the large image (image 2506) as the preset image 2508.
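The preselected-image selection walked through in FIGS. 18 to 25b (steps S103 to S110 of FIG. 8a) can be sketched as one decision function. The data layout, field names, and mode strings below are our own illustrative assumptions, not the patent's:

```python
def choose_preselected(mode, images):
    """Sketch of S103-S110: pick the preselected image from the pictures
    making up a target image frame.  Each element of `images` is a dict
    such as {"picture": ..., "persons": 1, "is_owner": False,
    "lens": "wide", "role": "large"} (field names are ours)."""
    if mode == "picture_in_picture":
        # S110: always take the large-window picture (FIG. 25b).
        return next(img for img in images if img["role"] == "large")
    # S106: prefer a picture containing the device owner (FIG. 23b).
    owners = [img for img in images if img.get("is_owner")]
    if owners:
        return owners[0]
    # S103: if exactly one picture contains a person, take it (FIG. 18).
    with_person = [img for img in images if img.get("persons", 0) > 0]
    if len(with_person) == 1:
        return with_person[0]
    # S107/S108: differing lens parameters -> the designated lens,
    # e.g. the wide-angle picture (FIG. 20).
    if len({img["lens"] for img in images}) > 1:
        return next(img for img in images if img["lens"] == "wide")
    # S109: same lens parameters -> preset rule, e.g. the lower picture
    # (FIG. 22); here we assume `images` is ordered top to bottom.
    return images[-1]
```

Each branch mirrors one of the example scenes; a production version would of course take detection results rather than pre-filled dictionaries.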
Illustratively, the camera application may output the results of the selection, i.e., the image frame 2505 and the pre-selected image 2508, to the gallery application. The gallery application may generate a thumbnail or album cover based on the image frame 2505 or the pre-selected image 2508 according to the flow shown in FIG. 8 b.
Illustratively, the gallery application performs S202 in FIG. 8b, i.e., determines whether the image frame 2505 is of a similar size to a thumbnail or album cover.
FIG. 26a is a schematic diagram illustrating exemplary detection of whether an image frame is similar in size to the thumbnail. Referring to FIG. 26a, the gallery application illustratively compares the crop box 2601 corresponding to the thumbnail with the image frame 2505. Illustratively, the crop box 2601 has the same preset aspect ratio as the thumbnail, namely 1:1. In this embodiment, the gallery application aligns the geometric center of the crop box 2601 with that of the image frame 2505 and scales the crop box 2601 proportionally (keeping the 1:1 ratio) so that its four borders do not exceed the four borders of the image frame 2505; that is, the borders of the crop box 2601 may overlap the borders of the image frame 2505 or lie within them. As shown in FIG. 26a, the left and right borders of the crop box 2601 overlap the left and right borders of the image frame 2505, and the top and bottom borders of the crop box 2601 lie within the image frame 2505.
Referring to FIG. 26a, in this embodiment, the ratio of the overlapping area of the image frame 2505 and the crop box 2601 to the area of the image frame 2505 is less than 80% (the preset similarity threshold); that is, the image frame 2505 is not similar in size to the thumbnail.
Illustratively, after the gallery application determines that the image frame 2505 is not similar in size to the thumbnail, S203 in FIG. 8b may be performed. When the gallery application detects that the current shooting mode is the picture-in-picture shooting mode, it performs S205 in FIG. 8b, that is, it further detects whether the small image would be cropped.
With continued reference to FIG. 26a, the gallery application may illustratively determine, based on the position of the crop box 2601 in the image frame 2505, that the small image 2507 would be cropped. The gallery application therefore executes S206, that is, it generates the thumbnail based on the preselected image 2508.
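The small-image check of S205 (FIGS. 26a and 29a) is a rectangle-containment test: the small picture is cropped exactly when part of it falls outside the crop box. A minimal sketch; the rectangle layout `(left, top, width, height)` and the example coordinates are our own assumptions:

```python
def small_image_cropped(crop_box, small_rect):
    """S205 sketch: would the picture-in-picture small image be cut by
    the crop box?  Rectangles are (left, top, width, height); returns
    True when any part of the small image lies outside the crop box."""
    cl, ct, cw, ch = crop_box
    sl, st, sw, sh = small_rect
    inside = (sl >= cl and st >= ct and
              sl + sw <= cl + cw and st + sh <= ct + ch)
    return not inside

# FIG. 29a-style case: small image fully inside the cover crop box,
# so the cover is generated from the whole target image frame (S207).
assert not small_image_cropped((0, 100, 1920, 880), (100, 150, 300, 200))
# FIG. 26a-style case: small image sticks out of a centered 1:1
# thumbnail box, so the preselected (large) image is used instead (S206).
assert small_image_cropped((420, 0, 1080, 1080), (0, 50, 300, 200))
```

With two or more floating windows, the same test would simply be applied to each small rectangle in turn.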
FIG. 26b is a schematic diagram of exemplary thumbnail generation. Referring to FIG. 26b, the gallery application may crop the preselected image 2508 based on the crop box 2601 corresponding to the thumbnail. For example, the gallery application may align the geometric center of the crop box 2601 with that of the preselected image 2508 and scale the crop box 2601 until its four borders do not exceed the four borders of the preselected image 2508. For example, the left and right borders of the crop box 2601 overlap the left and right borders of the preselected image 2508, and its top and bottom borders lie within the preselected image 2508.
Continuing with FIG. 26b, after cropping the preselected image 2508 based on the crop box 2601, the gallery application illustratively obtains a thumbnail 2602. The gallery application displays a picture-in-picture logo 2603 on the thumbnail 2602, indicating that the video corresponding to the thumbnail 2602 was captured in the picture-in-picture shooting mode. It should be noted that the pattern, size, and position of the picture-in-picture logo 2603 in this embodiment are only illustrative examples; in other embodiments, they may be set according to actual requirements, and this application is not limited thereto.
Continuing with FIG. 26b, when the mobile phone receives a corresponding user operation, it illustratively launches the gallery application. A thumbnail 2605 may be displayed in the gallery interface 2604, with the picture-in-picture logo shown on it. Optionally, the thumbnail 2605 is the thumbnail 2602 reduced by 50%. For details not described here, refer to the foregoing description.
FIGS. 27a to 27b are schematic views of exemplary album-cover generation. Referring to FIG. 27a, the gallery application may compare the crop box 2701 corresponding to the album cover with the image frame 2505 to determine whether the image frame is similar in size to the album cover. Optionally, the ratio (length:height) of the crop box 2701 is 3. For details, refer to the description of FIG. 13a, which is not repeated here.
Illustratively, the image frame 2505 is not similar in size to the album cover, so the gallery application further detects whether the small image 2507 would be cropped.
With continued reference to FIG. 27a, the gallery application may illustratively determine, based on the position of the crop box 2701 in the image frame 2505, that the small image 2507 would be cropped. The gallery application therefore executes S206, that is, it generates the album cover based on the preselected image 2508.
FIG. 27b is a schematic diagram of exemplary album-cover generation. Referring to FIG. 27b, the gallery application may illustratively crop the preselected image 2508 based on the crop box 2701 corresponding to the album cover. For example, the gallery application may align the geometric center of the crop box 2701 with that of the preselected image 2508 and scale the crop box 2701 until its four borders do not exceed the four borders of the preselected image 2508. For details, refer to the foregoing description, which is not repeated here.
Continuing with FIG. 27b, after cropping the preselected image 2508 based on the crop box 2701, the gallery application illustratively obtains an album cover 2702. The gallery application displays a picture-in-picture logo 2703 on the album cover 2702 to indicate that the video corresponding to the album cover 2702 was captured in the picture-in-picture shooting mode.
Continuing with FIG. 27b, when the mobile phone receives a corresponding user operation, it illustratively launches the gallery application. An album cover 2705 may be displayed in the album interface 2704, with the picture-in-picture logo shown on it. Optionally, the album cover 2705 is the album cover 2702 reduced by 50%. For details not described here, refer to the foregoing description.
FIGS. 28a to 28b are schematic views of exemplary album-cover generation. Referring to FIG. 28a, the gallery application may compare the crop box 2801 corresponding to the album cover with the image frame 2505 to determine whether the image frame is similar in size to the album cover. Optionally, the ratio (length:height) of the crop box 2801 is 4. For details, refer to the description of FIG. 13a, which is not repeated here.
Illustratively, the image frame 2505 is not similar in size to the album cover, so the gallery application further detects whether the small image 2507 would be cropped.
With continued reference to FIG. 28a, the gallery application may illustratively determine, based on the position of the crop box 2801 in the image frame 2505, that the small image 2507 would be cropped. The gallery application therefore executes S206, that is, it generates the album cover based on the preselected image 2508.
FIG. 28b is a schematic diagram of exemplary album-cover generation. Referring to FIG. 28b, the gallery application may illustratively crop the preselected image 2508 based on the crop box 2801 corresponding to the album cover. For example, the gallery application may align the geometric center of the crop box 2801 with that of the preselected image 2508 and scale the crop box 2801 until its four borders do not exceed the four borders of the preselected image 2508. For details, refer to the foregoing description, which is not repeated here.
Continuing with FIG. 28b, after cropping the preselected image 2508 based on the crop box 2801, the gallery application illustratively obtains an album cover 2802. The gallery application displays a picture-in-picture logo 2803 on the album cover 2802, indicating that the video corresponding to the album cover 2802 was captured in the picture-in-picture shooting mode.
Continuing with FIG. 28b, when the mobile phone receives a corresponding user operation, it illustratively launches the gallery application. An album cover 2805 may be displayed in the album interface 2804, with the picture-in-picture logo shown on it. Optionally, the album cover 2805 is the album cover 2802 reduced by 50%. For details not described here, refer to the foregoing description.
It should be noted that the size and position of the small image in the image frame may be set according to actual requirements; this application is not limited thereto. It should also be noted that, as the position and size of the small image change, in some embodiments the small image is not cropped. This is described below with a specific example. Referring to FIG. 29a, an exemplary image frame 2901 includes a large image 2902 and a small image 2903. Take the scenario of generating an album cover with the preset ratio of 3 as an example. The gallery application detects that the shooting mode is the picture-in-picture mode and further detects whether the small image would be cropped. As shown in FIG. 29a, the small image 2903 lies within the crop box 2904, that is, the small image 2903 would not be cropped. Accordingly, the gallery application performs S207 in FIG. 8b, that is, it generates the cover based on the target image frame.
FIG. 29b is a schematic diagram of exemplary album-cover generation. Referring to FIG. 29b, the gallery application may crop the image frame 2901 based on the crop box 2904 corresponding to the album cover. For example, the gallery application may align the geometric center of the crop box 2904 with that of the image frame 2901 and scale the crop box 2904 until its four borders do not exceed the four borders of the image frame 2901. For details, refer to the foregoing description, which is not repeated here.
Continuing with FIG. 29b, after cropping the image frame 2901 based on the crop box 2904, the gallery application illustratively obtains an album cover 2905. The album cover 2905 includes the small image 2903.
Continuing with FIG. 29b, when the mobile phone receives a corresponding user operation, it illustratively launches the gallery application. An album cover 2907 may be displayed in the album interface 2906. Optionally, the album cover 2907 is the album cover 2905 reduced by 50%. For details not described here, refer to the foregoing description.
In a possible implementation, referring to (1) of FIG. 30, the thumbnail generated in FIG. 13b of this embodiment is taken as an example. That is, the thumbnail 3001 is generated based on an image frame captured in the dual-view shooting mode; specifically, one of the two dual-view images, which may be the main image shown in FIG. 13b, is displayed in the thumbnail 3001. A multi-view logo 3002 is also displayed on the thumbnail 3001. The user may click the multi-view logo 3002. Referring to (2) of FIG. 30, the mobile phone may, in response to the received user operation, switch the image displayed in the thumbnail 3001 to the other of the dual-view images, for example, the image 1102 shown in FIG. 11. It should be noted that the above user operation is only an illustrative example; in other embodiments, the user may slide left (or right) on the thumbnail 3001, and the mobile phone, in response to the received user operation, displays the other of the dual-view images, that is, the image 1102, in the thumbnail 3001.
It will be appreciated that, to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. In combination with the example algorithm steps described for the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In one example, FIG. 31 shows a schematic block diagram of an apparatus 3100 according to an embodiment of this application. The apparatus 3100 may include a processor 3101 and transceiver pins 3102, and optionally a memory 3103.
The various components of device 3100 are coupled together by bus 3104, where bus 3104 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, the various busses are referred to in the figures as the bus 3104.
Optionally, memory 3103 may be used for the instructions in the foregoing method embodiments. The processor 3101 may be used to execute instructions in the memory 3103 and to control the receive pin to receive signals and the transmit pin to transmit signals.
The apparatus 3100 may be an electronic device or a chip of an electronic device in the above-described method embodiments.
All relevant contents of the steps related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The present embodiment also provides a computer storage medium, where computer instructions are stored, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the display method in the above embodiment.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to execute the above related steps to implement the display method in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the display method in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, the above division of functional modules is merely used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative: the division into modules or units is only one type of logical functional division, and other divisions may be used in practice; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The various embodiments of the present application, and the features within any one embodiment, can be freely combined with one another. Any such combination is within the scope of the present application.
The integrated unit, if implemented as a software functional unit and sold or used as a separate product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
The steps of a method or algorithm described in connection with the disclosure of the embodiments of the application may be embodied in hardware or in software instructions executed by a processor. The software instructions may be composed of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a Compact Disc Read-Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.

Claims (44)

1. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the steps of:
responding to the received user operation, and starting recording in a multi-view mode;
acquiring a video recorded in the multi-view mode, wherein the video comprises a plurality of image frames; each of the plurality of image frames comprises a plurality of pictures collected by a plurality of cameras of the electronic device;
selecting a target image frame from the plurality of image frames;
selecting a target picture from a plurality of pictures in the target image frame;
generating a target image based on the target picture;
and displaying the target image on a target interface, wherein the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
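The overall pipeline recited in claim 1 (record in multi-view mode, pick a target frame, pick a target picture within it, and use that picture as the thumbnail or album cover) can be sketched as follows. This is an illustrative sketch, not the claimed implementation: the `Picture`/`ImageFrame` data model and the person-count metadata are assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Picture:
    camera_id: int        # which of the device's cameras produced this picture
    person_count: int     # persons detected in the picture (assumed metadata)

@dataclass
class ImageFrame:
    pictures: List[Picture]   # one picture per camera in a multi-view frame

def select_target_frame(frames: List[ImageFrame]) -> ImageFrame:
    # Per claim 10, the first image frame of the video may serve as the target frame.
    return frames[0]

def select_target_picture(frame: ImageFrame) -> Picture:
    # Simplified stand-in for claims 2-5: prefer the picture showing the most persons.
    return max(frame.pictures, key=lambda p: p.person_count)

def thumbnail_source(frames: List[ImageFrame]) -> Picture:
    # The selected picture becomes the basis of the gallery thumbnail / album cover.
    return select_target_picture(select_target_frame(frames))
```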
2. The electronic device of claim 1, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and selecting the target picture according to the content of each picture in the plurality of pictures.
3. The electronic device of claim 2, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
detecting the number of persons contained in the plurality of pictures;
and if the number of the people contained in the plurality of pictures is detected to be zero, selecting the target picture according to the shooting parameters of each picture in the plurality of pictures.
4. The electronic device of claim 3, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
if the number of persons contained in the plurality of pictures is detected to be M, detecting whether the M persons include the owner of the electronic equipment, M being an integer greater than 1;
if the M persons include the owner of the electronic equipment, determining that a picture containing the owner of the electronic equipment is the target picture;
and if the M persons are detected not to include the owner of the electronic equipment, selecting the target picture according to the shooting parameters of each picture in the plurality of pictures.
5. The electronic device of claim 4, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and if the number of the people contained in the plurality of pictures is detected to be one, determining the picture containing the people to be the target picture.
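The decision procedure of claims 3 to 5 (zero persons: fall back to shooting parameters; exactly one person: pick that person's picture; M > 1 persons: prefer a picture containing the device owner, otherwise fall back to shooting parameters) can be sketched as below. The dict keys `persons` and `has_owner`, and the `param_score` scoring callback, are assumed helpers standing in for the unspecified person-detection and shooting-parameter logic.

```python
from typing import Callable, List

def choose_by_persons(
    pictures: List[dict],
    param_score: Callable[[dict], float],
) -> dict:
    """Pick the target picture following the logic of claims 3-5 (illustrative)."""
    total = sum(p["persons"] for p in pictures)
    if total == 0:
        # Claim 3: no persons in any picture -> select by shooting parameters.
        return max(pictures, key=param_score)
    if total == 1:
        # Claim 5: exactly one person -> the picture containing that person wins.
        return next(p for p in pictures if p["persons"] > 0)
    # Claim 4: M > 1 persons -> prefer a picture containing the owner,
    # otherwise fall back to shooting parameters again.
    for p in pictures:
        if p["has_owner"]:
            return p
    return max(pictures, key=param_score)
```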
6. The electronic device of claim 1, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and cutting the target picture based on the set size to obtain the target image.
7. The electronic device of claim 1, wherein when the target interface is a gallery interface and the target image is a thumbnail image in the gallery interface, the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and responding to the received operation that the user clicks the target image, and displaying the video.
8. The electronic device of claim 1, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
displaying a multi-view identifier on the target image, the multi-view identifier indicating that the video was captured in the multi-view mode.
9. The electronic device of claim 8, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and responding to the received operation that the user clicks the multi-view identifier, and displaying, at the position of the target image, the pictures other than the target picture among the plurality of pictures.
10. The electronic device of claim 1, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
selecting a first image frame in the video as the target image frame;
alternatively,
and selecting any image frame of the first N image frames in the video as the target image frame, wherein N is an integer greater than 1.
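Claim 10's frame choice (the first frame, or any one of the first N frames) might be sketched as follows. The claim does not specify how "any" frame is chosen; scoring the first N candidates (for example, by sharpness) is an assumed refinement, not part of the claim.

```python
from typing import Callable, List, Optional, TypeVar

F = TypeVar("F")

def pick_target_frame(frames: List[F], n: int = 1,
                      score: Optional[Callable[[F], float]] = None) -> F:
    # n == 1 reproduces the "first image frame" branch of claim 10;
    # n > 1 selects among the first N frames, here via an assumed score.
    candidates = frames[:max(1, n)]
    return candidates[0] if score is None else max(candidates, key=score)
```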
11. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the steps of:
displaying a picture-in-picture mode shooting interface in response to the received first user operation, wherein the picture-in-picture mode shooting interface comprises a first window and a second window, and the first window is in the second window; the first window displays a picture acquired by a first camera of the electronic equipment, and the second window displays a picture acquired by a second camera of the electronic equipment;
in response to the received second user operation, starting recording in the picture-in-picture mode;
acquiring a video recorded in the picture-in-picture mode, wherein the video comprises a plurality of image frames; each image frame of the plurality of image frames comprises a first picture and a second picture, the first picture corresponds to the first window, and the second picture corresponds to the second window;
selecting a target image frame from the plurality of image frames;
generating a target image based on a second picture in the target image frame;
and displaying the target image on a target interface, wherein the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
12. The electronic device of claim 11, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and cutting the second picture based on the set size to obtain the target image.
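Claim 12 cuts the second (background) picture down to a set size to obtain the target image. A minimal center-crop over a row-major pixel grid, with the crop origin and pure-Python pixel representation as illustrative assumptions:

```python
from typing import List, Sequence

def center_crop(pixels: Sequence[Sequence[int]],
                out_h: int, out_w: int) -> List[List[int]]:
    # Crop the picture to the set size around its center (assumed placement;
    # the claim only requires cutting based on the set size).
    h, w = len(pixels), len(pixels[0])
    top = max(0, (h - out_h) // 2)
    left = max(0, (w - out_w) // 2)
    return [list(row[left:left + out_w]) for row in pixels[top:top + out_h]]
```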
13. The electronic device of claim 11, wherein when the target interface is a gallery interface and the target image is a thumbnail image in the gallery interface, the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and displaying the video in response to the received operation that the user clicks the target image.
14. The electronic device of claim 11, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
displaying a picture-in-picture identifier on the target image, the picture-in-picture identifier indicating that the video was captured in the picture-in-picture mode.
15. The electronic device of claim 14, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
and responding to the received operation that the user clicks the picture-in-picture identifier, and displaying the first picture at the position of the target image.
16. The electronic device of claim 11, wherein the program instructions, when executed by the processor, cause the electronic device to perform the steps of:
selecting a first image frame in the video as the target image frame;
alternatively,
and selecting any one of the first N image frames in the video as the target image frame, wherein N is an integer greater than 1.
17. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the steps of:
responding to the received user operation, and starting recording in a multi-view mode;
acquiring a video recorded in the multi-view mode, wherein the video comprises a plurality of image frames; each of the plurality of image frames comprises a plurality of pictures collected by a plurality of cameras of the electronic device;
selecting a target image frame from the plurality of image frames;
selecting a target picture from the target image frame;
detecting whether the size of the target image frame meets a preset condition or not;
if the size of the target image frame is detected to meet the preset condition, generating a target image based on the target image frame;
if the size of the target image frame is detected not to meet the preset condition, generating a target image based on the target picture;
and displaying the target image on a target interface, wherein the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
18. The electronic device according to claim 17, wherein the preset condition includes:
the difference between the size of the target image frame and the set size is less than or equal to a set threshold.
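The preset condition of claims 17 and 18 (use the whole frame as the thumbnail when its size is close enough to the set size, otherwise use only the selected target picture) can be sketched as below. Comparing width and height separately against the threshold is an assumption; the claim speaks only of "the difference between the size of the target image frame and the set size".

```python
from typing import Tuple

def pick_thumbnail_basis(frame_size: Tuple[int, int],
                         set_size: Tuple[int, int],
                         threshold: int) -> str:
    # Claim 18's preset condition: size difference within a set threshold.
    dw = abs(frame_size[0] - set_size[0])
    dh = abs(frame_size[1] - set_size[1])
    if dw <= threshold and dh <= threshold:
        return "whole_frame"      # claim 17: generate the image from the frame
    return "target_picture"       # otherwise: generate it from the target picture
```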
19. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the steps of:
responding to a received first user operation, and displaying a picture-in-picture mode shooting interface, wherein the picture-in-picture mode shooting interface comprises a first window and a second window, and the first window is in the second window; the first window displays a picture collected by a first camera of the electronic equipment, and the second window displays a picture collected by a second camera of the electronic equipment;
in response to the received second user operation, starting recording in the picture-in-picture mode;
acquiring a video recorded in the picture-in-picture mode, wherein the video comprises a plurality of image frames; each image frame of the plurality of image frames comprises a first picture and a second picture, the first picture corresponds to the first window, and the second picture corresponds to the second window;
selecting a target image frame from the plurality of image frames;
detecting whether the size of the target image frame meets a preset condition or not;
if the size of the target image frame is detected to meet the preset condition, generating a target image based on the target image frame;
if the size of the target image frame is detected to be not in accordance with the preset condition, generating a target image based on a second picture in the target image frame;
and displaying the target image on a target interface, wherein the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
20. The electronic device of claim 19, wherein the preset condition comprises:
the difference between the size of the target image frame and the set size is less than or equal to a set threshold, and the first picture remains complete when the target image is generated based on the target image frame.
21. A display method, comprising:
the electronic equipment responds to the received user operation and starts recording in a multi-view mode;
the electronic equipment acquires a video recorded in the multi-view mode, wherein the video comprises a plurality of image frames; each of the plurality of image frames comprises a plurality of pictures collected by a plurality of cameras of the electronic device;
the electronic equipment selects a target image frame from the plurality of image frames;
the electronic equipment selects a target picture from a plurality of pictures in the target image frame;
the electronic equipment generates a target image based on the target picture;
the electronic equipment displays the target image on a target interface, the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
22. The method of claim 21, wherein the electronic device selects a target picture from a plurality of pictures in the target image frame, comprising:
and the electronic equipment selects the target picture according to the content of each picture in the plurality of pictures.
23. The method of claim 22, wherein the electronic device selects the target screen according to each screen content of the plurality of screens, comprising:
the electronic equipment detects the number of people contained in the plurality of pictures;
and if the number of the people contained in the plurality of pictures is detected to be zero, the electronic equipment selects the target picture according to the shooting parameters of each picture in the plurality of pictures.
24. The method of claim 23, wherein the electronic device selects the target screen according to each screen content of the plurality of screens, comprising:
if the number of persons contained in the plurality of pictures is detected to be M, the electronic equipment detects whether the M persons include the owner of the electronic equipment, M being an integer greater than 1;
if the M persons include the owner of the electronic equipment, the electronic equipment determines that a picture containing the owner of the electronic equipment is the target picture;
and if the M persons are detected not to include the owner of the electronic equipment, the electronic equipment selects the target picture according to the shooting parameters of each picture in the plurality of pictures.
25. The method of claim 24, wherein the electronic device selecting the target screen according to the content of each screen of the plurality of screens comprises:
if the number of the persons contained in the plurality of pictures is detected to be one, the electronic equipment determines that the picture containing the persons is the target picture.
26. The method of claim 22, wherein the electronic device generates a target image based on the target screen, comprising:
and the electronic equipment cuts the target picture based on the set size to obtain the target image.
27. The method of claim 21, wherein when the target interface is a gallery interface and the target image is a thumbnail image in the gallery interface, the method further comprises:
and the electronic equipment responds to the received operation of clicking the target image by the user and displays the video.
28. The method of claim 21, further comprising:
the electronic equipment displays a multi-view identifier on the target image, wherein the multi-view identifier is used for indicating that the video is shot in the multi-view mode.
29. The method of claim 28, further comprising:
and the electronic equipment, in response to the received operation of the user clicking the multi-view identifier, displays, at the position of the target image, the pictures other than the target picture among the plurality of pictures.
30. The method of claim 21, wherein the electronic device selects a target image frame from the plurality of image frames, comprising:
selecting a first image frame in the video as the target image frame;
alternatively,
and selecting any image frame of the first N image frames in the video as the target image frame, wherein N is an integer greater than 1.
31. A display method, comprising:
the electronic equipment responds to the received first user operation and displays a picture-in-picture mode shooting interface, wherein the picture-in-picture mode shooting interface comprises a first window and a second window, and the first window is in the second window; the first window displays a picture acquired by a first camera of the electronic equipment, and the second window displays a picture acquired by a second camera of the electronic equipment;
the electronic equipment responds to the received second user operation and starts recording in the picture-in-picture mode;
the electronic equipment acquires a video recorded in the picture-in-picture mode, wherein the video comprises a plurality of image frames; each image frame of the plurality of image frames comprises a first picture and a second picture, the first picture corresponds to the first window, and the second picture corresponds to the second window;
the electronic equipment selects a target image frame from the plurality of image frames;
the electronic equipment generates a target image based on a second picture in the target image frame;
the electronic equipment displays the target image on a target interface, the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
32. The method of claim 31, wherein the electronic device generates a target image based on a second picture in the target image frame, comprising:
and the electronic equipment cuts the second picture based on the set size to obtain the target image.
33. The method of claim 31, wherein when the target interface is a gallery interface and the target image is a thumbnail image in the gallery interface, the method further comprises:
and the electronic equipment responds to the received operation of clicking the target image by the user and displays the video.
34. The method of claim 31, further comprising:
the electronic device displays a picture-in-picture indication on the target image, the picture-in-picture indication indicating that the video was captured in the picture-in-picture mode.
35. The method of claim 34, further comprising:
the electronic equipment responds to the received operation that the user clicks the picture-in-picture mark, and displays the first picture on the position of the target image.
36. The method of claim 31, wherein the electronic device selects a target image frame from the plurality of image frames, comprising:
selecting a first image frame in the video as the target image frame;
alternatively,
and selecting any image frame of the first N image frames in the video as the target image frame, wherein N is an integer greater than 1.
37. A display method, comprising:
the electronic equipment responds to the received user operation and starts recording in a multi-view mode;
the electronic equipment acquires a video recorded in the multi-view mode, wherein the video comprises a plurality of image frames; each of the plurality of image frames comprises a plurality of pictures acquired by a plurality of cameras of the electronic device;
the electronic equipment selects a target image frame from the plurality of image frames;
the electronic equipment selects a target picture from the target image frame;
the electronic equipment detects whether the size of the target image frame meets a preset condition or not;
if the size of the target image frame is detected to meet the preset condition, the electronic equipment generates a target image based on the target image frame;
if the size of the target image frame is detected not to meet the preset condition, the electronic equipment generates a target image based on the target picture;
the electronic equipment displays the target image on a target interface, the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
38. The method according to claim 37, wherein the preset conditions include:
the difference between the size of the target image frame and the set size is less than or equal to a set threshold.
39. A display method, comprising:
the electronic equipment responds to a received first user operation and displays a picture-in-picture mode shooting interface, wherein the picture-in-picture mode shooting interface comprises a first window and a second window, and the first window is in the second window; the first window displays a picture collected by a first camera of the electronic equipment, and the second window displays a picture collected by a second camera of the electronic equipment;
the electronic equipment responds to the received second user operation and starts recording in the picture-in-picture mode;
the electronic equipment acquires a video recorded in the picture-in-picture mode, wherein the video comprises a plurality of image frames; each of the plurality of image frames includes a first picture and a second picture, the first picture corresponding to the first window and the second picture corresponding to the second window;
the electronic equipment selects a target image frame from the plurality of image frames;
the electronic equipment detects whether the size of the target image frame meets a preset condition or not;
if the size of the target image frame is detected to meet the preset condition, the electronic equipment generates a target image based on the target image frame;
if the size of the target image frame is detected to be not in accordance with the preset condition, the electronic equipment generates a target image based on a second picture in the target image frame;
the electronic equipment displays the target image on a target interface, the target interface is a gallery interface or an album interface, and the target image is a thumbnail in the gallery interface or an album cover in the album interface.
40. The method of claim 39, wherein the preset conditions comprise:
the difference between the size of the target image frame and the set size is less than or equal to a set threshold, and the first picture remains complete when the target image is generated based on the target image frame.
41. A chip comprising one or more interface circuits and one or more processors; the interface circuit is configured to receive signals from a memory of an electronic device and to transmit the signals to the processor, the signals including computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the display method of any of claims 21 to 30.
42. A chip comprising one or more interface circuits and one or more processors; the interface circuit is configured to receive signals from a memory of an electronic device and to transmit the signals to the processor, the signals including computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the display method of any of claims 31 to 36.
43. A chip comprising one or more interface circuits and one or more processors; the interface circuit is configured to receive signals from a memory of an electronic device and to transmit the signals to the processor, the signals comprising computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the display method of claim 37 or claim 38.
44. A chip comprising one or more interface circuits and one or more processors; the interface circuit is configured to receive signals from a memory of an electronic device and to transmit the signals to the processor, the signals including computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the display method of claim 39 or claim 40.
CN202110753284.1A 2021-07-02 2021-07-02 Display method and electronic equipment Active CN114466101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110753284.1A CN114466101B (en) 2021-07-02 2021-07-02 Display method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114466101A CN114466101A (en) 2022-05-10
CN114466101B (en) 2022-11-29

Family

ID=81405024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110753284.1A Active CN114466101B (en) 2021-07-02 2021-07-02 Display method and electronic equipment

Country Status (1)

Country Link
CN (1) CN114466101B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147524B (en) * 2022-09-02 2023-01-17 荣耀终端有限公司 3D animation generation method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008283347A (en) * 2007-05-09 2008-11-20 Nikon Corp Electronic apparatus and electronic camera
CN106028120A (en) * 2016-06-27 2016-10-12 徐文波 Method and device for performing video direction in mobile terminal
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4368819B2 (en) * 2005-03-30 2009-11-18 株式会社日立製作所 Summary playback apparatus and control method for summary playback apparatus



Similar Documents

Publication Publication Date Title
CN110401766B (en) Shooting method and terminal
CN110072070B (en) Multi-channel video recording method, equipment and medium
CN109951633B (en) Method for shooting moon and electronic equipment
WO2020073959A1 (en) Image capturing method, and electronic device
CN110506416B (en) Method for switching camera by terminal and terminal
CN110231905B (en) Screen capturing method and electronic equipment
CN113556461B (en) Image processing method, electronic equipment and computer readable storage medium
CN111010506A (en) Shooting method and electronic equipment
CN113489894B (en) Shooting method and terminal in long-focus scene
CN113747048B (en) Image content removing method and related device
CN113194242B (en) Shooting method in long-focus scene and mobile terminal
CN113891009B (en) Exposure adjusting method and related equipment
CN113709354A (en) Shooting method and electronic equipment
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN114466101B (en) Display method and electronic equipment
CN113556466A (en) Focusing method and electronic equipment
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN115686182B (en) Processing method of augmented reality video and electronic equipment
CN116939093A (en) Shooting method and electronic equipment
CN115145514A (en) Method, electronic device and system for splicing contents
CN115802144A (en) Video shooting method and related equipment
CN115268742A (en) Method for generating cover and electronic equipment
CN115775400A (en) Image processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant